In Euclidean geometry, an equiangular polygon is a polygon whose vertex angles are all equal, and an equilateral polygon is one whose sides are all of equal length. A polygon is a closed plane figure with three or more straight sides; if both its sides and its angles are equal it is a regular polygon. For triangles the two notions coincide: because the interior angles of any triangle add up to 180°, a triangle with three equal angles has each angle equal to 60°, and the sides opposite equal angles must themselves be equal. Every equilateral triangle is therefore equiangular, every equiangular triangle is equilateral, and the only equiangular triangle is the equilateral triangle. The size of the angles does not depend on the length of the sides. An equilateral triangle is also an isosceles triangle, since an isosceles triangle requires only two equal sides, and its base angles (the angles opposite the equal sides) are equal.
For an equilateral triangle of side $a$, the perimeter is $p = 3a$ and the area is $A = \frac{\sqrt{3}}{4}a^2$. These formulas can be derived with the Pythagorean theorem: any of the notable lines (median, angle bisector, altitude or perpendicular bisector), which all coincide in an equilateral triangle, divides it into two congruent 30–60–90 right triangles, giving a height $h = \frac{\sqrt{3}}{2}a$. The notable points (centroid, incentre, circumcentre and orthocentre) also coincide in a single point, whose distance to a vertex is twice its distance to the base. The equilateral triangle is the most symmetrical triangle, with three lines of reflection and rotational symmetry of order 3 about its centre; its symmetry group is the dihedral group $D_3$ of order 6.
For polygons with more than three sides, equiangular and equilateral are independent properties. A rectangle is equiangular but not necessarily equilateral; rectangles, including the square, are the only equiangular quadrilaterals, and the square is the special case of an equilateral quadrilateral that is also equiangular. For a convex equiangular $n$-gon each interior angle is $180(1 - 2/n)$° (the equiangular polygon theorem), and Viviani's theorem holds for equiangular polygons. An equilateral pentagon has five sides of equal length, but its vertex angles can take a range of values, so it forms a whole family of pentagons; the regular pentagon is the unique member that is also equiangular, with each angle measuring 108°.
Two typical exercises illustrate these facts. (1) An equilateral triangle has one side that measures 5 in; what is the size of the angle opposite that side? Since an equilateral triangle is equiangular, the angle is 60° regardless of the side length. (2) In the figure the height BH of equilateral triangle ABC measures √3 m; calculate the perimeter and area of the region ABC (a short worked solution follows below).
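Worked solution for the second exercise, using only the height, perimeter and area formulas stated above:
$$h = \frac{\sqrt{3}}{2}a = \sqrt{3}\ \text{m} \;\Rightarrow\; a = \frac{2h}{\sqrt{3}} = 2\ \text{m}$$
$$p = 3a = 6\ \text{m}, \qquad A = \frac{\sqrt{3}}{4}a^{2} = \frac{\sqrt{3}}{4}\cdot 4 = \sqrt{3}\ \text{m}^{2} \approx 1.73\ \text{m}^{2}$$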
Assessment of multispectral and hyperspectral imaging systems for digitisation of a Russian icon
Lindsay W. MacDonald, Tatiana Vitorino, Marcello Picollo, Ruven Pillay, Michał Obarzanowski, Joanna Sobczyk, Sérgio Nascimento & João Linhares
In a study of multispectral and hyperspectral reflectance imaging, a Round Robin Test assessed the performance of different systems for the spectral digitisation of artworks. A Russian icon, mass-produced in Moscow in 1899, was digitised by ten institutions around Europe. The image quality was assessed by observers, and the reflectance spectra at selected points were reconstructed to characterise the icon's colourants and to obtain a quantitative estimate of accuracy. The differing spatial resolutions of the systems affected their ability to resolve fine details in the printed pattern. There was a surprisingly wide variation in the quality of imagery, caused by unwanted reflections from both glossy painted and metallic gold areas of the icon's surface. Specular reflection also degraded the accuracy of the reconstructed reflectance spectrum in some places, indicating the importance of control over the illumination geometry. Some devices that gave excellent results for matte colour charts proved to have poor performance for this demanding test object. There is a need for adoption of standards for digitising cultural heritage objects to achieve greater consistency of system performance and image quality.
Scientific examination and accurate digital photographic recording of cultural heritage (CH) artworks are key for their contextualisation, documentation and conservation. Non-contact scientific examination may enable the identification of an artist's materials and offers insight into the construction of an object without affecting its integrity. The information obtained expands knowledge about the production technology and visual appearance of the artwork, and the artist's working methods. Precise and complete digital archival images also provide a definitive record of the object at the time of acquisition. If the resolution is high enough, the digital record also constitutes a means for the documentation and accessibility of CH [1,2,3]. For all of these purposes, accuracy and high quality are important aspects of the data acquired. Records should be true representations of the original artwork, without anything added or taken away.
Acquisition of digital images and associated data serves as a good basis for repeatable research, across both time and place of different institutions, allowing the exchange and safe, non-destructive analysis of obtained results. Digital techniques enable a better and more precise description of all parameters during the acquisition and subsequent processing, including calibration and correction. Applications of spectral image acquisition systems include: imaging in broad bands by multispectral systems; quantification of reflectance spectra at each point by hyperspectral systems; recording of a data cube enabling the reconstruction of images (visible RGB, infrared, greyscale), and statistical processing for mapping, land usage, crop distributions, etc. Many types of spectral image acquisition systems have been applied for the study and digital documentation of cultural heritage artworks. Depending on the device and workflow, however, the results obtained may vary considerably in quality.
Colour and Space in Cultural Heritage (http://www.COSCH.info) was a European network of researchers, conservators and museum professionals, supported by the COST programme. This 4-year trans-domain Action (TD1201, 2013–2016) explored high-resolution optical techniques, defining good practice and open standards for digitisation and documentation of cultural objects. It was focused especially on practitioners of heritage science, i.e. technical staff with high levels of expertise in laboratories and university research facilities, who were associated with national museums.
The primary objective of COSCH Working Group 1 (WG1) was the "identification, characterisation and testing of spectral imaging techniques in the visible and near infrared field". In this context, a Round Robin Test (RRT) exercise was conducted to compare the characteristics and performance of different spectral imaging systems. Five test objects were analysed at 21 institutions, each with its own instrumentation and workflow. This article is focused on one of the RRT objects by qualitative comparison of the spectral imaging data obtained from ten different imaging systems. The multiple datasets were compared through visual inspection of both global and local areas, and the effectiveness of each system for dealing with this type of object was subjectively assessed. In addition, a reconstruction of the reflectance spectrum at selected points was undertaken. Although several systems in the complete RRT recorded infrared spectral reflectance of up to 2500 nm, the ten systems reported in this article were limited to the visible and near infrared ranges, with a maximum wavelength of 1000 nm.
Test object: a Russian icon
One of the RRT objects, the subject of this article, was a polychrome Russian icon from the late 19th century. Icons constitute a form of religious art, with stylised sacred images of saints, Christ and the Virgin Mary, in which the figures often appear timeless and static against golden backgrounds. The COSCH test object, hereafter described as the Russian icon, is cuboidal with dimensions 267 mm × 222 mm × 23 mm. It was purchased through eBay in January 2014 from a dealer in Tallinn, Estonia. It depicts the traditional subject "Virgin of Kazan" but is printed by chromolithography, not painted by hand, onto a substrate of tinned steel. It gives the impression of the lid of an old biscuit tin, which has been nailed around the edges onto a wooden support. The surface is slightly convex, and its condition is marred by many tiny spots of rust on the underlying steel showing through the printed pattern. The icon was manufactured in Moscow in 1899 by the firm Zhako and Bonaker, whose speciality was making containers for shoe polish. The owners had invited master-painters from Mstera, a village in Vladimir Province, who were experts in the ancient techniques and styles of icon painting, to produce new designs. This capitalistic approach to mass-production had an adverse effect on the craft and artistry of icon painting, but the iconographic motifs, combined with surrogate gold (samovarnogo) and false jewels, made an indelible impression on the mass public, who purchased the products in their millions [4] (Fig. 1).
Photograph of the Russian icon, manufactured in 1899 by the firm Zhako and Bonaker (Moscow), taken by a Nikon D200 camera on a copystand under tungsten illumination (for which see Fig. 13)
The Russian icon was chosen as a test object for the study because of its particular physical and material characteristics. Instead of a matte finish, it has a medium gloss on the painted areas and a shiny metallic finish on the gold. There is a wide range of tones and colours, including skin tones and a pictorial realism to the figures, which make it suitable for image quality assessment. The fine spatial detail in both the half-tone colour separations and the filigree gold decoration are appropriate for testing the optical resolving power of an imaging system. The surface is flat, almost planar, with virtually no 3D relief, and it has a convenient size and format. Because it is a common mass-produced object, rather than a fragile artwork, it can be handled freely by researchers in the laboratory without gloves. It turned out to be a very demanding test object, revealing several critical aspects of system performance.
Spectral reflectance image capture
Spectral imaging systems, which scan the entire surface of an artwork without physical contact, enable spectroscopic information to be obtained as an accurate digital record. They have increasingly been included in the range of analytical techniques available for the conservation and study of CH [5,6,7,8]. Spectral imaging devices acquire images in many different wavebands, enabling the reflectance spectrum to be sampled at each pixel and hence the colorimetric representation of the image to be calculated for any specified spectrum of visible illumination. Note that in this study we are dealing with multi/hyperspectral imaging based on directional or diffuse reflectance of illumination from a surface as a function of wavelength. As an image acquisition modality, this is distinct from X-ray fluorescence scanning to obtain an XRF spectrum at every single point on an object, which could also be claimed as a type of hyperspectral imaging.
An imaging system is commonly considered to be multispectral if the number of spectral bands is greater than 3 and less than 20, for which the working bandwidth is between 10 and 50 nm. In turn, systems with hundreds of contiguous bands with a bandwidth less than 5 nm are considered hyperspectral [9]. The definitions of multispectral and hyperspectral are very author-dependent. Goetz argues that the term multispectral should be used if no contiguous bands are acquired, irrespective of the range of spectral sensitivity of the equipment, while hyperspectral should apply if contiguous bands are acquired [10]. With this definition in mind, all of the equipment used in the present study may be considered as hyperspectral, some limited to the visible range and others covering the Visible + NIR range. Recently the new term ultraspectral has been proposed for systems where there are thousands of bands with a bandwidth of as little as 0.1 nm [11].
All these image acquisition devices generate datasets with three dimensions (two spatial and one spectral), commonly represented as a data-cube [12]. Their performance, however, is dependent on the many components of the imaging system, including illumination, optics, filters, dispersive gratings, sensor and signal processing, and also on operator choices relating to object positioning, calibration, file format and metadata [13]. Usually, each institution has its own unique set-up and specific workflow, which are dependent on the working principles of the acquiring system, and hence can significantly affect the information acquired for the same object [14]. As an example, the standard procedure in the National Museum in Krakow, Poland, involves a series of tasks requiring manual adjustment and judgement, as follows:
As there is no set working distance, first the area to be scanned is defined, bearing in mind that the larger the frame the coarser the spatial sampling.
Focusing of the optics to obtain a sharp image.
Placement of lights, setting the proper distance and angle (usually 45°).
Setting exposure level to obtain maximum signal strength without over-exposure.
Regulating movement speed of the camera stage to achieve true 1:1 geometric scale.
Adjusting frame rate and sampling time for data acquisition.
Recording the image.
Recording data for image normalisation: white target scan and dark image scan.
Applying a normalisation algorithm to the image data.
Image data is saved in a standard format.
Entry of metadata relating to image.
This specificity not only makes it difficult to compare data from different systems, as it can influence the accuracy and reliability of the recorded information, but also causes some systems to be inappropriate for particular objects. Spectral imaging as a method of documentary recording of cultural artefacts would be much more useful if standardised procedures could be established for a wide range of systems, users and materials.
Various earlier projects have contributed to the development of spectral imaging technology for the CH sector and have succeeded in designing high-performance hardware and software solutions for accurate image acquisition and processing. These commenced in the early 1990s with the VASARI project at The National Gallery, London, which achieved accurate high-resolution colorimetric images of paintings using a multispectral scanning system with seven filters in the visible region [15, 16]. The CRISATEL project later extended the work into the NIR region and applied basic spectroscopy techniques to the results [17]. Advances in technology made hyperspectral imaging possible in the 2000s [18]. The use of 'pushbroom' techniques was pioneered by IFAC-CNR in Florence [19, 20] and by the National Gallery of Art in Washington [21]. Simple photographic methods with transmission filters applied to either the camera or the illumination remain viable, however, and may be considered as the technological baseline against which more sophisticated systems can be compared [22].
Recent applications of hyperspectral imaging in cultural heritage include two Italian illuminated manuscripts, where the characteristics of the spectral reflectance signal in the visible range were used to identify pigments by comparing the obtained spectra with those of a reference library of medieval pigments [23]. In another study, the analytical suitability of two different hyperspectral imaging systems was evaluated for Goya's paintings, the first employing a "push-broom" system and the second a "mirror-scanning" system. The main pigments present were identified by imaging reflection spectroscopy [24].
Qualitative assessment of image quality
The research question within the present study was: to what extent do the operating principles and technologies of a range of spectral image capture systems affect the image quality achieved in the datasets?
Spectral image quality is influenced by a number of factors [25] and understanding their role and how different devices are used in the digital documentation workflow will help to define better procedures for acquisition and processing. Spectral image recording systems are first and foremost imaging devices, and so can be compared against the performance of more familiar digital cameras and scanners. The quality of the images acquired can be assessed by the well-established methods of visual scaling and psychophysics, with observers judging each image either in isolation or side-by-side with a reference image [26]. Various attributes of image quality can be distinguished and scaled, including tonal rendering, colour, sharpness, naturalness, freedom from defects, etc.
Image datasets from ten different systems were compared (Table 1). Four were pushbroom hyperspectral devices, three were multispectral systems based on liquid crystal tunable filter (LCTF), and three were Nikon cameras with external narrow-band transmission filters. The illumination system was different in each case. The institution in Spain carried out measurements with two different systems (D and J). Datasets from France, Italy, Poland and Cyprus were shared as file-cubes. All the others were provided as sets of single TIFF images or as Matlab data files. Because some systems are commercially available while others are custom-built and upgradable and flexible, it is difficult to compare them directly by specification. Nevertheless, the available information about the type of system, working bandwidth and spatial resolution is summarised in Table 1. The spatial resolution is calculated as the number of pixels across the width of the icon in the digitised image, divided by its physical width of 222 mm.
Table 1 Participating institutions and characteristics of their systems
Colour images were initially reconstructed from each dataset using the ENVI 4.7 software. The three wavebands nearest to 605, 530 and 450 nm were chosen for the red, green and blue (RGB) channels respectively to produce a 'false colour' image. Even though the responsivities of these channels are dependent on their bandwidths and peak wavelength, and are not equivalent to the spectral sensitivities of the eye's photoreceptors nor to the primaries of the sRGB standard display [27], they offered a convenient means for qualitative comparison between the disparate imaging systems. Given sufficient time and resources, alternative methods would have been: (a) to make a weighted sum of the spectral bands to approximate the display RGB primaries; or (b) to reconstruct the visible reflectance spectrum for each pixel and then to calculate the sRGB image via the CIE tristimulus values (see Fig. 9 below).
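A minimal sketch of this band-selection step outside ENVI, assuming the data-cube has already been loaded as a NumPy array of shape (rows, columns, bands) together with a list of band-centre wavelengths; the percentile stretch is an arbitrary display choice for illustration, not part of the published workflow.

```python
import numpy as np

def false_colour_rgb(cube, wavelengths, targets=(605.0, 530.0, 450.0)):
    """Build a quick-look RGB composite from a (rows, cols, bands) reflectance cube.

    cube        : NumPy array of reflectance or normalised counts, shape (R, C, B)
    wavelengths : 1-D array of band-centre wavelengths in nm, length B
    targets     : wavelengths (nm) used for the R, G and B display channels
    """
    wavelengths = np.asarray(wavelengths, dtype=float)
    # Index of the band whose centre wavelength is nearest to each target
    idx = [int(np.argmin(np.abs(wavelengths - t))) for t in targets]
    rgb = cube[:, :, idx].astype(float)
    # Simple per-channel stretch to 0-1 for display (not colorimetric)
    lo, hi = np.percentile(rgb, (1, 99), axis=(0, 1))
    rgb = np.clip((rgb - lo) / (hi - lo + 1e-9), 0.0, 1.0)
    return (rgb * 255).astype(np.uint8)

# Hypothetical usage, with cube and band_centres loaded elsewhere
# (e.g. with the Spectral Python package):
# rgb = false_colour_rgb(cube, band_centres)
```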
A group of ten observers, five male and five female, with ages ranging from 20 to 50, all with normal colour vision, was chosen. They were asked to evaluate the quality of each of the ten reconstructed RGB images using a five-step psychometric (Likert-type) scale from 1 (completely disagree) to 5 (completely agree), for each of the following parameters: uniformity of illumination, contrast, sharpness, geometric distortion, visibility of fine spatial details, and colour fidelity. The participants were also asked whether they saw any defects in each image, and to evaluate the overall quality of the images on a five-step scale from 1 (very bad) to 5 (very good). Finally, they were asked to rank the ten images from most preferred to least preferred. As a reference image, the reconstructed RGB image from the dataset from Norway was chosen, as it was considered to lie somewhere between the best and worst images of the whole set. When shown to the observers, each of the other nine test images was displayed alongside the reference image (Fig. 2). It would have been preferable to have judged the images against the actual icon, viewed under controlled illumination, but at the time of the experiment it was elsewhere being digitised.
RGB images of the Russian icon reconstructed from datasets of Norway (A) and France (B). The visual assessment always used the same image on the left as reference in a pair comparison technique
The diversity of imaging technologies from the different institutions created considerably different visual results, with varying degrees of pleasantness (Figs. 2, 3, 4). The acceptability of the icon images was judged individually by ten observers, viewing them on an uncalibrated LCD display screen of diagonal 27 inches from a distance of approximately 1 m. The display white point was set to D65 with a luminance of 180 cd/m2, and judgements were made under typical office illumination (combination of overhead fluorescent lights and daylight from west-facing windows). Each of the test images was compared side-by-side with the same reference image (Fig. 2A). It was not always a simple task to assess the differences between the images because in some cases they appeared dramatically different. Uneven illumination in some systems had caused shading effects and over-exposed areas due to specularity. Some images had been cropped to different sizes and/or geometrically distorted, and varying resolution affected the definition of fine details.
RGB images of the Russian icon reconstructed from the datasets of: Cyprus (C), Spain (D), Poland (E) and Italy (F)
RGB images of the Russian icon from the datasets of: Portugal (G), UK (H), Spain (J) and Switzerland (K)
The following results emerged from the observer judgements:
A, C, H and J were considered the worst images in terms of uniformity of illumination, while B, F and G were considered the best;
B and K were considered the sharpest images, while J did not enable discrimination of fine details;
D and J were geometrically distorted (both skew and cropping);
F and G were considered the worst images in terms of colour reproduction;
B and K were the most preferred images, while J was the least acceptable.
The representation of image detail was assessed by enlarging a small square region of the image around the central jewel in the Virgin's necklace, corresponding to an area of 10 mm × 10 mm on the icon surface. Because of the differing spatial resolution of the systems (Table 1), these details ranged in size from 22 × 22 to 120 × 120 pixels, but to enable fair comparison all have been enlarged to the same size (Fig. 5). The variety of rendering of detail, as well as tone and colour, across the different systems is remarkable. It should be noted that the apparent sharpness of image detail depends not only on sampling resolution (pixels/mm) but also on the point spread function (focus) of the optics and registration of the images in the three colour channels.
Detail of the central jewel in the Virgin's necklace from the ten reconstructed images of (left to right): Norway (a), France (b), Cyprus (c), Spain (d), Poland (e), Italy (f), Portugal (g), UK (h), Spain (j) and Switzerland (k)
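The cropping and enlargement used for Fig. 5 can be reproduced with a few lines of array manipulation. The sketch below is illustrative only (the crop coordinates and the output size are placeholders, not values from the study); it cuts a 10 mm × 10 mm patch from an image of known spatial resolution and brings it to a common pixel size by nearest-neighbour sampling, so that no new tonal values are invented during the comparison.

```python
import numpy as np

def crop_patch(image, centre_mm, size_mm, pixels_per_mm):
    """Cut a square patch of given physical size (mm) around a physical position (mm)."""
    cy, cx = (int(round(c * pixels_per_mm)) for c in centre_mm)
    half = int(round(size_mm * pixels_per_mm / 2))
    return image[cy - half:cy + half, cx - half:cx + half]

def upscale_nearest(patch, out_size):
    """Enlarge a patch to out_size x out_size pixels by nearest-neighbour sampling."""
    rows = np.linspace(0, patch.shape[0] - 1, out_size).round().astype(int)
    cols = np.linspace(0, patch.shape[1] - 1, out_size).round().astype(int)
    return patch[np.ix_(rows, cols)]

# Hypothetical usage for one system (centre coordinates in mm are placeholders):
# patch = crop_patch(rgb_image, centre_mm=(150.0, 111.0), size_mm=10.0, pixels_per_mm=4.3)
# detail = upscale_nearest(patch, out_size=240)
```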
Spectral reconstruction
A quantitative analysis was performed for the three hyperspectral datasets from France, Italy, and Poland, which were compared with the multispectral data from UK. Six points on the icon were selected (Fig. 6) and the data vector was extracted from the corresponding pixels in each channel of each image using ENVI 4.7 software.
Reconstructed RGB image of the Russian icon (from dataset of France), showing the different locations from which spectral reflectance was sampled: 1 skin; 2 red robe; 3 blue vest; 4 green shirt; 5 brown shadow; 6 yellow robe; 7 gold
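For readers working outside ENVI, a minimal sketch of the extraction step, assuming the data-cube is available as a NumPy array (for example loaded with the Spectral Python package) and that the pixel coordinates of the sample points are known; the coordinates and window size below are placeholders, not values from the study.

```python
import numpy as np

def mean_spectrum(cube, row, col, half_window=2):
    """Average the spectra in a small square neighbourhood around (row, col).

    cube : (rows, cols, bands) array of reflectance or normalised counts
    Returns a 1-D array of length `bands`.
    """
    region = cube[row - half_window:row + half_window + 1,
                  col - half_window:col + half_window + 1, :]
    return region.reshape(-1, cube.shape[2]).mean(axis=0)

# Placeholder pixel coordinates for three of the sample points in Fig. 6
sample_points = {"skin": (412, 388), "red robe": (640, 305), "gold": (120, 150)}

# Hypothetical usage:
# spectra = {name: mean_spectrum(cube, r, c) for name, (r, c) in sample_points.items()}
```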
The spectral reflectance distribution of the object at each point should ideally be independent of the characteristics of the imaging system and illumination and observer. To provide the 'ground truth' for reference, spot measurements were made at the six positions shown in Fig. 6 using a hand-held X-rite i1Pro spectrophotometer. This instrument is designed for quality control of glossy prints in the graphic arts, so has 45/0 geometry, i.e. it illuminates the surface at an angle of 45° and senses the light reflected perpendicular to the surface, in order to avoid specular reflections. The light is averaged over the area of the surface covered by a circular aperture of 4.5 mm diameter. The reflectance factor is reported at 10 nm intervals over the range 380–730 nm, with results shown in Fig. 7. The spectra for all colours are quite broad and smoothly changing. The gold was also measured, and was surprisingly low in reflectance factor because its lustre was excluded by the measurement geometry. It was not possible to measure the white or turquoise paints because nowhere on the surface was there an area large enough to span the instrument's aperture.
Reflectance spectra measured by spectrophotometer at seven points on icon surface
The reflectance spectrum at each location of multispectral dataset H was estimated by averaging a square region of 30 × 30 pixels in each of the 16 spectral channels, processing the image data in Matlab. As the image resolution was approximately 10 pixels/mm, this corresponds to a region of 3 × 3 mm on the icon surface. The raw images from the camera (in NEF format) were converted to 3 × 16-bit RGB TIFF files by the utility DCRAW, to ensure tonal linearity of the data. Corresponding values were obtained for both the icon and a matte grey card used as the reference, and the reflectance factor \(R\) at each wavelength was calculated as:
$$R(\lambda_{k}) = G(\lambda_{k})\,\frac{i(\lambda_{k}) - b}{g(\lambda_{k}) - b}$$
where filter \(k\) (range 1–16) corresponds to peak wavelength \(\lambda_{k} = 400, 420, 440, \ldots, 700\) nm; \(G\) is the reflectance factor of the grey card, measured by the spectrophotometer; \(i\) is the mean pixel value of the icon image in the sample area; \(g\) is the mean pixel value of the grey card image in the sample area; \(b\) is the mean pixel value of the black image in the sample area.
All images were taken in a dark room to minimise the ambient light. The grey card was placed in the plane of the top surface of the icon, covering the same area, and served to correct both non-uniformity of illumination across the surface of the object and vignetting of the lens. The black image was taken with the grey card in place but the copystand lights off, and was an average of ten successive image frames, to reduce the effects of noise. The blue camera channel was used for \(k\) in the range 1–6 (filter wavelengths 400–500 nm), the green channel for \(k\) in the range 7–10 (520–580 nm), and the red channel for \(k\) in the range 11–16 (600–700 nm). The end values at 400 and 700 nm were unreliable because of the very low signal level through the filters. The resulting reflectance factors are plotted in Fig. 8 as a function of wavelength against the reference measurements from the spectrophotometer for three sample points.
Reconstructed vs measured reflectance spectra for three sample points: (left) green shirt; (centre) yellow robe; (right) red robe. In the legend 'eye one' means the X-rite i1Pro spectrophotometer, used to measure surface reflectance factor
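A per-pixel sketch of the flat-field correction defined by the equation above, written as a plain array operation rather than the authors' actual Matlab implementation; the grey-card reflectance value in the usage line is a placeholder.

```python
import numpy as np

def reflectance_factor(icon_band, grey_band, black_band, grey_reflectance):
    """Convert a linear camera image of one filter band into reflectance factor.

    icon_band, grey_band, black_band : 2-D arrays of linear pixel values
        (icon, grey reference card, and dark frame, all for the same filter)
    grey_reflectance : reflectance factor of the grey card at this wavelength,
        as measured by a spectrophotometer
    """
    icon = icon_band.astype(float) - black_band.astype(float)
    grey = grey_band.astype(float) - black_band.astype(float)
    # Avoid division by zero in under-exposed or masked regions
    grey = np.where(grey <= 0, np.nan, grey)
    return grey_reflectance * icon / grey

# Hypothetical usage for the 540 nm band (0.21 is a placeholder grey-card value):
# R_540 = reflectance_factor(icon_540, grey_540, dark_540, grey_reflectance=0.21)
```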
The same calculation was applied to every pixel in the image and the resulting reflectance factors interpolated to 5 nm intervals over the range 400–700 nm. Tristimulus values were calculated by multiplying the reflectance factor at each pixel by the spectral power distribution of the D65 illuminant and the responsivity functions of the CIE Standard Observer. The resulting image was converted to the sRGB display colour space [27], giving the result in Fig. 9.
Colorimetric image reconstructed from dataset H under D65 illuminant
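The rendering chain just described (reflectance factor, weighted by the D65 spectral power distribution and the CIE Standard Observer functions, then mapped to sRGB) can be written compactly in array form. The sketch below assumes the colour-matching functions and illuminant have already been resampled to the cube's wavelengths; it uses the standard IEC 61966-2-1 sRGB matrix and transfer function and is not taken from the authors' Matlab code.

```python
import numpy as np

# sRGB matrix for XYZ with a D65 white point (IEC 61966-2-1)
M_XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                          [-0.9689,  1.8758,  0.0415],
                          [ 0.0557, -0.2040,  1.0570]])

def reflectance_cube_to_srgb(cube, cmf, illuminant):
    """Render a reflectance cube (rows, cols, bands) to an 8-bit sRGB image.

    cmf        : (bands, 3) CIE observer x-bar, y-bar, z-bar at the cube's wavelengths
    illuminant : (bands,) spectral power distribution of D65 at the same wavelengths
    Both tables are assumed to be supplied by the caller (e.g. interpolated from
    published CIE data); they are not computed here.
    """
    weights = cmf * illuminant[:, None]                       # (bands, 3)
    k = 1.0 / np.sum(illuminant * cmf[:, 1])                  # perfect white gives Y = 1
    xyz = k * np.tensordot(cube, weights, axes=([2], [0]))    # (rows, cols, 3)
    rgb_lin = np.tensordot(xyz, M_XYZ_TO_SRGB.T, axes=([2], [0]))
    rgb_lin = np.clip(rgb_lin, 0.0, 1.0)
    # sRGB transfer function
    srgb = np.where(rgb_lin <= 0.0031308,
                    12.92 * rgb_lin,
                    1.055 * np.power(rgb_lin, 1 / 2.4) - 0.055)
    return (srgb * 255).round().astype(np.uint8)
```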
Comparison with the conventional RGB photograph from the same Nikon D200 camera (Fig. 1) shows that the computed colorimetric image is lower in contrast and colour saturation, but is a good match in hue. Comparison with the 'rough and ready' RGB image synthesised from three bands of the multispectral image set (Fig. 4H) shows a great improvement in overall colour balance, but a loss of sharpness because of uncorrected chromatic aberration.
The three hyperspectral systems from France (B), Poland (E) and Italy (F) all used a Spectralon tile as the white reference. Comparison of their reflectance spectra reconstructed at six sample points of the images (Fig. 10), reveals that they are in general agreement, with similar shapes of the curves and inflections at the same wavelengths. The differences in amplitude are indicative of how each individual imaging geometry reacted differently to the non-Lambertian surface of the icon.
Reflectance spectra from six different locations of the Russian icon, reconstructed from the datasets of: France (B), Poland (E), Italy (F) and UK (H), with the reference data of a spectrophotometer (S)
Analysis of the spectra in Fig. 10 enabled a quantitative assessment, by comparing the reconstructed spectra of the four systems with the reference spectrophotometer data. Using the Matlab function interp1, all spectra were interpolated to 1 nm intervals in the range of the visible spectrum from 400 to 700 nm, and differences were calculated between corresponding points in terms of both root-mean-square error (RMSE) and colour difference (\(\Delta E_{ab}^{*}\)). The latter is the Euclidean distance between two stimuli expressed in the CIE L*a*b* colour space, and is scaled so that one unit of \(\Delta E_{ab}^{*}\) corresponds approximately to one just-noticeable difference (JND) [28]. For colour photography and print reproduction, values of \(\Delta E_{ab}^{*}\) less than 5 would generally be considered acceptable [29]. The results in Table 2 show the target colour in CIELAB coordinates (referred to the D65 illuminant) of each of the seven sample points in Fig. 6 and the corresponding differences for the four systems. In most cases the RMSE errors were less than 0.02 and colour errors were less than 5 \(\Delta E_{ab}^{*}\) although larger values occurred for the red robe and gold sample points.
Table 2 Colour coordinates of the sample points and analysis of errors between reconstruction and spectrophotometer reference data
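A sketch of how the two error metrics can be computed from a pair of spectra; the 1 nm interpolation mirrors the description above, the Lab conversion follows the standard CIE 1976 definitions, and the colour-matching-function and illuminant tables are again assumed to be supplied by the caller rather than computed here.

```python
import numpy as np

def rmse(spec_a, spec_b):
    """Root-mean-square difference between two reflectance spectra on a common grid."""
    return float(np.sqrt(np.mean((np.asarray(spec_a) - np.asarray(spec_b)) ** 2)))

def spectrum_to_lab(wl, refl, wl_ref, cmf, illum):
    """Reflectance spectrum -> CIE L*a*b* under the given illuminant.

    wl, refl           : measured wavelengths (nm) and reflectance factors
    wl_ref, cmf, illum : 1 nm reference grid (e.g. 400-700 nm), CIE observer (N, 3)
                         and illuminant SPD (N,) sampled on that grid
    """
    r = np.interp(wl_ref, wl, refl)                 # resample to the common 1 nm grid
    k = 100.0 / np.sum(illum * cmf[:, 1])
    xyz = k * (r[:, None] * cmf * illum[:, None]).sum(axis=0)
    xyz_n = k * (cmf * illum[:, None]).sum(axis=0)  # white point of the illuminant
    def f(t):
        return np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    fx, fy, fz = f(xyz / xyz_n)
    return np.array([116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)])

def delta_e_ab(lab1, lab2):
    """CIE 1976 colour difference (Euclidean distance in L*a*b*)."""
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))
```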
The results agree with analysis of two hyperspectral systems for another of the RRT test objects, the X-rite colour checker chart [30]. There it was found that in CIE L*a*b* coordinates the patches presenting larger colour differences were magenta, blue and red (\(\Delta E_{ab}^{*}\) values of 3.7, 3.8 and 5.1 respectively), which lie at the ends of the visible range where the reflectance spectra also exhibited greater variations. On the other hand, the green and yellow patches were more similar between the two systems (\(\Delta E_{ab}^{*}\) values of 3.3 and 2.5).
One notable difference between the hyperspectral datasets is the amount of noise present in the signal, especially at the ends of the range. As an example, Fig. 11 shows an enlarged section of the reflectance spectra from the three systems plotted in Fig. 10 for the skin sample between 400 and 450 nm. There is evidently much more 'jitter' in system E than in the other two, especially at the shortest wavelengths, because there is some inherently greater perturbation from one wavelength to the next. The sensitivity of silicon sensors at around 400 nm is very low, resulting in visible image noise. The difference in noise levels is a function of the camera electronics, physical pixel size, whether an equalisation filter has been used and whether noise reduction processing (such as averaging) has been carried out.
Details of three hyperspectral reflectance spectra of skin for short wavelengths 400–450 nm
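The article does not define a numerical measure of this 'jitter'; purely as an illustration, one simple proxy is the RMS of the band-to-band differences of a reflectance curve over the noisy interval, sketched below, where smoother curves give smaller values.

```python
import numpy as np

def spectral_jitter(refl, wl, wl_min=400, wl_max=450):
    """RMS of band-to-band reflectance differences over a wavelength interval.

    A rough, assumption-laden proxy for the short-wavelength noise visible in Fig. 11.
    """
    refl = np.asarray(refl, dtype=float)
    wl = np.asarray(wl, dtype=float)
    mask = (wl >= wl_min) & (wl <= wl_max)
    d = np.diff(refl[mask])
    return float(np.sqrt(np.mean(d ** 2)))
```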
Significant differences between the systems were found for the gold area sampled at the location shown in Fig. 6. Although the reflectance spectra all have the same shape, with the characteristic upward slope to a peak at around 610 nm, they vary in amplitude by factors of as much as 3×, both greater and less than the reference measurement of the spectrophotometer (Fig. 12), which leads to the large errors in Table 2. This arises from both the granular nature of the surface and the breadth of the specular peak that gives gold its characteristic lustre [31], causing the image intensity detected by each system to be critically dependent on its illumination geometry.
Reflectance spectra of gold, relative to spectrophotometer measurement (S)
An interactive visualisation of high-resolution hyperspectral data from the Russian icon has been developed and can be viewed online [32]. This facility uses the IIPImage software suite [33] to provide interactive online access to a subset of the hyperspectral data. It features an image stack including a colorimetric CIE L*a*b* rendering of dataset B under a D65 illuminant and single frames at various specific wavelengths, which the user can select and smoothly blend between.
The purpose of this study was to compare the datasets acquired by different imaging systems, in order to assess the impact of the setup and workflow of each device on the results obtained. In particular, we investigated the capabilities of ten multispectral and hyperspectral devices to digitise a polychrome nineteenth-century Russian icon with a glossy surface, high specularity and very fine spatial details. We found that the systems and workflows employed by the participating institutions varied widely, including cameras, light sources, imaging geometry and file formats. The cost of the different devices used in the RRT ranged from a few thousand euros, such as for the filter-based photographic systems used by Institutions H and K (Table 1), up to 50 thousand euros for some of the specialised hyperspectral imaging systems. All these factors were largely responsible for the wide variations in image quality that were observed. One of the difficulties encountered was the insufficient or, in some cases the complete lack of, written documentation of the procedures employed to produce the image files. Metadata is directly associated with most of the steps in an imaging system workflow and makes possible the identification, management, access, use, and preservation of the digital images. However, it can be time-consuming to produce and is often neglected.
The reflectance factor recorded at each point of the object surface may vary for each instrument or sensor according to its calibration and imaging geometry, i.e. the relative angles of the illumination and optical axis with respect to the tangent plane at the surface. The imaging geometry can have a dramatic effect on glossy surfaces, and for best results it is essential to have some flexibility in the positioning of the light source(s) and object relative to the camera, thereby enabling the operator to exercise judgement in the setup. While this is easily achieved by traditional means in a photographic studio (Fig. 13), it requires careful opto-mechanical design for integrated scanning devices.
Multispectral imaging of the Russian icon on a photographic copystand. The Nikon camera is fixed vertically above the object, with four tungsten-halogen lamps arranged symmetrically on either side, at a low angle of elevation to minimise specular reflection
The hyperspectral imaging instrument at IFAC-CNR is a good example of state-of-the-art spectral imaging with illumination geometry equivalent to that of a spectrophotometer [34]. The system is based on a prism line-spectrograph, connected to a high sensitivity CCD camera. The line-segment is illuminated by two fibre-optic line-lights equipped with focusing lenses fixed to the scan-head, which project their beams symmetrically at angles of 45° with respect to the normal direction of the imaged surface (0°/2 × 45° observation/illumination geometry). The distance of the lens from the object surface is approximately 20 cm. The assembly of lights and spectral head move together during the scanning in order to illuminate only the small rectangular area under acquisition (Fig. 14). The combination of high scan rate and small area of illumination ensures that the overall light exposure levels are compatible with the recommended limits in museums.
Hyperspectral imaging of the Russian icon on a system designed by IFAC-CNR in Italy. The assembly of camera and dual lighting moves vertically in front of the stationary object
During the RRT the Russian icon circulated among 21 institutions around Europe with the other COSCH test objects. The working conditions and technical parameters of the acquisition process were not specified. Each institution was asked to image the icon with the setup and methods routinely applied in its own laboratory, including everything from the physical support of the object to the setup of the camera and illumination to the processing of the image. Each group was also asked to provide, together with the image data, a description of the system, setup and procedure used for the acquisition. What was provided in many cases, however, was incomplete metadata, with limited or no information regarding the illumination, calibration, normalisation, and processing details, which made it difficult to interpret the image data. In general, each provider of a spectral dataset needed to explain more clearly what corrections, if any, had already been applied to the image data.
This study showed that the data obtained from spectral imaging systems is critically dependent on methodological procedures and that it can vary considerably across laboratories implementing different practices. Some harmonisation is therefore desirable to allow proper comparison, adequate use and efficient communication. In every case, image data should be complemented with detailed metadata to enable correct application by users. Both 'raw' and 'corrected' image data should be provided for each object, the latter compensated for degradations related to the image acquisition process, e.g. spatial and spectral calibration, defocus, noise, among others. Specifications regarding the light source are possibly the most important. Its spectral power distribution and spatial variation should be measured and recorded and taken into account in data processing.
The correction of spatial non-uniformity of illumination should be implemented and/or specified using images of a uniform card (normally but not necessarily white) for each spectral band. The noise level should also be estimated by imaging with the system using the same physical setup but optically occluded. Some systems have a significant degree of scattered light and its magnitude should be estimated. Chromatic aberrations should be avoided as much as possible as they introduce wavelength-dependent image distortions that are difficult to correct by post-processing, i.e. variable misregistration and defocusing in different spectral bands. Using apochromatic optics will minimize the associated problems. The effects of the non-uniform spectral sensitivity of the sensor and its impact on signal-to-noise ratio should be taken into account. Data from the camera manufacturer or in-house calibration can be used to compensate for this effect. If the system employs different exposure times for different wavelengths this should be specified. We suggest that the image processing procedures should include 'sanity checks' at key points along the processing chain, for example to check on the tonal linearity of the data by using a neutral scale with multiple shades of grey from black to white, and to compare the reconstructed reflectance spectra against spectrophotometer reference measurements, including colorimetric measures.
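As a minimal illustration of the per-band corrections recommended above, the following Python/NumPy sketch (not part of any workflow tested in the RRT; all array and function names are illustrative) assumes three image cubes of shape (bands, height, width): raw, an exposure of the object; white, an exposure of a uniform card; and dark, an exposure taken with the optics occluded.

    import numpy as np

    def correct_bands(raw, white, dark, eps=1e-6):
        """Dark-subtract and flat-field each spectral band independently,
        returning an estimate of the per-pixel reflectance factor."""
        signal = raw.astype(float) - dark                  # remove the dark/offset level
        flat = np.clip(white.astype(float) - dark, eps, None)
        return np.clip(signal / flat, 0.0, None)           # normalise by the uniform card

    def grey_scale_is_monotonic(reflectance, patches):
        """Basic 'sanity check': the mean reflectance of neutral patches, ordered
        from black to white, should increase monotonically if the tonal response
        is well behaved. `patches` is a list of (row_slice, col_slice) regions."""
        means = [reflectance[:, rows, cols].mean() for rows, cols in patches]
        return all(a <= b for a, b in zip(means, means[1:]))

A comparison of the reconstructed spectra at selected points against spectrophotometer reference measurements, including colorimetric measures, can be added to the same checking stage in an analogous way.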
Digitising and reproducing images of the Russian icon as a truthful representation of the real object proved to be difficult, and significantly different results were obtained from the ten systems and workflows tested. The acceptability of the visual quality of the reproductions was judged by a panel of observers. Obviously, this was a qualitative and subjective assessment, and the difficulty of identifying which image is more realistic limits the value of the results for purposes of examination and documentation. This is particularly true when dealing with coloured artworks or pigmented materials that need to be reproduced with high accuracy.
It must be noted that perceived colour is not simply a property of coloured objects, which could be regarded more as modifiers of the incident light than as themselves the sources of colour. Colour is dependent not only on the reflectance of the object, but also on the properties of the light striking its surface, and the visual sensitivity and adaptation of the observer, all of which are functions of wavelength. Consequently, the retention of descriptive data about the conditions used during acquisition and subsequent data processing is of great importance. In our case, however, the recording of metadata was generally found to have been neglected, which made the assessment and comparison of some datasets very difficult or even impossible.
The results of the RRT showed a considerable variability in the image data, caused by the diversity of devices (multi- and hyper-spectral systems), their measurement geometries, the methods of data processing, the personnel operating the imaging systems, and the guiding purpose for application of these devices by the different groups (colour reproduction, documentation, identification of materials, etc.). All these factors resulted in varying levels of quality in the final spectral datasets.
The results obtained emphasise the necessity of defining standardised methodologies and best practices for different imaging systems. Specifically, we draw attention to the importance of calibration (spectral, radiometric and spatial) procedures and knowledge of the intrinsic accuracy, precision and limits of the technologies. Furthermore, the RRT made clear the need for regular calibration, validation and testing of every system.
Given the difficulties and ambiguities encountered in this study, it is clear that considerable further refinement of systems and standardisation will be required before multispectral and hyperspectral imaging can be widely adopted within the heritage science community. In the development of standards, the criteria and technical requirements for spectral imaging systems might differ, depending on whether digital documentation or scientific analysis of materials (or both) are considered. It would be advantageous also to consider developments relating to colour management in the graphic arts and media industries, in particular the new iccMAX specification that is currently being adopted as an international standard [35]. This defines a spectral profile together with a spectral profile connection space (PCS), based upon spectral representation of colour rather than colorimetry, as the basis of encoding surface reflectance and hence of colour communication.
In the COSCH RRT, each institution was asked to follow its own practices and to produce a good quality result. Yet the datasets exhibited a wide range of image quality attributes. Much has been made in the discussion above about the need for a standardised procedure, but the fact is that today there exists no such standard. In order to achieve a standard whereby different institutions could achieve consistent results with different optomechanical systems, the imaging geometry, in particular, would have to be closely specified and controlled, analogous to the standards for instrumental spot colour measurement [36].
A new emerging standard, ISO 19263-1, intended for imaging of two-dimensional originals by cultural heritage institutions, specifies a method for analysing imaging systems where it is important to control the degree of accuracy and to ensure that imaging quality is maintained over time [37]. Three aspects of imaging system performance are covered: (a) evaluation—for system benchmarking; (b) optimization—for tailoring the system to a particular job (use case); and (c) monitoring—for controlling the quality of the system to remain within specified limits. ISO 19263-1 can be used to establish and maintain image quality in digitisation workflows, and to develop a digitisation strategy including assessment of collections and system selection. Such standards will be essential for generating meaningful digital datasets that can be interpreted and repurposed for many diverse applications.
Saunders D. High-quality Imaging at the National Gallery: origins, implementation and applications. Comput Humanit. 1998;31:153–67. doi:10.1023/A:1000696330444.
Bianchi C. Making online monuments more accessible through interface design. In: MacDonald LW, editor. Digital heritage—applying digital imaging to cultural heritage. Oxford: Butterworth-Heinemann; 2006. p. 445–66.
Boochs F, Bentkowska-Kafel A, Degrigny C, Hauta-Kasari M, Rizvic S, Trémeau A. Towards optimal spectral and spatial documentation of cultural heritage: COSCH—an interdisciplinary action in the COST framework. Proc XXIV Int CIPA Symp. 2013. doi:10.5194/isprsarchives-XL-5-W2-109-2013.
Gershkovich E. Mstera's icon painters. In: Agitlak Lacquer Propaganda. Moscow: Gamma-Press; 2009. http://www.agitlak.com/trading_mstiora.html.
Fischer C, Kakoulli I. Multispectral and hyperspectral imaging technologies in conservation: current research and potential application. Rev Conserv. 2006;7:3–16.
Cucci C, Casini A, Picollo M, Stefani L. Extending hyper-spectral imaging from Vis to NIR spectral regions: a novel scanner for the in-depth analysis of polychrome surfaces. Proc SPIE Conf Opt Arts Archit Archaeol (O3A) IV. 2013;8790:1–9. doi:10.1117/12.2020286.
Dyer J, Verri G, Cupitt J. Multispectral imaging in reflectance and photo-induced luminescence modes: a user manual. The British Museum, Charisma Project; 2013. p. 81–5. http://www.britishmuseum.org/pdf/charisma-multispectral-imaging-manual-2013.pdf.
Liang H. Advances in multispectral and hyperspectral imaging for archaeology and art conservation. Appl Phys A. 2012;106:309–23. doi:10.1007/s00339-011-6689-1.
Cucci C, Casini A, Picollo M, Poggesi M, Stefani L. Open issues in hyperspectral imaging for diagnostics on paintings: when high spectral and spatial resolution turns into data redundancy. Proc SPIE Conf Opt Arts Archit Archaeol (O3A) III. 2011;8084:1–10. doi:10.1117/12.889460.
Goetz AF. Three decades of hyperspectral remote sensing of the Earth: a personal view. Remote Sens Environ. 2009;30(113):S5–16.
Kudenov MW, Roy SG, Pantalone B, Maione B. Ultraspectral imaging and the snapshot advantage. In: SPIE Defense + Security. Kent: International Society for Optics and Photonics; 2015. p. 94671X.
Hagen N, Kester RT, Gao L, Tkaczyk TS. Snapshot advantage: a review of the light collection improvement for parallel high-dimensional measurement systems. Opt Eng. 2012;51(11):111702-1. doi:10.1117/1.OE.51.11.111702.
Berns RS, Frey FS. Direct digital capture of cultural heritage—benchmarking American Museum Practices and defining future needs, final report 2005. Rochester: Rochester Institute of Technology; 2005.
Jung A, Götze C, Glässer C. Overview of experimental setups in spectroscopic laboratory measurements—the SpecTour Project. Photogrammetrie-Fernerkundung-Geoinformation. 2012;2012(4):433–42.
Saunders D, Cupitt J. Image Processing at the National Gallery: the VASARI Project. Natl Gallery Tech Bull. 1993;14:72–85.
Martinez K, Cupitt J, Saunders D, Pillay R. Ten years of art imaging research. Proc IEEE. 2002;90(1):28–41. doi:10.1109/5.982403.
Ribés A, Schmitt F, Pillay R, Lahanier C. Calibration and spectral reconstruction for CRISATEL: an art painting multispectral acquisition system. J Imaging Sci Technol. 2005;49(6):563–73.
Kubik M. Hyperspectral Imaging: a new technique for the non-invasive study of art-works. In: Creagh D, Bradley D, editors. Physical techniques in the study of art, archaeology and cultural heritage. Oxford: Elsevier; 2007. p. 199–259.
Casini A, Bacci M, Cucci C, Lotti F, Porcinai S, Picollo M, Radicati B, Poggesi M, Stefani L. Fiber optic reflectance spectroscopy and hyper-spectral image spectroscopy: two integrated techniques for the study of the Madonna dei Fusi. Proc SPIE Conf Opt Methods Arts Archaeol. 2005;5857:1–8. doi:10.1117/12.611500.
Cucci C, Delaney JK, Picollo M. Reflectance hyperspectral imaging for investigation of Works of Art: old master paintings and illuminated manuscripts. Acc Chem Res. 2016;49:2070–9. doi:10.1021/acs.accounts.6b00048.
Delaney JK, Zeibel JG, Thoury M, Littleton R, Palmer M, Morales KM, Rie ER, Hoenigswald A. Visible and infrared imaging spectroscopy of Picasso's Harlequin musician: mapping and identification of artist materials in situ. Appl Spectrosc. 2010;64(6):584–94. doi:10.1366/000370210791414443.
MacDonald L, Giacometti A, Campagnolo A, Robson S, Weyrich T, Terras M, Gibson A. Multispectral imaging of degraded parchment. In: Tominaga S, Schettini R, Trémeau A, editors. Computational color imaging. Berlin: Springer; 2013. p. 143–57. doi:10.1007/978-3-642-36700-7_12.
Mounier A, Daniel F. Hyperspectral imaging for the study of two thirteenth-century Italian miniatures from the Marcadé collection, Treasury of the Saint-Andre Cathedral in Bordeaux, France. Stud Conserv. 2015;60(Suppl 1):S200–9.
Daniel F, Mounier A, Pérez-Arantegui J, Pardos C, Prieto-Taboada N, de Vallejuelo SFO, Castro K. Hyperspectral imaging applied to the analysis of Goya paintings in the Museum of Zaragoza (Spain). Microchem J. 2016;126:113–20. doi:10.1016/j.microc.2015.11.044.
Shrestha R, Pillay R, George S, Hardeberg JY. Quality evaluation in spectral imaging—quality factors and metrics. J Int Colour Assoc. 2014;12:22–35.
MacDonald LW, Jacobson R. Assessing image quality. In: MacDonald LW, editor. Digital heritage—applying digital imaging to cultural heritage. Oxford: Elsevier; 2006. p. 351–73.
Anderson M, Motta R, Chandrasekar S, Stokes M. Proposal for a standard default color space for the internet—sRGB. Proceedings of the Color and Imaging Conference. Springfield: Society for Imaging Science and Technology; 1996. p. 238–45.
Hunt RWG, Pointer MR. Measuring colour. 4th ed. Chichester: Wiley; 2011. p. 59–60.
Song T, Luo R. Testing color-difference formulae on complex images using a CRT monitor. Proceedings of the Color and Imaging Conference (CIC). Springfield: Society for Imaging Science and Technology; 2000. p. 44–8.
Vitorino T, Casini A, Cucci C, Gebejesje A, Hiltunen J, Hauta-Kasari M, Picollo M, Stefani L. Accuracy in colour reproduction: using a ColorChecker chart to assess the usefulness and comparability of data acquired with two hyperspectral systems. In: Trémeau A, Schettini R, Tominaga S, editors. Proceedings of the 5th International Workshop on Computational Color Imaging (CCIW). Heidelberg: Springer; 2015. LNCS 9016, p. 225–35. doi:10.1007/978-3-319-15979-9_21.
MacDonald LW. The Colour of Gold. Proceedings of the Conference of the International Colour Association (AIC). Tokyo: Color Science Association of Japan; 2015. p. 320–5.
http://merovingio.c2rmf.cnrs.fr/iipimage/iipmooviewer/COSCH_icon.html. Accessed 28 Aug 2017.
Pitzalis D, Pillay R, Lahanier C. A new concept in high resolution internet image browsing. Proceedings of the 10th International Conference on Electronic Publishing, Bansko, Bulgaria; 2006. http://iipimage.sf.net.
Vitorino T, Casini A, Cucci C, Melo MJ, Picollo M, Stefani L. Non-invasive identification of traditional red lake pigments in fourteenth to sixteenth centuries paintings through the use of hyperspectral imaging technique. Appl Phys A. 2015;121(3):891–901. doi:10.1007/s00339-015-9360-4.
iccMAX. Image technology colour management—extensions to architecture, profile format, and data structure. Specification ICC.2:2016-7. International Color Consortium; 2016. http://www.color.org/iccmax/index.xalter.
CIE 015:2004 Colorimetry, 3rd edn. Vienna: CIE.
ISO/TR 19263-1:2017 Photography—archiving systems—part 1: best practices for digital image capture of cultural heritage material. Geneva: ISO. https://www.iso.org/standard/64220.html.
The contributions of the authors were as follows: LM supervised the study and the writing of the article and gathered and analysed the multispectral data in his lab in London; TV performed the experiment in London and wrote the technical report upon which this article is based; MP led the whole RRT exercise and took a leading role in the assessment of the RRT data; RP collected and analysed hyperspectral image data in his lab in France; MO and JS collected and analysed hyperspectral image data in their lab in Poland; SN and JL contributed to the overall design of the RRT, and collected and analysed hyperspectral image data in their lab in Portugal. All authors reviewed the manuscript and provided corrections and additional text. All authors read and approved the final manuscript.
Because the icon was purchased through eBay and imported into the UK, an opinion was obtained from the foremost British authority, Sir Richard Temple, to verify that the icon would not be classified as a work of art, and that the research activity described in this article was not associated with illegal trafficking of cultural heritage. We acknowledge assistance with the COSCH mission by Mona Hess (UCL) and many contributions to the Round Robin Test by: Sony George (The Norwegian Colour and Visual Computing Laboratory, NTNU, Gjøvik, Norway); Alain Trémeau (Dept of Physics, Université Jean Monnet, Saint Etienne, France); Raimondo Schettini and Simone Bianco (Dept of Computer Science, University of Milano-Bicocca, Milan, Italy); Giorgio Trumpy (University of Basel, Switzerland); Julio del Hoyo-Meléndez (The National Museum, Krakow, Poland); Costanza Cucci, Cristina Montagner, Lorenzo Stefani and Andrea Casini (IFAC-CNR, Sesto Fiorentino, Italy); Eva Matoušková (Dept of Geomatics, Czech Technical University, Prague, Czech Republic); Vera Moitinho de Almeida (Science and Technology in Archaeology Research Center, Cyprus); and Meritxell Vilaseca (Polytechnic University of Catalonia, Barcelona, Spain).
The authors declare that they have no competing interests in the manuscript. A poster with preliminary analysis of the results was presented at the Second International SEAHA Conference in Oxford in June 2016.
This article arose out of a Short-Term Scientific Mission (STSM) conducted by Tatiana Vitorino when visiting University College London during a 2-week period in late October 2015. The research was carried out under the auspices of the European COST Action TD1201 Colour and Space in Cultural Heritage (COSCH). The project website is at http://www.cosch.info. Under the COST rules, TV received funding for travel and accommodation expenses, and all co-authors were able to claim travel expenses to attend the subsequent COSCH project meeting. No other funding was received from COSCH for labour or equipment and all work was done on a voluntary pro bono basis.
Faculty of Engineering Sciences, University College London, London, UK
Lindsay W. MacDonald
Faculty of Science and Technology, Universidade NOVA de Lisboa, Lisbon, Portugal
Tatiana Vitorino
IFAC-CNR, Via Madonna del Piano 10, Sesto Fiorentino, Italy
& Marcello Picollo
C2RMF, Musée du Louvre, Paris, France
Ruven Pillay
National Museum in Kraków, Al. 3 Maja 1, 30-062, Kraków, Poland
Michał Obarzanowski
& Joanna Sobczyk
Department of Physics, University of Minho, Campus de Gualtar, Braga, Portugal
Sérgio Nascimento
& João Linhares
Correspondence to Lindsay W. MacDonald.
MacDonald, L.W., Vitorino, T., Picollo, M. et al. Assessment of multispectral and hyperspectral imaging systems for digitisation of a Russian icon. Herit Sci 5, 41 (2017) doi:10.1186/s40494-017-0154-1
Multispectral
Hyperspectral
Reflectance
Specularity
Are Strings Scalar?
I have always considered strings scalar and the main reason for that is that in programming we treat string values a lot like primitive values. But this and this Wikipedia definitions, as far as I understand them, speak against the scalar nature of strings.
In programming scalar types are often seen as the opposite of object types and strings are usually counted as scalar, because they, as well as primitive types, have natural ordering, and thus can be compared and tested for equality by means of the language itself, as opposed to object types, which would usually require custom logic for those operations.
By scalar here I mean primarily the opposite of object types, but I want to know whether being scalar is a real property of strings or whether we just call them scalar in programming because their behavior resembles that of primitive types.
This leaves me with the question: are strings of text scalar values from the point of view of theoretical computer science?
programming-languages strings
Robert Mugattarov
$\begingroup$ What's the context in which this came up? What definition of scalar are you using? Why do you want to know whether it's a scalar or not -- how will you use the answer? $\endgroup$ – D.W.♦ Sep 16 '15 at 15:49
$\begingroup$ What exactly do you mean by "scalar" here? Why do you think TCS would talk about the same thing as programming languages when they say "string"? $\endgroup$ – Raphael♦ Sep 16 '15 at 15:52
$\begingroup$ @D.W. In programming scalar types are often seen as the opposite of object types and strings are usually counted as scalar, because they, as well as primitive types, have natural ordering, and thus can be compared and tested for equality by means of the language itself, as opposed to object types, which would usually require custom logic for those operations. The reason I want to know whether strings are scalar is curiosity - I have always thought that strings do not exactly belong with the primitive types. $\endgroup$ – Robert Mugattarov Sep 16 '15 at 16:21
$\begingroup$ @RobertMugattarov, OK: I suggest that you edit the question to provide that context & clarification. That will definitely help attract better answers. Please don't just leave clarifications in the comments; edit the question. Questions should stand on their own -- people shouldn't have to read the comments to understand what you're asking. (However, I'm not sure what question that leaves... it sounds like you've answered your own question.) $\endgroup$ – D.W.♦ Sep 16 '15 at 16:24
$\begingroup$ Doesn't this depend entirely on the programming language being used? In C, for example, there are no objects, so I guess everything's a scalar; in Java, strings are objects. $\endgroup$ – David Richerby Sep 16 '15 at 19:32
I'm used to hearing the word "scalar" used in physics and linear algebra and such, not theoretical computer science. In that context, we're interested in distinguishing between a scalar and a vector: e.g., an element of $\mathbb{R}$ vs an element of $\mathbb{R}^d$. The word was invented to help us talk about that distinction.
However, that distinction doesn't seem particularly relevant here.
I haven't heard the word "scalar" used much in theoretical computer science.
To the extent that we invented a meaning for "scalar" in theoretical computer science, I'd probably reserve it for values that can be stored in $O(1)$ space, and where basic operations on those values can be performed in $O(1)$ time. For instance, an int would qualify under this. If that's the definition you have in mind, a string doesn't qualify, as it is not a constant-space item and operations on it can't be done in constant time.
$\begingroup$ Regarding your last paragraph, that would only work out in the unit-cost model. Also, how do integers and strings really differ? $\mathbb{Z}$ and $\Sigma^*$ are isomorphic, after all (as long as $\Sigma$ is finite). $\endgroup$ – Raphael♦ Sep 16 '15 at 16:13
$\begingroup$ @Raphael, Yup. I was thinking of the RAM model -- or just measuring how long a real processor takes. By this definition an int would be a scalar (because it is fixed width; e.g. 32 bits or 64 bits wide), but an integer wouldn't be a scalar (because it is unbounded). However, I haven't heard the word "scalar" used much in computer science, and when I have heard it used, I haven't heard it be defined formally, so I don't know if this is an accepted definition -- this is just how I'd use it, if someone pointed a gun at me and forced me to use the word. $\endgroup$ – D.W.♦ Sep 16 '15 at 16:16
$\begingroup$ I'd (try to) take away their gun, but fair enough. ;) $\endgroup$ – Raphael♦ Sep 17 '15 at 9:08
So, I think scalar and object aren't particularly well defined here. The way I think of it is that a scalar is a primitive type that can't be broken down, and an object type is a composite type: it is built up from one or more other types.
A string is not inherently scalar, and how they are treated depends much on your choice of programming language.
At one end, we have Haskell, where String is literally a synonym for [Char]: that is, a string is just a list of characters. In this sense, Char is a primitive type, and String is most definitely NOT scalar.
C is similar, where a string is just a pointer to an array of char, and C++ string objects are clearly not scalar since they are, well, objects.
However, some languages, like Java and JavaScript, distinguish strings from lists of characters. The main reason for doing this is efficiency. They make string its own primitive type, and do some weird stuff internally with interning, so that (in some cases) a string is only stored in one place in memory, and comparing two strings is as easy as checking pointer equality.
I would argue that this is more of an implementation detail than anything. Usually, there is some sort of isomorphism between String and a sequence of Char, and there are non-scalar-style operations, like getting the nth character of a string. If you can do that, in some way your string is linked to a collection of characters, making it non-scalar.
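For example (Python here purely for illustration, since the point isn't tied to any one language), a string readily exposes its internal structure, which is exactly what a scalar is not supposed to do:

    s = "Foo"
    n = 42

    print(s[1])      # 'o'  -- the "nth character" operation mentioned above
    print(list(s))   # ['F', 'o', 'o']  -- the string decomposes into characters
    print(len(s))    # 3

    # An int exposes no comparable internal structure; the language treats it
    # as an indivisible value (there is no n[1]).
    print(n + 1)     # 43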
Likewise, the fact that you can express string literals doesn't necessarily make them scalar. In Haskell, "Foo" is just syntactic sugar for 'F':'o':'o':[] i.e. the linked list of characters.
jmite
\begin{document}
\title[Minimal epimorphic subgroups]{On minimal epimorphic subgroups in simple algebraic groups of rank $2$} \author{I.I. Simion} \address[I.I. Simion]
{ Department of Mathematics\\
Babe{\c s}-Bolyai University\\
Str. Ploie\c sti 23-25, 400157, Cluj-Napoca, Rom\^ania}
\email{[email protected]} \author{D.M. Testerman} \address[D.M. Testerman]
{ Institute of Mathematics\\
\'Ecole Polytechnique F\'ed\'erale de Lausanne,
Station 8, Lausanne, CH-1015, Switzerland
}
\email{[email protected]} \thanks{Both authors acknowledge the support of the Swiss National Science Foundation through grant number ${\rm{IZSEZ0}}\textunderscore190091$. I.I. S. was also supported by a grant of the Romanian Ministry of Research, Innovation and Digitalization, CNCS/CCCDI–UEFISCDI, project number PN-III-P4-ID-PCE-2020-0454, within PNCDI III} \subjclass[2020]{Primary 20G05; Secondary 20G07} \keywords{simple linear algebraic group, epimorphic subgroup} \maketitle
\begin{abstract}
The category of linear algebraic groups admits non-surjective epimorphisms. For simple algebraic groups of rank $2$ defined over algebraically closed fields, we show that the minimal dimension of a closed epimorphic subgroup is $3$. \end{abstract}
\section{Introduction}
Let $\phi:H\rightarrow G$ be a homomorphism of linear algebraic groups defined over an algebraically closed field. The map $\phi$ is an epimorphism if it admits right cancellation, i.e. whenever $\psi_1\circ\phi=\psi_2\circ\phi$ for homomorphisms $\psi_1,\psi_2$ we have $\psi_1=\psi_2$. An \emph{epimorphic subgroup} $H\subseteq G$ is a subgroup for which the inclusion map is an epimorphism. If $\phi$ is a non-surjective epimorphism then $\phi(H)$ is a proper epimorphic subgroup of $G$.
A study of epimorphic subgroups in linear algebraic groups was initiated by Bien and Borel in \cite{BB1,BB2} where criteria for recognizing epimorphic subgroups are established (see \S\ref{subsec:criteria}). Their work follows \cite{Bergman}, where non-surjectivity of epimorphisms is studied for Lie algebras, and extends the list of categories for which the difference between surjectivity and epimorphicity was studied in \cite{Reid}. It follows from \cite{BB1} that minimal epimorphic subgroups are solvable and that maximal proper epimorphic subgroups are parabolic subgroups. Presently, a classification of epimorphic subgroups appears out of reach.
While maximal epimorphic subgroups are well understood, the current state of knowledge on minimal dimensional epimorphic subgroups is less satisfactory. In case $G$ is a simple algebraic group, the minimal dimension of such subgroups appears to be $3$. If the ground field is ${\mathbb C}$, this is shown to hold by Bien and Borel in \cite{BB1} (see \S\ref{subsec:criteria}). Here we consider simple groups of rank $2$ defined over fields of positive characteristic. \begin{thm}
\label{thm:epi}
For a simple algebraic group of rank $2$ defined over an algebraically closed field of characteristic $p>0$, the minimal dimension of a closed epimorphic subgroup is $3$. \end{thm} In general, for arbitrary rank, we expect that there are no $2$-dimensional closed epimorphic subgroups in simple algebraic groups. The existence of $3$-dimensional such subgroups is studied in \cite{DonnaAdam}.
\begin{comment}One characterization of epimorphic subgroups $H$ of $G$ says that on any rational $G$-module the fixed points of $G$ coincide with the fixed points of $H$ (see \S\ref{subsec:criteria}), in other words $H^{0}(G,V)=H^{0}(H,V)$ (see \cite[I\S4]{Jantzen_Reductive}) for any rational $G$-module. Moreover, if $H$ is not epimorphic, Lemma \ref{lem:induced} shows that there is an induced module $V$ on which the fixed points don't coincide. Thus we have the following direct consequence of Theorem~\ref{thm:epi}.
\begin{cor}
Let $G$ be a simple algebraic group of rank $2$ over an algebraically closed field of characteristic $p>0$. If $H$ is a closed subgroup of $G$ of dimension at most $2$ then there exists a nonzero dominant weight $\lambda$ for which
$$
H^{0}(H,H^{0}(\lambda))\neq 0.
$$
\end{cor} \end{comment}
For group schemes of finite type over a field, Brion \cite{Brion2017} gives a characterization of epimorphic subgroups and generalizes some of the criteria in \cite{BB1} for subgroups to be epimorphic.
In the context of Hopf algebras the analogous problem is that of non-injective monomorphisms. Non-surjective epimorphisms and non-injective monomorphisms of Hopf algebras are studied in \cite{Chirvasitu} and \cite{Agore}.
In \S\ref{sec:preliminaries} we recall known facts on epimorphic subgroups and fix the notation needed in the rest of the paper. In particular, in Corollary~\ref{cor:up_to_isogeny} we show that one may restrict to the consideration of simply connected groups $G$. Then, in Proposition~\ref{prop:non-existence}, for the groups $\operatorname{SL}_3(k)$, $\operatorname{Sp}_4(k)$ and $G_2(k)$ we establish the nonexistence of closed connected epimorphic subgroups of dimension at most $2$. For the existence statement, we exhibit such groups of dimension $3$ in Section~\ref{sec:existence}. Our calculations use the structure constants available in \textsc{GAP} \cite{GAP}. All calculations were checked both by hand and with \textsc{GAP}.
\section{Preliminaries} \label{sec:preliminaries}
Throughout, $G$ denotes a linear algebraic group over an algebraically closed field $k$ of characteristic $p\geq 0$.
\subsection{Criteria for epimorphic subgroups} \label{subsec:criteria} For a closed subgroup $H$ of $G$, the algebra $k[G]^{H}$ of regular functions on $G$ which are invariant under the right action of $H$ on $G$ identifies with the algebra $k[G/H]$ of regular functions on $G/H$. Bien and Borel established the following theorem.
\begin{thm}[{\cite[Th\'eor\`eme 1]{BB1}}]
\label{thm:epi_characterization}
Let $G$ be a connected linear algebraic group and let $H$ be a closed subgroup of $G$. The following conditions are equivalent:
\begin{enumerate}[{\rm (1)}]
\item\label{item:BB1} The subgroup $H$ is epimorphic in $G$.
\item\label{item:BB2} We have $k[G/H]=k$.
\item\label{item:BB3} The $k$-vector space $k[G/H]$ has finite dimension.
\item\label{item:BB4} The fixed points of $H$ in any rational $G$-module coincide with the fixed points of $G$.
\item\label{item:BB5} Any direct sum decomposition of a rational $G$-module $V$ as $H$-module is a direct sum decomposition of $V$ as $G$-module.
\end{enumerate} \end{thm}
As a consequence of the above theorem and of the definition, for a subgroup $H$ of $G$ we have the following criteria for epimorphicity (see \cite[\S2]{BB1}).
\begin{enumerate}[{\rm (1)}]
\setcounter{enumi}{5}
\item\label{item:BB6} For subgroups $H\subseteq L\subseteq G$, if $H$ is epimorphic in $L$ and $L$ is epimorphic in $G$ then $H$ is epimorphic in $G$.
\item\label{item:BB7} The subgroup $H$ is epimorphic in $G$ if and only if the connected component of the identity $H^{\circ}$ or a Borel subgroup of $H^{\circ}$ is epimorphic in $G$.
\item\label{item:BB8} If $G$ is simple, a proper subgroup $H$ is epimorphic if and only if the radical of $H^{\circ}$ is epimorphic in $G$.
\item\label{item:BB9} If $(G_{i})_{i\in I}$ is a family of closed subgroups which generate $G$ and if $H\cap G_{i}$ is epimorphic in $G_i$ for each $i\in I$ then $H$ is epimorphic in $G$. \end{enumerate}
The characterization in Theorem~\ref{thm:epi_characterization} shows that if $H$ is epimorphic in $G$ then $G/H$ is close to being projective. At the other extreme, the following result, established independently by Richardson and by Cline, Parshall and Scott, provides a criterion for determining when $G/H$ is affine.
\begin{thm}[{\cite[Theorem A]{Richardson77} and \cite[Corollary 4.5]{CPS77}}]
\label{thm:aff_characterization}
Let $H$ be a closed subgroup of the reductive group $G$.
Then $G/H$ is an affine variety if and only if $H$ is reductive. \end{thm}
\begin{cor}
\label{cor:reductive_proper}
Let $H$ be a closed subgroup of the reductive group $G$.
If $H$ is reductive and $\dim(H)\neq\dim (G)$ then
no closed subgroup of $H$ is epimorphic in $G$. \end{cor} \begin{proof}
Since $H$ is reductive, $G/H$ is an affine variety by Theorem~\ref{thm:aff_characterization}, and since $\dim(H)\neq\dim (G)$ it has positive dimension, so the dimension of $k[G/H]$ is not finite. Thus $H$ is not epimorphic in $G$ by Theorem~\ref{thm:epi_characterization}.(3). If $M$ is a closed subgroup of $H$ then
$$
k[G/M]\cong k[G]^{M}\supseteq k[G]^{H}\cong k[G/H],
$$
hence $M$ cannot be epimorphic in $G$.
\end{proof}
The following consequence of Theorem~\ref{thm:aff_characterization} was observed by Bien and Borel in \cite{BB1}. The proof follows from the fact that for $k={\mathbb C}$ any two-dimensional, non-abelian, non-unipotent closed connected subgroup of a simple algebraic group over $k$ lies in an $A_1$-type subgroup.
\begin{cor}
\label{cor:char_0}
There are no two-dimensional closed epimorphic subgroups in a simple algebraic group of rank at least $2$ defined over ${\mathbb C}$. \end{cor}
In the rest of the paper we assume that the characteristic of the ground field is positive, i.e. $p>0$.
\begin{comment}The proof of the following Lemma was pointed out to us by George McNinch.
\begin{lem}\label{lem:induced}
Let $G$ be a connected reductive algebraic group and let $H\subseteq G$ be a nonepimorphic subgroup of $G$. Then
there exists a nonzero dominant weight $\lambda$ for which the induced module $H^0(\lambda)$ has a nonzero fixed point for $H$.
\end{lem}
\begin{proof} We use the fact, \cite[II, 4.20]{Jantzen_Reductive}, that $k[G]$ has a good filtration, and that $H^0(0)$ occurs with multiplicity $1$ in this filtration. Now $H^0(0) = k$ is a submodule of $k[G]$ and has a good filtration as well. Thus by \cite[II, 4.17]{Jantzen_Reductive}, $k[G]/k$ also has a good filtration, none of whose factors are trivial. Let $0\subsetneq E_1\subsetneq E_2\cdots\subsetneq k[G]$ be such a good filtration with $E_1 = H^0(0)$, and for $i\geq 1$, $E_{i+1}/E_i\cong H^0(\lambda_i)$ for some dominant weight $\lambda_i\ne 0$.
Now since $H$ is not epimorphic in $G$, there exists $f\in k[G]^H$, $f$ nonconstant. Choose $i\geq 2$ minimal such that $f\in E_i$.
Then $f+E_{i-1}$ is a nonzero fixed point for $H$ in $E_i/E_{i-1}\cong H^0(\lambda_i)$, proving the claim. \end{proof}
\end{comment}
\subsection{Isogenies of $G$}
The proof of part (2) of the following proposition was communicated to us by Michel Brion.
\begin{prop}
\label{lem:up_to_bij_hom}
Let $\phi:G_1\rightarrow G_2$ be a surjective homomorphism of algebraic groups.
\begin{enumerate}[{\rm (1)}]
\item If $H_1$ is a closed epimorphic subgroup of $G_1$ then $\phi(H_1)$ is epimorphic in $G_2$.
\item If $H_2$ is a closed epimorphic subgroup of $G_2$ then $\phi^{-1}(H_2)$ is epimorphic in $G_1$.
\end{enumerate} \end{prop} \begin{proof}
Let $H_2$ denote the image $\phi(H_1)$ of the closed epimorphic subgroup $H_1$ of $G_1$. Any rational $G_2$-module $V$ is a rational $G_1$-module via $\phi$, thus $V^{H_2}=V^{H_1}=V^{G_1}=V^{G_2}$. Hence $H_2$ is epimorphic in $G_2$.
Suppose now that $H_2$ is a closed epimorphic subgroup in $G_2$ and denote $\phi^{-1}(H_2)$ by $H_1$. The map $\phi$ induces a bijective morphism
of varieties $G_1/H_1\rightarrow G_2/H_2$. Thus, the function field $k(G_1/H_1)$ is a purely inseparable finite extension of $k(G_2/H_2)$. Hence, since $p>0$, there exists a $p$-power $q$ such that $k(G_1/H_1)^q$ lies in $k(G_2/H_2)$. In particular $k[G_1/H_1]^q$ lies in $k(G_2/H_2)$. Since $G_2/H_2$ is a normal variety (see \cite[Lemma 5.3.4]{Springer}), it follows that $k[G_1/H_1]^q$ lies in $k[G_2/H_2]=k$. Hence, $k[G_1/H_1]=k$. \end{proof}
In what follows, we refer to the isomorphisms of abstract groups described in \cite[Theorem 28]{Steinberg} as \emph{exceptional isogenies}. These are non-separable bijective endomorphisms of simple algebraic groups and occur for groups of type $B_2$ and $F_4$ when $p=2$, and for groups of type $G_2$ when $p=3$.
\begin{cor}
\label{cor:up_to_isogeny}
Let $G$ be a simple algebraic group and let $H$ be a closed subgroup of $G$.
\begin{enumerate}[{\rm (1)}]
\item For any isogeny $\phi : G \to G'$, the subgroup $H$ is epimorphic in $G$ if and only if $\phi(H)$ is epimorphic in $G'$.
\item The dimension of a minimal closed epimorphic subgroup of $G$ is independent of the isogeny type of $G$.
\end{enumerate} \end{cor}
\subsection{Root system} \label{subsec:root_system} Throughout we fix a maximal torus $T$ and a Borel subgroup $B$ of $G$, with $T\subseteq B$, and denote by $U$ the unipotent radical of $B$. The root system $\Phi$ is with respect to $T$ and the positive roots and the simple roots are with respect to $U$. For $\operatorname{SL}_3(k)$ the positive roots are $\alpha_1,\alpha_2,\alpha_3=\alpha_1+\alpha_2$, for $\operatorname{Sp}_4(k)$ the positive roots are $\alpha_1,\alpha_2,\alpha_3=\alpha_1+\alpha_2,\alpha_4=\alpha_1+2\alpha_2$ and for $G_2(k)$ they are $$ \alpha_1, \alpha_2, \alpha_3=\alpha_2+\alpha_1, \alpha_4=\alpha_2+2\alpha_1, \alpha_5=\alpha_2+3\alpha_1, \alpha_6=2\alpha_2+3\alpha_1. $$ The set of coroots is denoted by $\Phi^{\vee}$ and $\alpha_{i}^{\vee}:{\mathbb G}_m(k)\rightarrow T$ is the simple coroot corresponding to the simple root $\alpha_i$. Furthermore, we write $\omega_i$ for the fundamental dominant weight corresponding to the simple root $\alpha_i$, for $i=1,2$, and $s_\alpha$ for the reflection in the Weyl group $N_G(T)/T$ associated to the root $\alpha$. In addition, for groups of type $A_1$, we will identify the set of dominant weights with the set of nonnegative integers.
\section{Nonexistence of $2$-dimensional closed epimorphic subgroups} \label{sec:non-existence}
In this section we show that there are no epimorphic subgroups of dimension at most $2$ in simple algebraic groups of rank $2$. By Corollary~\ref{cor:up_to_isogeny}, it suffices to consider simply connected groups. Thus we treat $\operatorname{SL}_3(k)$, $\operatorname{Sp}_4(k)$ and $G_2(k)$. Moreover, by \S\ref{subsec:criteria}.\eqref{item:BB7}, it suffices to show the nonexistence of closed connected epimorphic subgroups. If the dimension of such a subgroup is at most $2$, it is necessarily solvable. For a closed connected solvable subgroup $H$, we may assume that $H\subseteq B$. Then, the unipotent radical $U_H$ of $H$ lies in $U$ and we may fix a maximal torus $T_H$ of $H$ lying in $T$. Clearly, $H$ is the semidirect product $U_H\rtimes T_H$.
Some cases can easily be seen not to yield epimorphic subgroups.
\begin{lem}
\label{lem:mixed_2_dim}
Let $H$ be a closed subgroup of a simple algebraic group $G$ with $\dim H\leq 2$. If all elements in $H$ are semisimple or if all elements in $H$ are unipotent or if $H$ is abelian then $H$ is not epimorphic in $G$. \end{lem} \begin{proof}
By \S\ref{subsec:criteria}.(7), we may assume that $H$ is connected. If $H$ is abelian then either $H$ is a torus, or $H=H_u$ is unipotent, or $H$ is $2$-dimensional and $H=H_sH_u$ with $H_s$ a torus and $H_u\cong {\mathbb G}_a(k)$. In the latter two cases we notice that $H$ fixes $\operatorname{Lie}(H_u)$ but $G$ does not, so by Theorem~\ref{thm:epi_characterization}.(4) $H$ is not epimorphic in $G$. If $H$ is a torus then $H$ acts completely reducibly on every $G$-module, so by
  Theorem~\ref{thm:epi_characterization}.(5) it is not epimorphic in $G$. \end{proof}
By Lemma~\ref{lem:mixed_2_dim}, for closed subgroups $H$ of simple algebraic groups $G$, it suffices to treat the cases where $H=U_H\rtimes T_H$ is a proper, i.e. non-abelian, semidirect product with $\dim(U_H)=\dim(T_H)=1$. The arguments to follow make use of surjective homomorphisms $$ t:k^{\times}\rightarrow T_H\subseteq T \quad\text{and}\quad u:k\rightarrow U_H\subseteq U. $$ Here $t\in \operatorname{Hom}({\mathbb G}_m(k),T)$ is a cocharacter \cite[II,Ch1,\S1.6]{Jantzen_Reductive}. Thus, if $G$ is simply connected of rank $r$, then $\operatorname{Hom}({\mathbb G}_m(k),T)={\mathbb Z}\Phi^{\vee}$ and $$ t=m_1\alpha_1^{\vee}+m_2\alpha_2^{\vee}+\dots+m_r\alpha_r^{\vee}, $$ a linear combination of simple coroots with $m_1,\dots,m_r\in{\mathbb Z}$, i.e. for any $\lambda\in k^{\times}$ $$ t(\lambda)=\alpha_1^{\vee}(\lambda^{m_1})\alpha_2^{\vee}(\lambda^{m_2})\cdots \alpha_r^{\vee}(\lambda^{m_r}). $$ Describing the map $u$ onto the subgroup $U_H$ is more delicate. Since the ground field is algebraically closed, any $1$-dimensional closed connected unipotent group is isomorphic to ${\mathbb G}_{a}(k)$.
Let $N=|\Phi^{+}|$. We have an isomorphism between the varieties $k^N$ and $U$, defined by $$ (x_1,\dots,x_{N})\mapsto\prod_{i=1}^{N}u_{i}(x_i) $$ for a fixed (but arbitrary) ordering of the positive roots and for fixed isomorphisms onto the root subgroups $u_{i}:k\rightarrow U_{\alpha_i}$. Thus, by \cite[Lemma 3.6]{Hartshorne}, for the morphism of varieties $u:k\rightarrow U$ we have \begin{equation} \label{u_parametrization} u(x)=\prod_{i=1}^{N} u_{i}(P_i(x)) \end{equation} for polynomials $P_i(x)$. Notice that while the maps $t$ and $u$ may not be isomorphisms, we may assume that our subgroup $H$ lies in the image of some $\mu\circ(t\times u)$ where $\mu$ is the multiplication map in $G$. Moreover, imposing the condition that $\mu\circ(t\times u)$ is a homomorphism of groups we obtain an isogeny onto $H$.
Now, since $U_{H}\cong{\mathbb G}_a(k)$, there is an integer $m\neq 0$ such that for any $\lambda\in k^{\times}$ and $x\in k$ we have
$$ {}^{t(\lambda)}u(x)=u(\lambda^{m} x). $$
Hence, for all $\lambda$ and $x$ we have $$
\prod_i u_i(P_i(\lambda^{m}x))=u(\lambda^{m}x)={}^{t(\lambda)}u(x)=\prod_i u_i(\lambda^{n_i}P_i(x)),
$$ for some integers $n_i$. Therefore $P_i(\lambda^{m}x)=\lambda^{n_i}P_i(x)$ for each $i$, thus $P_i(x)=c_ix^{q_i}$ and \begin{equation} \label{unipotent_poly} u(x)=\prod_i u_i(c_ix^{q_i}) \end{equation} for some constants $c_i\in k$ and integers $q_i\geq 0$. In the rest of the paper we focus on groups of rank $2$. Henceforth we assume that $$G\text{ is simple of rank }2.$$
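For instance, for $G=\operatorname{SL}_3(k)$, with $T$ the maximal torus of diagonal matrices, we have $\alpha_1^{\vee}(\lambda)=\operatorname{diag}(\lambda,\lambda^{-1},1)$ and $\alpha_2^{\vee}(\lambda)=\operatorname{diag}(1,\lambda,\lambda^{-1})$, so that
$$
t(\lambda)=\operatorname{diag}(\lambda^{m_1},\lambda^{m_2-m_1},\lambda^{-m_2})
\quad\text{and}\quad
u(x)=u_1(c_1x^{q_1})u_2(c_2x^{q_2})u_3(c_3x^{q_3})
$$
for the ordering $\alpha_1,\alpha_2,\alpha_3$ of the positive roots.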
By the above, the map $u$ is determined by $c_1,\dots,c_{N}\in k$ and by the integers $q_1,\dots,q_{N}\geq 0$, while the map $t$ is determined by $m_1,m_2\in{\mathbb Z}$. Imposing the condition that $\mu\circ(t\times u)$ is a homomorphism of groups translates into conditions on these parameters. In Lemma~\ref{lem:structure} we pin down the possible values for these parameters and subsequently, in Proposition~\ref{prop:non-existence}, we use this to show that the corresponding subgroups are not epimorphic. For such an analysis we need the following lemmas.
\begin{lem} \label{lemma0} Let $q_1,\dots,q_5$ be nonnegative integers with $q_1$ and $q_2$ powers of $p$. Let $c,c_1,c_2\in k$ with $c_1,c_2\neq 0$ and let $z\geq 1$ be an integer. Consider the polynomial $P=c(a+b)^{z}-ca^{z}-cb^{z}$ in $a$ and $b$.
\begin{enumerate}[{\rm (1)}] \item If $P=c_1a^{q_2}b^{q_1}$ then $z=2q_1=2q_2$, $p\neq 2$ and $c=c_1/2$. \item If $P=c_1\left(a^{q_1}b^{2q_1}+a^{2q_1}b^{q_1}\right)$ then $z=3q_1$, $p\neq 3$ and $c=c_1/3$. \item If $P=c_1\left( 2a^{q_1}b^{3q_1}+3a^{2q_1}b^{2q_1}+2a^{3q_1}b^{q_1}\right)$ and $p\neq 2$ then $z=4q_1$ and $c=c_1/2$. \item If $P=c_1\left(a^{q_1}b^{4q_1}+2a^{2q_1}b^{3q_1}+2a^{3q_1}b^{2q_1}+a^{4q_1}b^{q_1}\right)$ and $p\neq 2$ then $z=5q_1$, $p\neq 5$, $c=c_1/5$. \item If $P=c_1a^{q_3}b^{2q_1}+c_2a^{q_4}b^{q_1}$ then $z=3q_1=3q_3$, $q_4=2q_1$, $p\neq 3$, and $c=c_1/3=c_2/3$. \item If $P=c_1a^{q_4}b^{q_1}+c_2a^{q_5}b^{q_2}$ then one of the following holds:
\begin{enumerate}
\item[{\rm (I)}] $z$ is a power of $p$, $q_4=q_5$, $q_1=q_2$ and $c_1+c_2=0$, or
\item[{\rm (II)}]$z=2q_1$, $q_1=q_2=q_4=q_5$, $p\neq 2$ and $c=(c_1+c_2)/2$, or
\item[{\rm (III)}] $q_5=q_1\neq q_2=q_4$.
\end{enumerate} \end{enumerate} \end{lem}
\begin{proof} Notice that in (1)--(5) the constant $c$ is different from zero. Let $z=xy$ with $y$ the highest power of $p$ dividing $z$. Notice also that for (1)--(5) we have $x>1$ since $c_1,c_2\neq 0$. Since $$ (a+b)^{z}=(a+b)^{xy}=(a^{y}+b^{y})^{x}=a^{xy}+xa^{y}b^{y(x-1)}+\dots +xa^{y(x-1)}b^{y}+b^{xy} $$ and since $x$ is prime to $p$, one of the monomials in the expression of $P$ must equal $xa^{y}b^{y(x-1)}$ in each case. Thus, for (1) we have $y(x-1)=y$, so $x=2$ and $q_1=q_2$. Hence $p\neq 2$ and $c=c_1/2$. For (2) we have $a^{y}b^{y(x-1)}=a^{q_1}b^{2q_1}$, hence $y=q_1$ and $x=3$. Therefore $p\neq 3$ and $c=c_1/3$. For (3) we have $a^{y}b^{y(x-1)}=a^{q_1}b^{3q_1}$, hence $y=q_1$, $x=4$ and $c=c_1/2$. For (4) we have $a^{y}b^{y(x-1)}=a^{q_1}b^{4q_1}$, hence $y=q_1$ and $x=5$. Therefore $p\neq 5$ and $c=c_1/5$. For (5) we have $a^{y}b^{y(x-1)},a^{y(x-1)}b^{y}\in \{a^{q_3}b^{2q_1},a^{q_4}b^{q_1}\}$. Hence $y=q_1$, $x=3$, $q_3=q_1$ and $q_4=2q_1$. Therefore $p\neq 3$ and $c=c_1/3=c_2/3$.
For (6), if $q_1=q_2$, $q_4=q_5$ and $c_1+c_2=0$ then $z$ is a power of $p$. Else $P\neq 0$ and we have $a^{y}b^{y(x-1)},a^{y(x-1)}b^{y}\in \{a^{q_4}b^{q_1},a^{q_5}b^{q_2}\}$. Since $p\mid {x\choose i}\Leftrightarrow p\mid {x\choose x-i}$, we have $q_4=q_2$ and $q_5=q_1$ which are powers of $p$. If $q_1\neq q_2$ we are in Case (III). For $q_1=q_2$ we have $$ c(a+b)^{z}-ca^{z}-cb^{z} = (c_1+c_2)a^{q_1}b^{q_1}. $$ Then, if $c_1+c_2=0$ we are in Case (I). Else, if $c_1+c_2\neq 0$, by (1), $p\neq 2$, $c=(c_1+c_2)/2$ and $z=2q_1$. This is Case (II). \end{proof}
\begin{lem} \label{lemmap} Let $p$ be a prime and $f\geq 1$ an integer. Then $$ \frac{2^{f+2}-1}{3},\frac{2^{f+1}+1}{3},\frac{p^f+1}{2},\frac{p^{f+1}+3}{2},\frac{3p^f+1}{2} $$ are not powers of $2,2,p,p,p$ respectively. \end{lem} \begin{proof}
If for some integer $m$ we have $2^{m}=(2^{f+2}-1)/3$,
then $m\geq f$ and $2^f(3\cdot 2^{m-f}-4)=-1$ which is not possible. The second case is similar.
If there is $m$ such that $2p^{m}=p^f+1$
then $p\neq 2$, $m\geq f$ and $p^f(2p^{m-f}-1)=1$.
Thus, $f$ has to equal $0$, a contradiction.
If there is $m$ such that $2p^{m}=p^{f+1}+3$
then $p\neq 2$, $m\geq f+1$ and $p^{f+1}(2p^{m-f-1}-1)=3$ which is not possible.
If there is $m$ such that $2p^{m}=3p^f+1$
then $p\neq 2$, $m\geq f$ and $p^f(2p^{m-f}-3)=1$.
Thus, $f$ has to equal $0$, a contradiction. \end{proof}
\begin{lem}
\label{lem:root_group_case}
If $U_H$ is a root subgroup of $G$ then $H$ is not epimorphic in $G$. \end{lem} \begin{proof}
If $U_H$ is a root group, $H$ lies in a reductive proper subgroup of $G$, namely a proper Levi subgroup, and the claim follows from Corollary~\ref{cor:reductive_proper}.
\end{proof}
\begin{lem}
\label{cr_is_one}
Let $u$ be as in \eqref{unipotent_poly}. For any two distinct roots $\alpha_i$ and $\alpha_j$, conjugating by an element in $T$ we may assume that $c_i=c_j=1$. \end{lem} \begin{proof} This is a consequence of the fact that two distinct positive roots are linearly independent in the character group $X(T)$, that $\dim(T)\geq 2$ and that the field is algebraically closed. \end{proof}
We are now in a position to describe the possibilities for a two-dimensional closed connected non-abelian subgroup $H$ (of a rank $2$ simply connected simple algebraic group) which contains both unipotent and semisimple elements. In view of the discussion at the beginning of the section, we may describe such a group by indicating the tuples $(q_1,q_2,\dots)$, $(c_1,c_2,\dots)$ and $(m_1,m_2)$.
\begin{lem} \label{lem:structure} If $U_H$ is not a root group, then the only possibilities for $u:k\rightarrow U_H$ and $t:k^{\times}\rightarrow T_H$, up to conjugacy and exceptional isogenies, are given in Tables $\ref{tab:A2_cases}$, $\ref{tab:B2_cases}$ and $\ref{tab:G2_cases}$ for $G=\operatorname{SL}_3(k)$, $\operatorname{Sp}_4(k)$, $G_2(k)$, respectively, where $m\neq 0$ is an integer, $q_i=p^{f_i}$ for some integers $f_i\geq 0$ and where $c_i\in k$ and $d_i\in k^{\times}$ for $1\leq i\leq 6$. \end{lem}
\begin{proof}
Note that since $T_H$ lies in $T$, it suffices to show that $U_H$ is conjugate by an element of $N_G(T)$ to some group in the tables. In particular, we may assume that $U_H$ has a nontrivial projection onto $U_\alpha$, for some simple root $\alpha$.
\FloatBarrier \renewcommand{1.3}{1.3} \begin{table}[ht]
\begin{tabular}{| c | l | l | l | l |}
\hline
Case & $q_i$ & $c_1,c_2,c_3$ & $m_i/m$ & $p$\\
\hline
1 & $q_1,q_1,2q_1$ & $1,1,\frac{1}{2}$ & $q_1,q_1$ & $p\geq 3$\\
\hline
2 & $q_1,-,q_3$ & $1,0,1$ & $\frac{1}{3}(q_1+q_3),\frac{1}{3}(2q_3-q_1)$ & any \\
\hline
\end{tabular} \caption{$H$ in $\operatorname{SL}_3(k)$}
\label{tab:A2_cases} \end{table} \FloatBarrier
\noindent{\textbf{Cases for $\operatorname{SL}_3(k)$:}} Using the additivity of $u:k\rightarrow U_H$, $u(a)u(b)=u(a+b)$, and identifying the projections onto the root groups, we obtain the following equations
\begin{equation}
\label{eq:add_sys_a2}
\left\{
\begin{aligned}
&c_1(a+b)^{q_1}
=
c_1a^{q_1}+c_1b^{q_1}
\\
&c_2(a+b)^{q_2}
=
c_2a^{q_2}+c_2b^{q_2}
\\
&c_3(a+b)^{q_3}
=
c_3a^{q_3}+c_3b^{q_3}+c_1c_2a^{q_2}b^{q_1}.
\end{aligned}
\right.
\end{equation}
\medbreak
\noindent{\textbf{Case 1:}} If $c_1\neq 0$ and $c_2\neq 0$, by Lemma~\ref{cr_is_one} we may assume that $c_1=c_2=1$. Moreover $q_1$ and $q_2$ are powers of $p$. Thus, from the last equation in \eqref{eq:add_sys_a2} we see that $p\neq 2$, $q_1=q_2=q_3/2$ and $c_3=1/2$. The compatibility of the torus $T_H$ with $U_H$ restricts the possibilities for $t:k^{\times}\rightarrow T_H$ and we get that $m_1=m_2=mq_1$.
\medbreak
\noindent{\textbf{Case 2:}} If $c_1\neq 0$ and $c_2=0$, since we exclude root groups, we may assume that $c_3\neq 0$. By Lemma~\ref{cr_is_one}, we may assume that $c_1=c_3=1$. Again, $q_1$ and $q_3$ are powers of $p$ and the compatibility with $T_H$ implies that $m_1=m(q_1+q_3)/3$, $m_2=m(2q_3-q_1)/3$. The case where $c_1=0$ and $c_2\neq0$ is covered by conjugating by an element of $N_G(T)$.
\noindent{\textbf{Cases for $\operatorname{Sp}_4(k)$:}} Using the additivity of $u:k\rightarrow U_H$, $u(a)u(b)=u(a+b)$, and identifying the projections onto the root groups, we obtain the following system:
\begin{equation}
\label{eq:add_sys_b2}
\left\{
\begin{aligned}
&c_1(a+b)^{q_1}
=
c_1a^{q_1}+c_1b^{q_1}
\\
&c_2(a+b)^{q_2}
=
c_2a^{q_2}+c_2b^{q_2}
\\
&c_3(a+b)^{q_3}
=
c_3a^{q_3}+c_3b^{q_3}-c_1c_2a^{q_2}b^{q_1}
\\
&c_4(a+b)^{q_4}
=
c_4a^{q_4}+c_4b^{q_4}-c_1c_2^2a^{2q_2}b^{q_1}
-2c_2^2c_1a^{q_2}b^{q_1+q_2}+2c_2c_3a^{q_3}b^{q_2}.
\end{aligned}
\right.
\end{equation}
\noindent{\textbf{Case 1:}} If $c_1\neq 0$ and $c_2\neq 0$, by Lemma~\ref{cr_is_one} we may assume that $c_1=c_2=1$. Moreover, by the first two equations, $q_1$ and $q_2$ are powers of $p$. Applying Lemma~\ref{lemma0}.(1) we have $c_3\neq 0$, $p\neq 2$, $q_3=2q_1=2q_2$ and $c_3=-1/2$. Thus, the last equation in \eqref{eq:add_sys_b2} is $$ c_4a^{q_4}+c_4b^{q_4}-2a^{2q_1}b^{q_1}-2a^{q_1}b^{2q_1}=c_4(a+b)^{q_4}. $$ Since $p\neq 2$, from Lemma~\ref{lemma0}.(2) it follows that $p\neq 3$, $c_4=-2/3$ and $q_4=3q_1$. Imposing the compatibility with $T_H$, we get $m_1=2mq_1$, $m_2=3mq_1/2$.
\renewcommand{1.3}{1.3} \begin{table}[ht]
\begin{tabular}{| c | l | l | l | l |}
\hline
Case & $q_i$ & $c_1,c_2,c_3,c_4$ & $m_i/m$ & $p$\\
\hline
1 & $q_1,q_1,2q_1,3q_1$ & $1,1,-\frac{1}{2},-\frac{2}{3}$ & $2q_1,\frac{3}{2}q_1$ & $p\geq 5$\\
\hline
2 & $q_1,-,-,q_4$ & $1,0,0,1$ & $\frac{1}{2}(q_1+q_4),\frac{1}{2}q_4$ & any \\
\hline
3 & $q_1,-,q_1,q_1$ & $1,0,1,d_4$ & $q_1,\frac{1}{2}q_1$ & any \\
\hline
4 & $q_1,-,q_3,-$ & $1,0,1,0$ & $q_3,\frac{1}{2}(2q_3-q_1)$ & any \\
\hline
5 & $-,q_2,q_2,2q_2$ & $0,1,1,1$ & $q_2,q_2$ & $p\geq 3$ \\
\hline
6 & $-,q_2,q_2,2q_2$ & $0,1,1,d_4$ & $q_2,q_2$ & $p=2$ \\
\hline
\end{tabular} \caption{$H$ in $\operatorname{Sp}_4(k)$}
\label{tab:B2_cases} \end{table}
\noindent{\textbf{Case 2:}} Consider System \eqref{eq:add_sys_b2} with $c_1\neq 0$ and $c_2=c_3=0$. Since we exclude root groups, by Lemma~\ref{cr_is_one} we may assume that $c_1=c_4=1$, and we have that $q_1$, $q_4$ are $p$-powers. The compatibility with $T_H$ gives $m_1=m(q_1+q_4)/2$, $m_2=mq_4/2$. \medbreak \noindent{\textbf{Case 3:}} If $c_1,c_3,c_4\neq0$ and $c_2=0$ we may assume that $c_1=c_3=1$ by Lemma~\ref{cr_is_one}, and so $q_1$ and $q_3$ are $p$-powers. Since $c_4\neq 0$, the compatibility with $T_H$ yields $m_1=mq_3$, $m_2=mq_4/2$ and $2q_3=q_4+q_1$. The condition $2q_3=q_4+q_1$ implies that $q_1=q_4$, hence $q_1=q_3=q_4$. Indeed if $q_i=p^{f_i}$ and $q_4\geq q_1$, then $2q_3=q_1(p^{f_4-f_1}+1)$ and by Lemma~\ref{lemmap}, this equation doesn't have solutions unless $f_4=f_1$. \medbreak \noindent{\textbf{Case 4:}} Consider System \eqref{eq:add_sys_b2} with $c_1,c_3\neq0$ and $c_2,c_4=0$. Again we may assume that $c_1=c_3=1$ by Lemma~\ref{cr_is_one}, so $q_1$ and $q_3$ are again $p$-powers. From the compatibility with $T_H$ we get $m_1=mq_3$, $m_2=m(2q_3-q_1)/2$. Conjugating by an element of $N_G(T)$, we obtain also the case where $c_1=c_3=0$ and $c_2\neq 0$. Indeed, since we exclude root groups, in this case $c_4\neq 0$ and by Lemma~\ref{cr_is_one} we may assume that $c_2=c_4=1$. Conjugating with a representative of $s_{\alpha_2}s_{\alpha_1}$ in $N_G(T)$, we obtain the subgroups treated previously. \medbreak \noindent{\textbf{Cases 5,6:}} Let $c_1=0$, $c_2,c_3\neq 0$ and assume first that $p\neq 2$. Then $c_2=c_3=1$ by Lemma~\ref{cr_is_one}, and here we find that $q_2$ and $q_3$ are $p$-powers. By Lemma~\ref{lemma0}.(1) we have $c_4\neq 0$, $q_2=q_3$, $q_4=2q_2$ and $c_4=1$. The compatibility with $T_H$ gives $m_1=m(q_4-q_2)$, $m_2=mq_4/2$.
Now let $c_1=0$, $c_2,c_3\neq 0$ and $p=2$. Again, $c_2=c_3=1$ by Lemma~\ref{cr_is_one}. Moreover, System \eqref{eq:add_sys_b2} shows that $q_2$, $q_3$ and $q_4$ are powers of $p$ and $c_4$ is arbitrary. For $c_4=0$, the group $U_H$ is obtained from the group described in Case 2 by applying an exceptional isogeny which interchanges the long and the short root groups. For the case when $c_4\neq 0$, the compatibility with the torus gives $q_4=q_2+q_3$. Thus, since $q_2,q_3,q_4$ are powers of $p=2$, we must have $q_4=2q_2=2q_3$.
\renewcommand{1.3}{1.3} \begin{table}[ht]
\resizebox{350pt}{!}{
\begin{tabular}{| c | l | l | l | l |}
\hline
Case & $q_i$ & $c_1,c_2,c_3,c_4,c_5,c_6$ & $m_i/m$ & $p$\\
\hline
1 & $q_1,q_1,2q_1,3q_1,4q_1,5q_1$ & $1,1,\frac{1}{2},\frac{1}{3},\frac{1}{4},-\frac{1}{10}$ & $3q_1,5q_1$ & $\geq 7$\\
\hline
2 & $q_1,-,q_1,2q_1,3q_1,3q_1$ & $1,0,1,1,1,-2$ & $2q_1,3q_1$ & $\geq 5$ \\
\hline
3 & $q_1,-,q_1,2q_1,3q_1,-$ & $1,0,1,1,1,0$ & $2q_1,3q_1$ & $=2$ \\
\hline
4 & $q_1,-,-,q_1,2q_1,q_1$ & $1,0,0,1,\frac{3}{2},d_6$ & $q_1,q_1$ & $\geq 5$ \\
\hline
5 & $q_1,-,-,q_1,2q_1,-$ & $1,0,0,1,\frac{3}{2},0$ & $q_1,q_1$ & $\geq 5$ \\
\hline
6 & $q_1,-,-,-,2q_1,q_1$ & $1,0,0,0,1,d_6$ & $q_1,q_1$ & $=2$ \\
\hline
7 & $q_1,-,-,-,q_5,-$ & $1,0,0,0,1,0$ & $q_5-q_1,2q_5-3q_1$ & any \\
\hline
8 & $q_1,-,-,-,3q_1,3q_1$ & $1,0,0,0,1,d_6$ & $2q_1,3q_1$ & $=3$ \\
\hline
9 & $q_1,-,-,-,-,q_6$ & $1,0,0,0,0,1$ & $\frac{1}{2}(q_1+q_6),q_6$ & any \\
\hline
10 & $-,q_2,q_2,q_2,q_2,2q_2$ & $0,1,1,d_4,d_5,\frac{1}{2}d_5$ & $q_2,2q_2$ & $=3$ \\
\hline
11 & $-,q_2,q_2,q_2,q_2,2q_2$ & $0,1,1,d_4,d_4,d_6$ & $q_2,2q_2$ & $=2$ \\
\hline
12 & $-,q_2,q_2,q_2,q_2,-$ & $0,1,1,d_4,3d_4,0$ & $q_2,2q_2$ & $\neq 3$ \\
\hline
13 & $-,q_2,q_2,q_2,q_2,2q_2$ & $0,1,1,d_4,d_5,\frac{1}{2}(d_5-3d_4)$ & $q_2,2q_2$ & $\geq 5$ \\
\hline
14 & $-,q_2,q_2,q_2,-,2q_2$ & $0,1,1,d_4,0,-\frac{3}{2}d_4$ & $q_2,2q_2$ & $\geq 5$ \\
\hline
15 & $-,q_2,q_2,-,-,2q_2$ & $0,1,1,0,0,d_6$ & $q_2,2q_2$ & $=2$ \\
\hline
16 & $-,2q_3,q_3,-,-,q_3$ & $0,1,1,0,0,d_6$ & $0,q_3$ & $=2$ \\
\hline
17 & $-,q_2,-,q_2,q_2,2q_2$ & $0,1,0,1,d_5,\frac{1}{2}d_5$ & $q_2,2q_2$ & $\geq 3$ \\
\hline
18 & $-,q_2,-,q_2,-,2q_2$ & $0,1,0,1,0,d_6$ & $q_2,2q_2$ &$=2$ \\
\hline
19 & $-,3q_4,-,q_4,-,3q_4$ & $0,1,0,1,0,d_6$ & $q_4,3q_4$ &$=3$ \\
\hline
20 & $-,q_2,-,-,q_2,2q_2$ & $0,1,0,0,1,\frac{1}{2}$ & $q_2,2q_2$ & $\geq 3$ \\
\hline
21 & $-,q_2,-,-,-,q_6$ & $0,1,0,0,0,1$ & $\frac{1}{3}(2q_6-q_2),q_6$ & any \\
\hline
\end{tabular}
} \caption{$H$ in $G_2(k)$} \label{tab:G2_cases} \end{table}
\medbreak \noindent{\textbf{Cases for $G_2(k)$:}} Imposing the condition that $u:k\rightarrow U_H$ is a homomorphism of groups, $u(a+b)=u(a)u(b)$, we obtain the following equations:
\begin{equation}
\label{eq:add_sys_g2}
\left\{
\begin{aligned}
&c_1(a+b)^{q_1}
=
c_1a^{q_1}+c_1b^{q_1}
\\
&c_2(a+b)^{q_2}
=
c_2a^{q_2}+c_2b^{q_2}
\\
&c_3(a+b)^{q_3}
=
c_3a^{q_3}+c_3b^{q_3}+c_1c_2a^{q_2}b^{q_1}
\\
&c_4(a+b)^{q_4}
=
c_4a^{q_4}+c_4b^{q_4}+c_1^{2}c_2a^{q_2}b^{2q_1}+2c_1c_3a^{q_3}b^{q_1}
\\
&c_5(a+b)^{q_5}
=
c_5a^{q_5}+c_5b^{q_5}+c_1^{3}c_2a^{q_2}b^{3q_1}+3c_1^{2}c_3a^{q_3}b^{2q_1}
+3c_4c_1a^{q_4}b^{q_1}
\\
&c_6(a+b)^{q_6}
=
c_6a^{q_6}+c_6b^{q_6}
-c_1^3c_2^2a^{2q_2}b^{3q_1}
+c_2^2c_1^{3}a^{q_2}b^{3q_1+q_2}
\\
&\qquad
-3c_1^2c_2c_3a^{q_2+q_3}b^{2q_1}
-3c_1^2c_2c_3a^{q_2}b^{2q_1+q_3}
+3c_1^2c_2c_3a^{q_3}b^{2q_1+q_2}
\\
&\qquad
-3c_1c_3^2a^{2q_3}b^{q_1}
-6c_1c_3^2a^{q_3}b^{q_1+q_3}
+3c_1c_2c_4a^{q_4}b^{q_1+q_2}
\\
&\qquad
-3c_4c_3a^{q_4}b^{q_3}
+c_2c_5a^{q_5}b^{q_2}.
\end{aligned}
\right.
\end{equation}
\medbreak
\noindent{\textbf{Case 1:}}
Consider System \eqref{eq:add_sys_g2} with $c_1,c_2\neq 0$. By Lemma~\ref{cr_is_one}, we may assume that $c_1=c_2=1$, and $q_1, q_2$ are $p$-powers. The third equation in \eqref{eq:add_sys_g2} and Lemma~\ref{lemma0}.(1) show that $p\neq 2$, $c_3=1/2$ and $q_1=q_2=q_3/2$. Then, by the fourth equation and Lemma~\ref{lemma0}.(2), we have $p\neq 3$, $q_4=3q_1$ and $c_4=1/3$. Hence, by Lemma~\ref{lemma0}.(3), we must have $c_5=1/4$ and $q_5=4q_1$. Thus, the last two equations are $$ \left\{ \begin{aligned}
&\frac{1}{4}(a+b)^{q_5}
=
\frac{1}{4}a^{q_5}+\frac{1}{4}b^{q_5}+a^{q_1}b^{3q_1}+\frac{3}{2}a^{2q_1}b^{2q_1}+a^{3q_1}b^{q_1}
\\
&
c_6(a+b)^{q_6}
=
c_6a^{q_6}+c_6b^{q_6}
-\frac{1}{2}a^{q_1}b^{4q_1}
-a^{2q_1}b^{3q_1}
-a^{3q_1}b^{2q_1}
-\frac{1}{2}a^{4q_1}b^{q_1}. \end{aligned} \right. $$ By Lemma~\ref{lemma0}.(4), $p\neq 5$, $q_6=5q_1$ and $c_6=-1/10$. Then, imposing the compatibility condition with the map $t:k^{\times}\rightarrow T_H$, we get $m_1=3mq_1$, $m_2=5mq_1$. \medbreak \noindent{\textbf{Case 2:}} Let $c_1,c_3\neq 0$, $c_2=0$ and $p\neq 2$. We may assume that $c_1=c_3=1$ by Lemma~\ref{cr_is_one}, and $q_1,q_3$ are $p$-powers. By Lemma~\ref{lemma0}.(1), $q_1=q_3$, $q_4=2q_1$ and $c_4=1$ and System \eqref{eq:add_sys_g2} is $$ \left\{ \begin{aligned}
&(a+b)^{q_1}
=
a^{q_1}+b^{q_1}
\\
&(a+b)^{2q_1}
=
a^{2q_1}+b^{2q_1}+2a^{q_1}b^{q_1}
\\
&c_5(a+b)^{q_5}
=
c_5a^{q_5}+c_5b^{q_5}+3a^{q_1}b^{2q_1}+3a^{2q_1}b^{q_1}
\\
&c_6(a+b)^{q_6}
=
c_6a^{q_6}+c_6b^{q_6}
-3a^{2q_1}b^{q_1}
-6a^{q_1}b^{2q_1}
-3a^{2q_1}b^{q_1}. \end{aligned} \right. $$ If $p> 3$, by Lemma~\ref{lemma0}.(2) we have $c_5=1$, $q_5=3q_1$. Thus, by Lemma~\ref{lemma0}.(2), we have $c_6=-2$ and $q_6=3q_1$. The compatibility with $T_H$ gives $m_1=2mq_1$, $m_2=3mq_1$. If $p=3$, we have four cases according to whether or not $c_5$ and $c_6$ are zero. Moreover, there is an exceptional isogeny $\phi$ interchanging the long and the short root groups. If $c_5,c_6\neq 0$ then $U_H$ maps under $\phi$ to a corresponding group described in Case 10. The two cases $(c_5=0,c_6\neq 0)$ and $(c_5\neq 0,c_6= 0)$ are conjugate by an element of $N_G(T)$ corresponding to a reflection $s_{\alpha_2}$. Consider the case $(c_5=0,c_6\neq 0)$ and notice that by System \eqref{eq:add_sys_g2}, $q_6$ is also a power of $p=3$. Notice also, that the compatibility with $T_H$ gives $q_6=3q_1$. Thus, in this case $U_H$ is mapped under $\phi$ to a corresponding group described in Case 17. The case $(c_5=0,c_6=0)$ is covered by Case 20 if we apply $\phi$. \medbreak \noindent{\textbf{Case 3:}} Let $c_1,c_3\neq 0$, $c_2=0$ and $p=2$. As before, we may assume that $c_1=c_3=1$ and $q_1,q_3$ are $p$-powers. System \eqref{eq:add_sys_g2} is $$ \left\{ \begin{aligned}
&(a+b)^{q_1}
=
a^{q_1}+b^{q_1}
\\
&(a+b)^{q_3}
=
a^{q_3}+b^{q_3}
\\
&c_4(a+b)^{q_4}
=
c_4a^{q_4}+c_4b^{q_4}
\\
&c_5(a+b)^{q_5}
=
c_5a^{q_5}+c_5b^{q_5}+3a^{q_3}b^{2q_1}+3c_4a^{q_4}b^{q_1}
\\
&c_6(a+b)^{q_6}
=
c_6a^{q_6}+c_6b^{q_6}
-3a^{2q_3}b^{q_1}
-3c_4a^{q_4}b^{q_3}. \end{aligned} \right. $$ By Lemma~\ref{lemma0}.(5), we get $q_4=2q_1$, $q_3=q_1$, $c_4=1$, $c_5=1$ and $q_5=3q_1$. Hence, if $c_6=0$, the compatibility with $T_H$ yields $m_1=2mq_1$, $m_2=3mq_1$. If $c_6\neq 0$, by the last equation of the above system, $q_6$ has to be a power of $p=2$, however, the compatibility with $T_H$ implies $q_6=3q_1$, a contradiction. \medbreak \noindent{\textbf{Cases 4,5:}} Assume now that $c_1,c_4\neq 0$ and $c_2=c_3=0$. Then $q_1$ and $q_4$ are $p$-powers. With Lemma~\ref{cr_is_one} we may assume that $c_1=c_4=1$ and System \eqref{eq:add_sys_g2} is $$ \left\{ \begin{aligned}
&(a+b)^{q_1}
=
a^{q_1}+b^{q_1}
\\
&(a+b)^{q_4}
=
a^{q_4}+b^{q_4}
\\
&c_5(a+b)^{q_5}
=
c_5a^{q_5}+c_5b^{q_5}+3a^{q_4}b^{q_1}
\\
&c_6(a+b)^{q_6}
=
c_6a^{q_6}+c_6b^{q_6}. \end{aligned} \right. $$ First assume that $p\neq 3$. Then by Lemma~\ref{lemma0}.(1), $p\neq 2$, $c_5=3/2$, $q_4=q_1$, $q_5=2q_1$ and $c_6$ is arbitrary. If $c_6\neq 0$, we get $m_1=m_2=mq_1$ and $q_6=q_1$ from the compatibility with $T_H$, while for $c_6=0$ we have $m_1=m_2=mq_1$.
Next consider the case where $p=3$. Then $q_5$ and $q_6$ are powers of $3$ if $c_5\neq 0$ and $c_6\neq 0$ respectively. Using an exceptional isogeny $\phi$, we see that the cases $(c_5=0,c_6\neq 0)$ and $(c_5=0,c_6=0)$ follow from Case 19 and Case 21 respectively. Indeed, case $(c_5=0,c_6=0)$ is easily checked while for $(c_5=0,c_6\neq 0)$, using the compatibility with $T_H$ and Lemma~\ref{lemmap}, it follows that $q_1=q_4=q_6$. For $(c_5\neq 0,c_6=0)$ and $(c_5\ne 0,c_6\neq 0)$, the compatibility with the torus gives $q_5=q_1+q_4$. Hence $q_5=2q_1=2q_4$, a contradiction with $p=3$. \medbreak \noindent{\textbf{Cases 6--9:}} Let $c_1\neq 0$ and $c_2=c_3=c_4=0$. Assume first that $c_5\neq 0$. By Lemma~\ref{cr_is_one}, we may assume that $c_1=c_5=1$. Considering System \eqref{eq:add_sys_g2}, we see that $q_1$ and $q_5$ are powers of $p$. If $c_6=0$, the compatibility with $T_H$ gives $m_1=m(q_5-q_1)$, $m_2=m(2q_5-3q_1)$.
Assume now that $c_6\neq 0$. Then $q_6$ is also a power of $p$ and from the compatibility with $T_H$ we obtain $$ m_1=\frac{m}{2}(q_1+q_6)=\frac{m}{3}(q_5+q_6) ,\quad m_2=mq_6 \quad\text{and}\quad 2q_5=3q_1+q_6. $$ If $q_1<q_6$ then $q_5=q_1\frac{3+p^{f_6-f_1}}{2}$. By Lemma~\ref{lemmap} we have $q_6=pq_1$ and hence $2p^{f_5-f_1}=3+p$. Thus $p\neq 2$ and $p^{f_5-f_1}=\frac{3+p}{2}\leq p$. Since $q_5>q_1$, i.e. $f_5-f_1\geq 1$, we must have $p=3$ and $f_5=f_1+1$. We obtain the case $q_5=q_6=3q_1$, $p=3$ and $m_1=2mq_1$, $m_2=3mq_1$. If $q_1>q_6$ it follows from Lemma~\ref{lemmap} that there is no solution. If $q_1=q_6$ then $q_5=2q_1=2q_6$. Thus $p=2$ and $m_1=m_2=mq_1$.
Now let $c_5=0$. We may assume that $c_6=1$ since we exclude root groups. In this case $m_1=m(q_1+q_6)/2$, $m_2=mq_6$. \medbreak \noindent{\textbf{Cases 10--13:}} Consider System \eqref{eq:add_sys_g2} with $c_1=0$ and $c_2,c_3\neq 0$. Then $q_2,q_3$ are $p$-powers and by Lemma~\ref{cr_is_one} we may assume that $c_2=c_3=1$. Suppose first that $c_4\neq 0$ and $c_5\neq 0$. If $p=3$, by Lemma~\ref{lemma0}.(1), $q_5=q_2$, $q_6=2q_2$ and $c_6=c_5/2$. In particular $c_6\neq 0$. From the compatibility with $T_H$ we get $m_1=mq_2$, $m_2=2mq_2$ and $q_4=q_3=q_2$. If $p\neq 3$, by Lemma~\ref{lemma0}.(6), we have three cases:
(I) $q_3=q_2$ and $3c_4=c_5$. Then $q_2=q_3$, $q_4=q_5$, $q_6$ is a power of $p$ and $c_6$ is arbitrary. If $c_6\neq 0$, from the compatibility with $T_H$, we get $m_1=mq_2$, $m_2=2mq_2$, $q_2=q_4$ and $q_6=2q_2$ and hence $p=2$ since $q_6$ is also a power of $p$.
(II) $q_3=q_2$ and $3c_4\neq c_5$. Then $p\geq 5$, $q_2=q_3=q_4=q_5$ and $q_6=2q_2$ so $c_6=(c_5-3c_4)/2\neq 0$. From the compatibility with $T_H$ we get $q_6=2q_2$, $m_1=mq_2$, $m_2=2mq_2$.
(III) $q_3\neq q_2$. With Lemma~\ref{lemma0} we have $q_5=q_3\neq q_2=q_4$ and the compatibility with $T_H$ gives $q_3=q_2$, a contradiction.
\medbreak \noindent{\textbf{Case 14:}} Continuing the discussion with $c_1=0$ and $c_2,c_3\neq 0$, let $c_4\neq 0$ and $c_5=0$. First assume that $p=3$. If $c_6\neq0$, by System \eqref{eq:add_sys_g2}, $q_6$ is a power of $p$. Then, from the compatibility with $T_H$ we get $m_1=mq_4$, $m_2=m(q_3+q_4)$ and $q_6=q_3+q_4=(q_2+3q_4)/2$. Thus $2\mid q_6=q_3+q_4$, but $p=3$, a contradiction. Therefore $c_6=0$ and from the compatibility with $T_H$ we get $m_1=mq_4$, $m_2=m(q_3+q_4)$ and $2q_3=q_2+q_4$. By Lemma~\ref{lemmap}, we cannot have $q_2$ distinct from $q_4$ since $(3^f+1)/2$ is not a power of $3$ for $f\geq1$. Hence $q_2=q_4=q_3$. One checks that $U_H$ is mapped by an exceptional isogeny to a corresponding group described in Case 8.
Now let $p\neq 3$. Applying Lemma~\ref{lemma0}.(1) we have $q_4=q_3$, $q_6=2q_3$, $p\neq 2$ and $c_6=-\frac{3}{2}c_4$. From the compatibility with $T_H$ we obtain $m_1=mq_3$, $m_2=2mq_3$, $q_2=q_3$. \medbreak \noindent{\textbf{Cases 15,16:}} Let $c_1=c_4=0$ and $c_2,c_3\neq 0$, so that as usual $q_2,q_3$ are $p$-powers. By Lemma~\ref{cr_is_one}, we may assume that $c_2=c_3=1$. If $c_5\neq 0$, by Lemma~\ref{lemma0}.(1), $p\neq 2$, $q_5=q_2$, $q_6=2q_2$ and $c_6=c_5/2$. In particular $c_6\neq 0$. The compatibility with $T_H$ shows that also $q_2=q_3$. In this case $U_H$ is conjugate by a representative in $N_G(T)$ of $s_{\alpha_1}$ to $U_H$ from Case 17.
If $c_5=c_6=0$ then the corresponding group $U_H$ is conjugate by an element of $N_G(T)$ to the group $U_H$ in Case 7.
Finally, suppose $c_5=0$ and $c_6\neq 0$, so that $q_6$ is also a power of $p$. From the compatibility with $T_H$ we get $m_1=m(q_6-q_3)$, $m_2=mq_6$, $3q_3=q_2+q_6$. Thus $p=2$, $q_2\ne q_6$, and either $q_6=2q_2=2q_3$ or $q_2=2q_6=2q_3$. To see this, let $q_i=2^{f_i}$. If $q_2>q_6$ then $3q_3=q_6(1+2^{f_2-f_6})$. By Lemma~\ref{lemmap}, the only solution is $f_2-f_6=1$ so $q_2=2q_6$, hence $q_3=q_6$. We get $m_1=0$ and $m_2=mq_3$. If $q_2<q_6$ then $3q_3=q_2(2^{f_6-f_2}+1)$ and the argument is similar: $q_6=2q_2$, hence $q_3=q_2$. We get $m_1=mq_2$ and $m_2=2mq_2$.
\medbreak \noindent{\textbf{Cases 17,18,19:}} Consider System~\eqref{eq:add_sys_g2} with $c_1,c_3=0$ and $c_2,c_4\neq 0$, so that $q_2$ and $q_4$ are $p$-powers. By Lemma~\ref{cr_is_one} we may assume that $c_2=c_4=1$. If $c_5\neq 0$, by Lemma~\ref{lemma0}.(1), $q_5=q_2$, $q_6=2q_2$ (so $p\neq 2$) and $c_6=c_5/2$. From the compatibility with $T_H$ we get $m_1=mq_2$, $m_2=2mq_2$ and $q_4=q_2$.
If $c_5, c_6=0$ then the corresponding group $U_H$ is conjugate by an element of $N_G(T)$ to the group $U_H$ in Case 9. If $c_5=0$ and $c_6\neq 0$, then $q_6$ is a $p$-power, and from the compatibility with $T_H$ we get $m_1=mq_4$, $m_2=mq_6$ and $q_2=2q_6-3q_4$. If $q_2=q_4$ then $2q_6=4q_2=4q_4$ hence $p=2$, $q_6=2q_2=2q_4$, $m_1=mq_4$, $m_2=2mq_4$. If $q_4>q_2$ then $2q_6=q_2(1+3p^{f_4-f_2})$ and by Lemma~\ref{lemmap} this equation has no solution. If $q_2>q_4$ then $2q_6=q_4(p^{f_2-f_4}+3)$. By Lemma~\ref{lemmap} the only solution is $f_2-f_4=1$. Then $p=3$ and $q_2=q_6=3q_4$. \medbreak \noindent{\textbf{Cases 20,21:}} If $c_1,c_3,c_4=0$ and $c_2,c_5\neq 0$, then $q_2$ and $q_5$ are $p$-powers, and by Lemma~\ref{cr_is_one} we may assume that $c_2,c_5=1$. Again, by Lemma~\ref{lemma0}.(1), $q_5=q_2$, $q_6=2q_2$ (so $p\neq 2$) and $c_6=1/2$. From the compatibility with $T_H$ we get $m_1=mq_2$, $m_2=2mq_2$.
If $c_1,c_3,c_4,c_5=0$ and $c_2,c_6\neq 0$, by Lemma~\ref{cr_is_one}, we may assume that $c_2,c_6=1$. From the compatibility with $T_H$ we get $m_1=m(2q_6-q_2)/3$, $m_2=mq_6$. \end{proof}
\begin{prop}
\label{prop:non-existence}
Let $G$ be a simple algebraic group of rank $2$ defined over an algebraically closed field of characteristic $p>0$. If $H$ is a closed subgroup of $G$, with $\dim H\leq 2$, then $H$ is not epimorphic in $G$.
\end{prop} \begin{proof}
By \S\ref{subsec:criteria}.(7), Corollary~\ref{cor:up_to_isogeny}.(1) and Lemma~\ref{lem:mixed_2_dim}, we may assume that $G$ is simply connected and $H=H^\circ$, with $H=U_H\rtimes T_H$, non-abelian, with $U_H$ $1$-dimensional unipotent and $T_H$ a torus. By Lemma~\ref{lem:root_group_case} and Lemma~\ref{lem:structure} we may assume that $H$ is one of the groups described in Tables~\ref{tab:A2_cases},~\ref{tab:B2_cases} and~\ref{tab:G2_cases}. The proof is by considering in turn each of the cases in these tables.
For a $G$-module $V$, an element $v\in V$ and a nonzero integer $a$, we denote by $\overline{\otimes^a v}$ the image of the $a$-fold tensor product $\otimes^a v$ in the symmetric power $S^a(V)$. Moreover, we write $f_{\alpha}$ for the Chevalley basis vector of the Lie algebra of $G$ corresponding to the negative root $\alpha\in\Phi$.
\smallbreak
\noindent{\textbf{Cases for $\operatorname{SL}_3(k)$:}}
We fix an ordered basis $(e_1,e_2,e_3)$ of the natural $G$-module $V$, so that the matrices of $u_{\alpha_1}(x)$, $u_{\alpha_2}(x)$ and $u_{\alpha_1+\alpha_2}(x)$, are $I+xE_{12}$, $I+xE_{23}$ and $I+xE_{13}$ respectively.
\smallbreak
\noindent{\bf{Case 1:}} Here it is straightforward to check that $H$ is conjugate to the image of a Borel subgroup of ${\rm SL}_2(k)$ under the two-dimensional
irreducible representation of highest weight $2q_1$. Then by Corollary~\ref{cor:reductive_proper},
$H$ is not epimorphic.
\smallbreak
\noindent{\bf{Case 2:}} Here the torus $T_H$ acts with weights $\frac{m}{3}(q_1+q_3)$, respectively $\frac{m}{3}(q_3-2q_1)$, $\frac{m}{3}(q_1-2q_3)$ on the given basis vectors. If $q_1=q_3$, then $e_1\otimes (e_2-e_3)\otimes (e_2-e_3)\in \otimes^3(V)$ is fixed by $U_H$ and of zero weight for $T_H$ but is not a weight vector for $T$.
If $q_1>q_3$, so that $q_1\geq 2q_3$, we first suppose $q_1=2q_3$, in which case, $e_1\wedge e_2\in\wedge^2(V)$ is fixed by $H$, but not by $T$. Next suppose $q_1>2q_3$. Set $a = q_1-2q_3$ and $b = 2q_1-q_3$, both strictly positive integers. Then set $Y = \wedge^2(V)$, and note that $e_1\wedge e_2\in Y$ is fixed by $U_H$ and has $T_H$-weight $-\frac{m}{3}a$, and $(e_1\wedge e_3)\in Y$ is fixed by $U_H$ and has $T_H$-weight $\frac{m}{3}b$. Thus the vector $w=\overline{\otimes^a(e_1\wedge e_3)}\otimes \overline{\otimes ^b(e_1\wedge e_2)}$ in $S^a(Y)\otimes S^b(Y)$ is fixed by $H$, but is not fixed by $T$. Indeed, the cocharacter $\alpha_1^{\vee}$ has strictly positive weight on $w$. The case $q_3>q_1$ is entirely similar.
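For the reader's convenience, the weight assertions in the previous paragraph amount to the following computation; we write $\varepsilon_i$ for the $T$-weight of $e_i$ (a notation used only in this remark):
$$
\text{$T_H$-weight of }w \;=\; a\cdot\tfrac{m}{3}b+b\cdot\bigl(-\tfrac{m}{3}a\bigr)\;=\;0,
\qquad
\bigl\langle a(\varepsilon_1+\varepsilon_3)+b(\varepsilon_1+\varepsilon_2),\,\alpha_1^{\vee}\bigr\rangle \;=\; a\cdot 1+b\cdot 0\;=\;a\;>\;0.
$$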
So in all cases, by Theorem~\ref{thm:epi_characterization}.(4), $H$ is not epimorphic in $G$.
\medbreak
\noindent{\textbf{Cases for $\operatorname{Sp}_4(k)$:}}
We fix as follows ordered bases of $V_1$ and $V_2$, the Weyl modules for $G$ of highest weight $\omega_i$, for $i = 1, 2$:
\begin{flalign*}
V_2:\, & v_{21},\, v_{22}=f_{\alpha_2}v_{21},\, v_{23}= f_{\alpha_1+\alpha_2}v_{21},\, v_{24}=f_{\alpha_1+2\alpha_2}v_{21};
&&\\
V_1:\, & v_{11},\, v_{12}=f_{\alpha_1}v_{11},\, v_{13}= f_{\alpha_1+\alpha_2}v_{11},\, v_{14}=f_{\alpha_1+2\alpha_2}v_{11},\,v_{15}=f_{\alpha_1}f_{\alpha_1+2\alpha_2}v_{11}.
\end{flalign*} (Here $v_{11}$ and $v_{21}$ are highest weight vectors in the respective modules.) We use the matrix expressions of the root groups with respect to these bases.
\FloatBarrier \renewcommand{\arraystretch}{1.5} \begin{table}[ht]
\begin{tabular}{| c | l | c |}
\hline
Cases& $w$ & $W$ \\
\hline
3 & $c_4(v_{22}\wedge v_{23}) - v_{22}\wedge v_{24}$ & $\wedge^2(V_2)$ \\
\hline
4 & $(q_1=2q_3)$ $v_{21}$& $V_2$\\
& ($q_1=q_3$) $v_{14}$& $V_1$\\
& $(q_1>2q_3)$ $\overline{\otimes^{2q_1-2q_3}v_{21}}\otimes \overline{\otimes^{q_1-2q_3}v_{14}}$& $S^{2q_1-2q_3}(V_2)\otimes S^{q_1-2q_3}(V_1)$\\
& $(q_1<q_3)$ $\overline{\otimes^{2q_3-2q_1}v_{21}}\otimes \overline{\otimes^{2q_3-q_1}v_{14}}$& $S^{2q_3-2q_1}(V_2)\otimes S^{2q_3-q_1}(V_1)$ \\
\hline
5,6 & $v_{22}-v_{23}$ & $V_2$ \\
\hline
\end{tabular}
\caption{}
\label{tab:B2_fixed_point} \end{table} \FloatBarrier
\medbreak \noindent{\bf{Case 1:}} As in Case 1 for ${\rm SL}_3(k)$, one can check that the group $H$ is conjugate to the image of a Borel subgroup of ${\rm SL}_2(k)$ acting on the irreducible module of highest weight $3q_1$, and conclude by applying Corollary~\ref{cor:reductive_proper}. \medbreak \noindent{\bf{Case 2:}} Here $H$ lies in the maximal rank reductive subgroup of type $A_1\tilde{A_1}$, and we apply again Corollary~\ref{cor:reductive_proper}. \medbreak \noindent{\bf{Cases 3,4,5,6:}} In each case we identify a module $W$, a fixed point $w\in W$ for $H$ which is not fixed by $T$ and then apply Theorem~\ref{thm:epi_characterization}.(4) to see that $H$ is not epimorphic in $G$. The information is given in Table~\ref{tab:B2_fixed_point}. For Case 4, notice that $\alpha_1^{\vee}$ or $\alpha_2^{\vee}$ has strictly positive weight on $w$. \medbreak \noindent{\textbf{Cases for $G_2(k)$:}} Here we use the explicit matrix representations of root elements acting on $V$, the $7$-dimensional Weyl module of highest weight $\omega_1$. We use the structure constants from \textsc{GAP} \cite{GAP}, the same as those used to produce the expressions in Table~\ref{tab:G2_cases}, and fix the following ordered basis $\mathcal B$ of $V$: $$ \begin{aligned}
&
v_1,\quad
v_2 = f_{\alpha_1}v_1,\quad
v_3 = f_{\alpha_1+\alpha_2}v_1,\quad
v_4=f_{2\alpha_1+\alpha_2}v_1,\quad
\\
&
v_5=f_{3\alpha_1+\alpha_2}v_1
,\quad
v_6=f_{3\alpha_1+2\alpha_2}v_1
,\quad
v_7=f_{\alpha_1}f_{3\alpha_1+2\alpha_2}v_1. \end{aligned} $$ where $v_1$ is a maximal vector for the fixed Borel subgroup of $G$. Then the root groups are as follows: $$ \begin{aligned}
&u_{\alpha_1}(x) = I+ x(E_{12}+2E_{34}+E_{45}+E_{67})+x^2E_{35},\\
&u_{\alpha_2}(x) = I+x(-E_{23}+E_{56}),\\
&u_{\alpha_1+\alpha_2}(x) = I+ x(E_{13}-2E_{24}-E_{46}+E_{57})+x^2E_{26},\\
&u_{2\alpha_1+\alpha_2}(x) = I+ x(2E_{14}-E_{25}+E_{36}-E_{47})-x^2E_{17},\\
&u_{3\alpha_1+\alpha_2}(x) = I+x(E_{15}+E_{37}),\\
&u_{3\alpha_1+2\alpha_2}(x) = I+x(E_{16}+E_{27}). \end{aligned} $$
\medbreak \noindent{\bf{Case 1:}} We argue that $T_HU_H$ lies in a principal $A_1$-subgroup of $G$, and then apply Corollary~\ref{cor:reductive_proper}. We first show that $U_H$ lies in such a subgroup. The argument is similar to the proof of \cite[Theorem 3]{Testerman1}. It is known that the principal $A_1$-subgroup in $G_2$ acts irreducibly on the module $V$, when $p\geq 7$, and $V$ is isomorphic to the irreducible ${\rm SL}_2(k)$-module of highest weight $6$. Using the construction of the irreducible representation coming from the action of ${\rm SL}_2(k)$ on the homogeneous polynomials of degree $6$ in $k[X,Y]$, with the basis $w_i = X^{6-i}Y^i$, $0\leq i\leq 6$, one can check that the action of $u(t)$ on the basis $\{\gamma_iw_i, 1\leq i\leq 7\}$, where we take $(\gamma_1,\gamma_2,\gamma_3,\gamma_4,\gamma_5,\gamma_6,\gamma_7) = (1,1,-2,-3,-12,-60,-360)$, is precisely the action of $\begin{pmatrix}1&t^{q_1}\cr 0&1\end{pmatrix}$ on the basis $\{v_i,1\leq i\leq 7\}$.
Moreover, since the action of the toral element
$t(\lambda) = \alpha_1^\vee(\lambda^{3q_1m})\alpha_2^\vee(\lambda^{5q_1m})$
on the basis $\gamma_iw_i$ is $\lambda^{q_1m(4-i)}$, for $1\leq i\leq 7$,
and setting $\mu^2 = \lambda^m$, we have that
the action of the semisimple element $t(\lambda)$
is precisely that of
${\rm diag}(\mu^{q_1},\mu^{-q_1})$
on the basis $\mathcal B$.
\medbreak
\noindent{\bf{Cases 9,20,21:}}
Here $H$ lies in a maximal rank subsystem subgroup
of type $A_1A_1$, $A_2$ and $A_2$ respectively.
By Corollary~\ref{cor:reductive_proper},
$H$ is not epimorphic.
\smallbreak
\FloatBarrier \renewcommand{\arraystretch}{1.3} \begin{table}[ht]
\begin{tabular}{| c | l |}
\hline
Cases & weights for $m=1$ (here $q=q_1=q_2=q_3=q_4$)\\
\hline
2,3,8 & $2q,q,q,0,-q,-q,-2q$ \\
\hline
4,5,6 & $q,0,q,0,-q,0,-q$ \\
\hline
7 & $q_5-q_1,q_5-2q_1,q_1,0,-q_1,2q_1-q_5,q_1-q_5$ \\
\hline
10--15,17,18 & $q,q,0,0,0,-q,-q$ \\
\hline
16 & $0,q,-q,0,q,-q,0$\\
\hline
19 & $q,2q,-q,0,q,-2q,-q$\\
\hline
\end{tabular}
\caption{}
\label{tab:G2_weights} \end{table} \FloatBarrier
\FloatBarrier \renewcommand{\arraystretch}{1.3} \begin{table}[ht]
\begin{tabular}{| c | l | c |}
\hline
Cases& $w$ & $W$ \\
\hline
2,3 & $(v_2-v_3)\wedge v_4\wedge v_6$ & $\wedge^3(V)$ \\
\hline
4,5 & $2(c_6-1)v_2+v_4-2v_6$ & $V$ \\
\hline
6 & $c_6v_2-v_6$ & $V$\\
\hline
7 & $(q_1>q_5)$ $\overline{\otimes^{2q_1-q_5}v_1}\otimes \overline{\otimes^{q_1-q_5} v_6}$ & $S^{2q_1-q_5}(V)\otimes S^{q_1-q_5}(V)$\\
& $(q_1=q_5)$ $v_1$ & $V$\\
& $(q_5=2q_1)$ $v_6$&$V$\\
& $(q_1<q_5)$ $\overline{\otimes^{q_5-2q_1}v_1}\otimes \overline{\otimes^{q_5-q_1} v_6}$ & $S^{q_5-2q_1}(V)\otimes S^{q_5-q_1}(V)$\\
\hline
8&$(c_6v_5-v_6)\wedge v_3\wedge v_4$&$\wedge^3(V)$\\
\hline
10 & $(2c_5-2c_4^2)v_3+(c_4-c_5)v_4+(2c_4-2)v_5$ & $V$\\
\hline
11 & $v_5-c_4v_3$ & $V$\\
\hline
12 & $(p=2)$ $v_5+c_4v_3$ & $V$\\
& $(p>2)$ $c_4(c_4-3)v_3+c_4v_4+(1-c_4)v_5$ & $V$\\
\hline
13 & $(2c_4^2-2c_5)v_3+(c_5-c_4)v_4+(2-2c_4)v_5$ & $V$\\
\hline
14 & $-c_4^2v_3+c_4v_4+(2c_4-2)v_5$ & $V$\\
\hline
15 & $v_5$ & $V$\\
\hline
16 & $v_1$ & $V$\\
\hline
17 & $2v_3+c_5v_4-2v_5$ & $V$\\
\hline
18 & $v_3+v_5$ & $V$\\
\hline
19 & $(c_6v_3+v_7)\wedge v_4\wedge v_1$ & $\wedge^3(V)$\\
\hline
\end{tabular} \caption{}
\label{tab:G2_fixed_point} \end{table} \FloatBarrier
For the remaining cases,
we indicate in Table~\ref{tab:G2_weights} the weights of $T_H$ on the vectors in $\mathcal B$
and in Table~\ref{tab:G2_fixed_point} we give a vector which is fixed by $H$ but not by $T$.
We give the details only for Case 2 and Case 7 to illustrate the most involved cases.
\medbreak \noindent {\bf{Case 2:}} Set $q = q_1$. Then $t(\lambda)$ acts on the basis $\mathcal B$ as $$ {\rm diag}(\lambda^{2qm},\lambda^{qm},\lambda^{qm},\lambda^0,\lambda^{-qm},\lambda^{-qm},\lambda^{-2qm}). $$ We calculate that $u(x)(v_2-v_3) = v_2-v_3$, $u(x)v_4 = v_4+2x^q(v_3-v_2)$ and $u(x)v_6 = v_6-x^qv_4+x^{2q}(v_2-v_3)$. Now we consider the $G$-module $\wedge^3(V)$, where $(v_2-v_3)\wedge v_4\wedge v_6$ is fixed by $U_H$ and has weight $0$ for $T_H$. Moreover, this vector is not a weight vector for $T$. Hence $T_HU_H$ is not epimorphic in $G$. \medbreak \noindent{\bf{Case 7:}} Here the argument is slightly more complicated as we have two different $p$-powers, $q_1$ and $q_5$ and the argument differs according to their relative sizes. Note first that $t(\lambda)$ acts on the ordered basis $\mathcal B$ as $${\rm diag}(\lambda^{(q_5-q_1)m},\lambda^{(q_5-2q_1)m},\lambda^{q_1m},\lambda^0,\lambda^{-q_1m},\lambda^{(2q_1-q_5)m}, \lambda^{(q_1-q_5)m}).$$
Consider first the case where $q_1>q_5$. Here we use the $G$-module $W = S^a(V)\otimes S^b(V)$, where $a = 2q_1-q_5$ and $b = q_1-q_5$. The vector $\overline{\otimes^{a}v_1}\in S^a(V)$ is fixed by $U_H$ and has $T_H$-weight $a(q_5-q_1)m$, while the vector $\overline{\otimes^{b}v_6}\in S^b(V)$ is fixed by $U_H$ and has $T_H$-weight $b(2q_1-q_5)m$. Thus in $W$, we have the $U_H$-fixed vector $\overline{\otimes^{a}v_1}\otimes\overline{\otimes^{b}v_6}$ of $T_H$-weight $0$, but which is not fixed by $\alpha_1^{\vee}(k^{\times})\subset T$.
For the case where $q_1=q_5$ it is easy to see that $v_1$ is of weight $0$ for $T_H$ and fixed by $U_H$. It is also easy to rule out the case where $q_5=2q_1$ since here the vector $v_6$ is of $T_H$ weight $0$ and is fixed by $U_H$.
Finally, we must consider the case where $q_5>2q_1$. Now we set $W = S^a(V)\otimes S^b(V)$, where $a = q_5-2q_1$ and $b = q_5-q_1$, and we argue as above using $\overline{\otimes^{a}v_1}\otimes\overline{\otimes^{b}v_6}$.
\end{proof}
\section{Existence of $3$-dimensional closed epimorphic subgroups} \label{sec:existence}
In this section, we exhibit a $3$-dimensional epimorphic subgroup $H$ in each of the groups $\operatorname{SL}_3(k)$, $\operatorname{Sp}_4(k)$ and $G_2(k)$, defined over the field $k$, with $\operatorname{char}(k)=p>0$. For $\operatorname{SL}_3(k)$ we may choose $H$ to be the radical of a maximal parabolic subgroup by \S\ref{subsec:criteria}.(8). For the other two groups fix $q\neq 1$ a $p$-power.
Let $G = {\rm Sp}_4(k)$. Let $A = \langle u_{+}(x), u_{-}(x)\ |\ x\in k\rangle$ be an $A_1$-type group,
diagonally embedded in the maximal rank subgroup $\langle U_{\pm\alpha_1}\rangle\times \langle U_{\pm(\alpha_1+2\alpha_2)}\rangle\subseteq {\rm Sp}_4(k)$, via the morphisms $$ u_{+}(x)=u_{\alpha_1}(x)u_{\alpha_1+2\alpha_2}(x^{q}) \quad\text{and}\quad u_{-}(x)=u_{-\alpha_1}(x)u_{-\alpha_1-2\alpha_2}(x^{q}). $$
Then $A$ has a maximal torus given by $T_A = \{h(\lambda) = \alpha_1^{\vee}(\lambda^{1+q})\alpha_2^{\vee}(\lambda^{2q})\ |\ \lambda\in k^{\times}\}$, so that $\alpha_1(h(\lambda)) = \lambda^{2}$ and $\alpha_2(h(\lambda)) = \lambda^{q-1}$.
Let $X = \{u_{+}(x)\ |\ x\in k\}$ and consider the subgroup $H = B_AU_{\alpha_1+\alpha_2}$, where $B_A$ is the Borel subgroup $T_{A}X$ of $A$. The root group $U_{\alpha_1+\alpha_2}$ is normalized by $B_A$, indeed centralised by
$X$ and $T_A$ acts with weight $q+1$. We claim that $Y=\langle A, U_{\alpha_1+\alpha_2}\rangle = G$, which, by \S\ref{subsec:criteria}.(9), shows that $H$ is epimorphic in $G$.
Notice that $Y$ acts irreducibly on the $4$-dimensional Weyl module $V$ of highest weight $\omega_2$.
Indeed, the only $A$-invariant subspaces of $V$ are the non-isomorphic irreducible $A$-modules
$V_{\omega_2}+V_{\omega_2-\alpha_1-2\alpha_2}$ and $V_{\omega_2-\alpha_2}+V_{\omega_2-\alpha_1-\alpha_2}$.
Since $U_{\alpha_1+\alpha_2}$ preserves neither of these subspaces,
$Y$ acts irreducibly on $V$.
Now, using \cite[Theorem 3]{Seitz2}, we have the following possibilities for $Y$: \begin{itemize} \item $Y=G$, or \item $Y$ preserves a nontrivial decomposition $V = V_1\otimes V_2$ and $Y$ is a subgroup of ${\rm Sp}(V_1)\cdot {\rm SO}(V_2)$, or \item $p=2$ and $Y={\rm SO}(V)$, or \item $Y$ is simple. \end{itemize}
Since $\dim Y\geq 4$ and $Y$ has an irreducible $4$-dimensional representation, if $Y$ is simple then $Y=G$. Next we argue that in fact $\dim Y\geq 7$ and thus the second and third possibilities are ruled out and we have $Y = G$ as desired. We have ${\rm Lie}(Y)\supseteq {\rm Lie}(A) + {\rm Lie}(U_{\alpha_1+\alpha_2})$. Let $A$ act on the root vector $e_{\alpha_1+\alpha_2}$, which is a maximal vector of an $A$-composition factor of ${\rm Lie}(G)$. The weight of this vector being $q+1$, we see that there is a $4$-dimensional irreducible composition factor, and we now have 7 distinct weights in the action of $A$ on ${\rm Lie}(Y)$, establishing the claim.
Now turn to the case of $G=G_2(k)$. Here again, we set $A =\langle u_{+}(x), u_{-}(x)\ |\ x\in k\rangle$ to be an $A_1$-subgroup diagonally embedded in the maximal rank subgroup $\langle U_{\pm\alpha_1}\rangle\times \langle U_{\pm(3\alpha_1+2\alpha_2)}\rangle\subseteq G_2(k)$, via the morphisms $$ u_{+}(x)=u_{\alpha_1}(x)u_{3\alpha_1+2\alpha_2}(x^{q}) \quad\text{and}\quad u_{-}(x)=u_{-\alpha_1}(x)u_{-3\alpha_1-2\alpha_2}(x^{q}). $$ Arguing as above, we see that $B_A$ normalises the root group $U_{3\alpha_1+\alpha_2}$ and we set $H = B_AU_{3\alpha_1+\alpha_2}$ and $Y = \langle A,U_{3\alpha_1+\alpha_2}\rangle$. We will show that $Y$ acts irreducibly on the Weyl module $V$ of highest weight $\omega_1$. Then noting that $\dim Y\geq 4$, the main theorem of \cite{Testerman1} shows that either $Y=G$, or $Y$ is non-simple semisimple, or $p=3$ and $Y$ lies in a short root subsystem subgroup of type $A_2$. The latter is not possible as $Y\supset U_{3\alpha_1+\alpha_2}$. The second case is also not possible: there is no non-simple semisimple algebraic group with a $7$-dimensional faithful irreducible representation and the only such group with a $6$-dimensional faithful irreducible representation when $p=2$ is of type $A_2A_1$ which cannot appear as a subgroup of $G_2(k)$. Thus we conclude that $Y=G$ and we complete the argument by applying \S\ref{subsec:criteria}.(9).
To see that $Y$ acts irreducibly, we first consider the action of $A$ on $V$. Here we have two non-isomorphic summands $V_{\omega_1}+V_{\omega_1-\alpha_1}+V_{\omega_1-3\alpha_1-2\alpha_2}+V_{\omega_1-4\alpha_1-2\alpha_2}$ and $V_{\omega_1-\alpha_1-\alpha_2}+V_{\omega_1-2\alpha_1-\alpha_2}+V_{\omega_1-3\alpha_1-\alpha_2}$. (Note that if $p=2$, we have $V_{\omega_1-2\alpha_1-\alpha_2}=0$.) Moreover, $U_{3\alpha_1+\alpha_2}$ stabilizes neither of the two subspaces.
Hence $Y$ acts irreducibly on $V$ as claimed.
\end{document} | arXiv |
Journal of Data and Information Science
Minimum Representative Size in Comparing Research Performance of Universities: the Case of Medicine Faculties in Romania
Xiaoling Liu, Mihai Păunescu, Viorel Proteasa and Jinshan Wu
Published online: 27 Sep 2018
Volume & Issue: Volume 3 (2018) - Issue 3 (August 2018)
Pages: 32 - 42
Received: 27 Jun 2018
Accepted: 27 Aug 2018
DOI: https://doi.org/10.2478/jdis-2018-0013 © 2018 Xiaoling Liu, Mihai Păunescu, Viorel Proteasa, Jinshan Wu, published by Sciendo. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.
The evaluation of departments or universities has become commonplace in today's academia. Different indicators are used for such purposes, and research is emphasized in many cases (Hazelkorn, 2011). Scientific productivity can be measured using different indicators, such as the number of published articles, the number of received citations, the h-index (Egghe, 2008; Hirsch, 2005; Molinari & Molinari, 2008), or the g-index (Egghe, 2006; Tol, 2008). Whole departments, higher education institutions or even cities and countries are assessed in these terms. In practice, data are often aggregated and compared using the mean value or the total value of the aggregated sets. For instance, the Academic Ranking of World Universities (ARWU, 2018), the QS World University Rankings (QS, 2016), and the CWTS Leiden Ranking (CWTS, 2018) all use either the mean or the total value of a specific indicator. Such approaches can be criticized for reliability, as the arithmetic mean may not be an appropriate gauge for the performance of a collective set when the sample values in the set display a highly skewed distribution. When the distribution of the data set deviates significantly from normality, the samples are no longer concentrated in a narrow region centered at the arithmetic mean. For a set of papers in a journal or a set of researchers in a university, it is often the case that the distribution is skewed (Lotka, 1926; Seglen, 1992). In such a case, the high variation within the populations to be compared obscures the inter-population variation, rendering the comparison unreliable.
In Shen et al. (2017), we asked when the arithmetic mean of a sample set, or some other kind of average score, can be used to represent the set, and under which conditions a comparison based on such measures of central tendency is reliable. When does the comparison of arithmetic means indicate that a grouping of academics, such as a department or a university, performs better than the one it is compared to? In order to answer this question, we proposed a definition of the minimum representative size κ as a parameter which characterizes a pair of data sets to be compared. The method is described in detail, including the analytical demonstration, in Shen et al. (2017). In a nutshell, κ represents the size of the smallest sample of randomly extracted points within the data set whose variance of averages is less than or equal to the variance between the data sets to be compared.
Given, for example, two sets of data, (1) and (2), whose averages are in an ordinal relation, e.g. the average of data set (1) is higher than that of data set (2), what is the probability that a random sample from the first data set has a higher average than a random sample from the second? If the two data sets are skewed, this probability is often not high. To increase this probability, where possible, we introduced in Shen et al. (2017) the definition of κ: instead of drawing one random sample, we draw $\kappa_i$ random samples from the corresponding set $i$ and compare the averages of those $\kappa_1$ and $\kappa_2$ samples. The value of κ depends on the variance of the data set: the smaller the variance, the smaller κ. If κ is comparable to the size of the data set, or even larger (which happens when the set has a very large variance compared to other sets), then the average is an unreliable indicator for comparing the two data sets. Therefore, the minimum representative size κ serves as an indicator of the consistency of the set, which can be used as a supplementary indicator to be computed before using the average as a basis for comparing two data sets with a skewed distribution which deviates significantly from normality.
An alternative approach is to truncate the distribution and to consider a particular segment when comparing a set of populations. That particular segment has the property of smaller variation and thus renders the comparison between populations more reliable. In this case, theoretical arguments for choosing a specific segment as being representative of the entire population are strongly required. We pursued this stream of research in Proteasa et al. (2017).
In this article, we apply the former analytical approach in an empirical context: we calculate the minimum representative size for six medicine departments in Romania in order to allow for reliable comparisons between them as collective units.
Let us denote a skewed data set $j$, which may be the total number of papers, received citations, h-index, or g-index of each researcher in a university, as $\{s_j\}$. Due to the skewness of the distribution of $\{s_j\}$, the mean $S_{j}=\left\langle s_{j}\right\rangle=\frac{\sum_{p}s_{j}^{p}}{\sum_{p}1}$ does not display the properties which are typical for a normal distribution. For example, an academic from a university with such a distribution has a higher probability of having an individual score $s_j$ which is at a significant distance from the average of the data set, $S_j$. Such skewed data sets also display higher frequencies of extremely large moments of the distribution function and a higher degree of overlap when compared in pairs. Thus:
$$\Pr\left(s_i>s_j\,|\,S_i>S_j\right)=\int_{-\infty}^{\infty}dx_j\left(\int_{x_j}^{\infty}dx_i\,\rho_i(x_i)\rho_j(x_j)\right)\ll 1,\qquad(1)$$
where $\rho_i(x_i)$ is the distribution with mean $S_i$ of data set $i$. This means that the probability that a random sample from the data set with the higher mean has a higher value than one from the data set with the lower mean can be very low. However, when moving from an individual sample to large enough $K_i$ and $K_j$ samples from data sets $i$ and $j$ respectively, the odds change significantly, and the probability that a random sample of $K_i$ researchers from university $i$ has a higher mean score than a random sample of $K_j$ researchers from university $j$ is high, as indicated in the equation below:
$$\Pr\left(G_i(K_i)>G_j(K_j)\,|\,S_i>S_j\right)\approx 1,\qquad(2)$$
where $G_i(K_i)$ is the mean of the $K_i$ samples from data set $i$,
$$G_i(K_i)=\frac{1}{K_i}\sum_{p=1}^{K_i}x_i^p\qquad(3)$$
We also know that the average of $G(K)$, $\mu(G(K))$, is close to $S$ (the mean of the data set), and that the variance of $G(K)$ decreases with $K$.
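Indeed, for independent draws, $\operatorname{Var}(G(K))=\sigma^{2}/K$, where $\sigma^{2}$ is the variance of the underlying data set; this standard scaling is also what motivates the ratio rule $K_i/K_j=\sigma_i^{2}/\sigma_j^{2}$ imposed in condition (4) below.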
Equation (2) can be solved by bootstrap sampling (Wasserman, 2004) (see Shen et al. (2017) for further details). For data sets $i$ and $j$ with $S_i>S_j$, starting from $K_i=1$ and $K_j=1$, bootstrap sampling unfolds as follows:
For given values of $K_i$ and $K_j$, $L=4000$ sets of random samples are generated from data sets $i$ and $j$, respectively. Following this, $G_i^l(K_i)$ and $G_j^l(K_j)$ are calculated for each $l=1,2,\dots,L$.
Pair matching is performed on $G_i^m(K_i)$ and $G_j^n(K_j)$, and the percentage of pairs with $G_i^m(K_i)>G_j^n(K_j)$ is calculated. The following conditions are imposed:
$$\left\{\begin{aligned}&\Pr\left(G_i(K_i)>G_j(K_j)\,|\,S_i>S_j\right)\ge 0.9\\&\frac{\sigma_i^2}{\kappa_i}=\frac{\sigma_j^2}{\kappa_j}\end{aligned}\right.\qquad(4)$$
If the conditions in (4) are satisfied, then the pair of values $(\kappa_i,\kappa_j)=(K_i,K_j)$ represents what Shen et al. (2017) termed the minimum representative size of the pair of data sets $i$ and $j$. If the conditions in (4) are not satisfied, then $K_i$ or $K_j$ is increased and the sequence is repeated starting with step 1.
If $\kappa_i$ is less than the size of the data set, i.e. $\kappa_i<N_i$ and $\kappa_j<N_j$, then we may say that the averages $G_i(\kappa_i)$ and $G_j(\kappa_j)$ can be reliably compared. The smaller $\kappa$, the more consistent the distribution within the data set. Values of $\kappa_i$ which are greater than $N_i$ indicate that the average is not a reliable measure to describe the central tendency of data set $i$.
We increase $K_i$ and $K_j$ according to the ratio between their variances, $\frac{K_i}{K_j}=\frac{\sigma_i^2}{\sigma_j^2}$, instead of making $K_i=K_j$, since the set with the larger variance should be responsible for reducing its variance more, by using a larger sample size.
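The search described above is straightforward to implement. The sketch below is a minimal illustration, not the authors' code: the acceptance threshold 0.9, the number of bootstrap rounds L = 4000 and the ratio rule K_i/K_j = σ_i²/σ_j² are taken from the text, while the function names, the starting point and the stopping cap are arbitrary choices made here.

```python
import numpy as np

def win_probability(data_i, data_j, K_i, K_j, L=4000, rng=None):
    """Bootstrap estimate of Pr(G_i(K_i) > G_j(K_j)) (steps 1 and 2)."""
    rng = np.random.default_rng() if rng is None else rng
    G_i = rng.choice(data_i, size=(L, K_i), replace=True).mean(axis=1)
    G_j = rng.choice(data_j, size=(L, K_j), replace=True).mean(axis=1)
    return float(np.mean(G_i > G_j))

def minimum_representative_size(data_i, data_j, threshold=0.9, L=4000,
                                max_size=20000, rng=None):
    """Smallest (kappa_i, kappa_j) satisfying condition (4); data set i must have the larger mean."""
    data_i = np.asarray(data_i, dtype=float)
    data_j = np.asarray(data_j, dtype=float)
    if data_i.mean() <= data_j.mean():
        raise ValueError("label the data set with the higher mean as i")
    ratio = data_i.var() / data_j.var()          # K_i / K_j = sigma_i^2 / sigma_j^2
    for K_j in range(1, max_size + 1):
        K_i = max(1, int(round(ratio * K_j)))
        if win_probability(data_i, data_j, K_i, K_j, L, rng) >= threshold:
            return K_i, K_j
    raise RuntimeError("no representative size found up to max_size")
```

Comparing the returned pair (κ_i, κ_j) with the sizes N_i and N_j of the two data sets then indicates whether a comparison of the means is reliable, in the sense discussed below.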
In this article, we study 3374 academics from the departments of medicine within the six health studies universities in Romania: Cluj, Bucharest, Timişoara, Iaşi, Craiova and Tg. Mureş.
The personnel lists were compiled from public sources: websites and reports in 2014. We collected publication data from the Scopus database for the population of academics we established. We collected information regarding publications, citations, and Hirsch's h-index. We included single-authored and co-authored papers, as well as their respective citations, for a five-year interval: from 2009 to 2014. We excluded authors' self-citations. The data were collected between November 2014 and May 2015. Full counting was used. We manually disambiguated authors' names and affiliations. We included all papers from authors with multiple affiliations, provided they were published in the 2009–2014 time window. We included in the population all the academics who were considered to belong to health disciplines, according to an official categorization of teaching personnel in health studies (MS, 2009). We excluded only a small minority of the population we compiled, e.g. English teachers employed in medicine departments. We identified a set of publications with an outstanding number of citations whose character is different from that of a standard research article: guidelines, medical procedure recommendations, and definitions. We excluded these articles as well, based on a systematic word search. We then recalculated the number of received citations, the h-index and the g-index for each academic in the database. The database was used previously for the analyses presented in Proteasa et al. (2017). A description of the populations we study is presented in Table 1.
Table 1. Basic statistics. For each university, its name, the number of academics (N), and the mean score (⟨•⟩) and standard variance (σ) of the corresponding index (citations c, h-index h, g-index g) are shown.
University N ⟨c⟩ σ ⟨h⟩ σ ⟨g⟩ σ
Cluj 596 24.75 124.66 1.69 2.23 2.53 4.03
Bucharest 1119 18.41 80.81 1.45 1.99 2.15 3.56
Timişoara 505 15.93 57.61 1.42 2.03 1.93 3.16
Iaşi 547 15.86 75.22 1.54 1.90 2.06 3.08
Craiova 280 14.19 40.95 1.50 1.91 1.98 2.90
Tg. Mureş 327 5.40 22.19 0.83 1.28 1.08 2.04
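The aggregation behind Table 1 is elementary; for readers who want to reproduce it, the following sketch shows one possible way, assuming a hypothetical data frame with one row per academic and columns named university, citations, h_index and g_index (these column names are ours, not part of the original data set).

```python
import pandas as pd

def basic_statistics(df: pd.DataFrame):
    """N, mean and standard deviation per university for each indicator (cf. Table 1)."""
    grouped = df.groupby("university")
    sizes = grouped.size().rename("N")                         # number of academics per faculty
    stats = grouped[["citations", "h_index", "g_index"]].agg(["mean", "std"])
    # note: pandas' std uses the sample formula (ddof=1)
    return sizes, stats
```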
We illustrate the skewness of the six distributions in Fig. 1, where we plotted the distributions of the total citations of the academics affiliated to each of the six universities. A visual inspection of the plots reveals that the first quartile and the minimum citation count of the six universities are all close to zero and thus cannot be seen in the figure, since a considerable number of academics received no citations in the time window during which citations were collected. The six distributions are skewed and present a high degree of overlapping, as indicated by their large standard variance and by the large difference between medians and means.
Figure 1. Box-plot for the citation distributions. The citation distributions of the six universities are shown as box-plots. The five horizontal lines in each box represent the maximum, third quartile, median, first quartile and minimum citation count, respectively (the last two lines cannot be seen in these figures since they are too close to zero). The star markers represent the mean of the citations. There is a huge difference between mean and median in each distribution.
In the following sections we engage with two research questions, conceptualized in the section dedicated to outlining the method: (1) Is the mean a reliable measure of central tendency for the purpose of establishing a hierarchy of the six medical schools, which can account for the quality of the academics affiliated with them? (2) How can the six medical schools be compared using the minimum representative size?
Pairwise comparison
A first statistical treatment we perform consists of pairwise comparisons of the six medicine faculties. In this respect, we use the distributions of citations (c), of the h-index (h) and of the g-index (g). Thus, we calculate the minimum representative size, i.e. the minimum size of a sample of representative academics, $\kappa$, for each pair of medicine faculties, according to Eq. (4). We use the notation $\kappa(I)_{c}^{r}$ to denote the minimum number of representative academics of faculty $r$ when compared with faculty $c$, based on the index $I$. For example, $\kappa$ for Tg. Mureş and Cluj on the g-index is $\kappa(g)_{CL}^{MU}=5$ and $\kappa(g)_{MU}^{CL}=22$, respectively. The considerable difference in mean g-index between these two universities may be the main reason for the small values of this pair of $\kappa$. $\kappa(g)_{MU}^{CL}=22$ is larger partially due to the relatively higher heterogeneity of the g-index distribution of Cluj. The value of $\kappa(I)_{c}^{r}$ is determined by the overlap between distributions $r$ and $c$ and also by the variances of $r$ and $c$.
The distributions of the Bootstrap average $G(\kappa)$ are shown in Fig. 2(b). As a comparison, the distributions of the individual g-index are shown in Fig. 2(a), where there is a larger overlap between the two universities, though we can clearly see the difference between their average scores. In Fig. 2(b) the overlap is visibly smaller: the standard deviations are much smaller for $G(\kappa)$, while the mean values did not change.
Figure 2. Box-plot for the g-index distribution of Cluj and Tg. Mureş. (a) The original distributions for Cluj and Tg. Mureş. The five horizontal lines in each box represent the maximum, third quartile, median, first quartile and minimum g-index, respectively. The star markers represent the mean g-index. (b) Distribution of the Bootstrap κ-sample average of the g-index, with the minimum representative sizes of the two sets when the two sets are pairwise compared.
The complete results of the pairwise comparison based on the three indexes, citations, h-index, and g-index, are shown in Fig. 3(a), (b), and (c), respectively. The size of each circle at position $(r,c)$ is the ratio of the mean value of the corresponding index $I$ for universities $r$ and $c$, i.e. $\frac{s_{r}^{I}}{s_{c}^{I}}$. Their magnitude can be assessed against the gray circles along the diagonal, which serve as references and amount to size 1. The color of the other circles represents the value of $\kappa(I)_{c}^{r}$. The darker the color, the greater the value of $\kappa$. For example, the size of the circle at the upper right corner of Fig. 3(c) corresponds to $\frac{s_{CL}^{g}}{s_{MU}^{g}}=2.34$ and the color of the same circle corresponds to $\kappa(g)_{MU}^{CL}=22$. The size of the circle at the lower left corner of Fig. 3(c) corresponds to $\frac{s_{MU}^{g}}{s_{CL}^{g}}=0.43$, while the color of the same circle corresponds to $\kappa(g)_{CL}^{MU}=5$.
Figure 3. Heatmap of the pairwise comparisons. (a) Faculties are sorted in descending order by average citations. The darkness of each circle represents the value of $\kappa(I)_{c}^{r}$ for the pair formed by the faculty on row $r$ and the one on column $c$. The size of the circles represents the ratio of the mean citations for the faculties corresponding to row $r$ and column $c$. The gray circles on the diagonal are reference circles with size 1. (b) Results calculated based on the h-index. (c) Results calculated based on the g-index.
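For readers who want to reproduce a figure in the style of Fig. 3, the visual encoding described in the caption (circle size for the ratio of means, circle darkness for κ) can be generated with a standard scatter plot. The sketch below is only an illustration of that encoding, not the code used for the published figure; the matrices `ratios` and `kappas` are assumed to be precomputed (for instance with the bootstrap routine sketched earlier).

```python
import numpy as np
import matplotlib.pyplot as plt

def kappa_heatmap(names, ratios, kappas):
    """Fig. 3-style chart: ratios[r, c] = s_r / s_c, kappas[r, c] = kappa(I)_c^r."""
    n = len(names)
    rows, cols = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    fig, ax = plt.subplots()
    sc = ax.scatter(cols.ravel(), rows.ravel(),
                    s=300 * np.asarray(ratios).ravel(),   # circle area encodes the ratio of means
                    c=np.asarray(kappas).ravel(),         # darkness encodes kappa
                    cmap="Greys", edgecolors="black")
    ax.set_xticks(range(n))
    ax.set_xticklabels(names, rotation=45)
    ax.set_yticks(range(n))
    ax.set_yticklabels(names)
    ax.invert_yaxis()                                     # row 0 at the top, as in a matrix
    fig.colorbar(sc, ax=ax).set_label("kappa")
    return fig
```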
A visual inspection of Fig. 3 reveals that most of the circles corresponding to pairwise comparisons between faculties lean towards the dark ends of the spectra. We interpret this observation as proof of the incapacity of the mean scores to account for the differences between the six faculties when compared in pairs. More than this, some of the values $\kappa$ takes are greater than the size of the faculty, e.g. $\kappa(c)_{IA}^{BU}=3188$ and $\kappa(c)_{BU}^{IA}=2762$, but $N_{BU}=1119$ and $N_{IA}=547$. In this particular case, the comparison of the two faculties based on their averages is unreliable.
Group comparison
In a second statistical treatment, we compare one faculty $u$ against the rest of the faculties taken as a whole, i.e. the union of their populations of academics. In other words, we compare the academics within one faculty with the rest of the academics as if the latter were from a single virtual faculty, rest. This approach allows us to perform pairwise comparisons, as we did in the previous sub-section, between faculty $u$ and rest, in order to compute the values of $\kappa(I)_{rest}^{u}$ for each index $I$. The results we obtained are presented in Table 2 below. The values of $\kappa(I)_{u}^{rest}$ are not included in the table, as they do not serve the description of the social phenomenon we study.
Table 2. Values of κ_rest. For each university, its name, the number of academics (N), the mean score of the rest faculty (⟨c⟩_rest, ⟨h⟩_rest, ⟨g⟩_rest), and the minimum number of representative academics (κ_rest), calculated for the corresponding index, are shown.
University N ⟨c⟩_rest κ_rest ⟨h⟩_rest κ_rest ⟨g⟩_rest κ_rest
Cluj 596 15.50 480 1.40 183 1.95 145
Bucharest 1119 16.50 5283 1.44 > 10^4 2.00 1664
Timişoara 505 17.35 4900 1.45 9066 2.08 1378
Iaşi 547 17.38 7187 1.43 893 2.06 > 10^4
Craiova 280 17.40 477 1.42 3375 2.06 3634
Tg. Mureş 327 18.39 2 1.51 12 2.16 13
The faculty from Cluj has the highest mean score in all the comparisons we performed, regardless of the index. The corresponding values of the minimum representative size, $\kappa_{rest}^{CL}$, are not small compared to the size of the population, but remain lower than it. At the other extreme, the faculty based in Tg. Mureş has the lowest mean score and the smallest values of $\kappa_{rest}^{MU}$ in all the comparisons. For the other four universities, the values of $\kappa_{rest}$ all exceed their number of academics. We interpret these results as proof of the fact that it is reliable to compare the faculty based in Cluj against the rest of the faculties, and, in fact, it exhibits superior values for all three indexes we used. At the same time, it is also reliable to argue that the faculty from Tg. Mureş exhibits inferior performance compared to the rest of the faculties, under all three indexes we used as a basis of comparison. Last but not least, our results from both the pairwise comparison and the one-to-the-rest comparison indicate that it is not reliable to distinguish differences in the performance of the other four faculties (Bucharest, Timişoara, Iaşi, and Craiova), irrespective of the index which is used as a basis for calculation.
In a nutshell, this contribution consists of applying the minimum representative size, a methodology developed in Shen et al. (2017), to a new empirical context: that of the faculties of medicine in the health studies universities in Romania, previously studied by Proteasa et al. (2017). The "quality" of the academics affiliated to the six faculties located in Cluj, Bucharest, Timişoara, Iaşi, Craiova, and Tg. Mureş is measured by the total citations received by each academic, and the respective values of the h-index and g-index. We performed pairwise comparisons and one-to-the-rest comparisons. We found that, when the population of academics from Cluj is compared to the others, in either of the two methods of comparison, the minimum representative size is reasonably small. We interpret this finding as a reliable indication of superior performance, in relation to all three indexes used in this article. We also find that when the population of academics affiliated to Tg. Mureş is compared to the others, in either of the two methods of comparison, the minimum representative size is quite small, thus it is reliable to say that its performance is inferior to that of the other faculties, on the three measures used in this work. For the rest of the faculties we investigated in this work, we cannot reliably distinguish differences among them, since their minimum representative sizes are all quite large, sometimes even bigger than the sizes of the faculties themselves.
One might think that these results, which substantiate that the faculties located in Cluj and Tg. Mureş are quite different from the rest while the others are rather similar, are trivial. It can be argued that a similar conclusion can be reached by a simple comparison of the mean scores in Table 1. We emphasize that the method we unfolded in this article, which builds on the concept of the minimum representative size (κ), and especially on the relation between κ and the size of the whole population N, represents a validation, in this case, of the falsifiable hypothesis which can be derived from a simple comparison of the averages. When κ << N the hypothesis is validated and such a comparison is reasonable, while when κ ~ N, or even κ > N, the hypothesis is invalidated and the hierarchy of the averages represents more a numerical artifact than a substantial property of the distributions; thus such a comparison is unreliable. That case is exemplified by the four faculties whose corresponding minimum representative sizes exceeded the sizes of their distributions. To conclude, the minimum representative size relative to the size of the population proved to be a useful and reliable ancillary indicator to the mean scores.
We consider our findings to be particularly relevant in situations when aggregate scores are computed for the purpose of ranking data sets associated with different collective units, such as faculties, universities, journals etc. Whenever one wants to distinguish the performance of two collective units, the minimum representative size of the pairwise comparison should be calculated first, as an indication of the reliability of the comparison of the means. When κ is small compared to the size of the set, then the mean can be seen as a good representative value of a Bootstrapped κ number of samples from the set. Whenever one is interested in comparing one particular collective unit against a group of similar units, such as the medicine faculties in health studies universities in Romania, then κ calculated by the one-to-the-rest comparison would be appropriate. In both cases, those groups whose κ are comparable to or even bigger than their group sizes should be discarded from such comparisons. A small κ is an indication of the consistency of a data set, which is an attribute that, we consider, should be assessed before ranking collective units consisting of individuals with different performance levels.
ARWU (2018). Ranking Methodology of Academic Ranking of World Universities—2017. Retrieved from: http://www.shanghairanking.com/ARWU-Methodology-2017.html
CWTS (2018). CWTS Leiden Ranking. Retrieved from: http://www.leidenranking.com/information/indicators
Egghe, L. (2006). Theory and practise of the g-index. Scientometrics, 69, 131–152.
Egghe, L. (2008). Modelling successive h-indices. Scientometrics, 77, 377–387.
Hazelkorn, E. (2011). Rankings and the reshaping of higher education: The battle for world-class excellence. Houndmills, Basingstoke, Hampshire; New York: Palgrave Macmillan.
Hirsch, J. E. (2005). An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences of the United States of America, 102, 16569–16572.
Lotka, A. J. (1926). The frequency distribution of scientific productivity. Journal of the Washington Academy of Sciences, 16, 317–323.
Molinari, J. F., & Molinari, A. (2008). A new methodology for ranking scientific institutions. Scientometrics. Retrieved from: https://akademiai.com/doi/abs/10.1007/s11192-007-1853-2
MS (2009). Ordin nr. 1509/2008 privind aprobarea Nomenclatorului de specialităţi medicale, medico-dentare şi farmaceutice pentru reţeaua de asistenţă medicală [Order no. 1509/2008 approving the Nomenclature of medical, dental and pharmaceutical specialties for the healthcare network]. Retrieved from: https://www.eshg.org/fileadmin/www.eshg.org/documents/countries/Roman
Proteasa, V., Păunescu, M., & Miroiu, A. (2017). An extension of the characteristics scales and scores: Isolating interinstitutional from intra-institutional variance in highly skewed real population distributions. In 16th International Conference on Scientometrics & Informetrics, ISSI 2017 (pp. 1192–1203).
QS (2016). QS World University Rankings. Retrieved from: https://www.topuniversities.com/qs-world-university-rankings/methodol.
Seglen, P. O. (1992). The skewness of science. Journal of the American Society for Information Science, 43.
Shen, Z., Yang, L., Di, Z., & Wu, J. (2017). How large is large enough? In 16th International Conference on Scientometrics & Informetrics, ISSI 2017 (pp. 302–313).
Tol, R. S. J. (2008). A rational, successive g-index applied to economics departments in Ireland. Journal of Informetrics, 2, 149–155.
Wasserman, L. (2004). All of Statistics. New York: Springer.
Identifying grey-rhino in eminent technologies via patent analysis
Peculiarities of gender disambiguation and ordering of non-English authors' names for Economic papers beyond core databases① | CommonCrawl |
Quasimorphism
In group theory, given a group $G$, a quasimorphism (or quasi-morphism) is a function $f:G\to \mathbb {R} $ which is additive up to bounded error, i.e. there exists a constant $D\geq 0$ such that $|f(gh)-f(g)-f(h)|\leq D$ for all $g,h\in G$. The least positive value of $D$ for which this inequality is satisfied is called the defect of $f$, written as $D(f)$. For a group $G$, quasimorphisms form a subspace of the function space $\mathbb {R} ^{G}$.
Examples
• Group homomorphisms and bounded functions from $G$ to $\mathbb {R} $ are quasimorphisms. The sum of a group homomorphism and a bounded function is also a quasimorphism, and functions of this form are sometimes referred to as "trivial" quasimorphisms.[1]
• Let $G=F_{S}$ be a free group over a set $S$. For a reduced word $w$ in $S$, we first define the big counting function $C_{w}:F_{S}\to \mathbb {Z} _{\geq 0}$, which returns for $g\in G$ the number of copies of $w$ in the reduced representative of $g$. Similarly, we define the little counting function $c_{w}:F_{S}\to \mathbb {Z} _{\geq 0}$, returning the maximum number of non-overlapping copies in the reduced representative of $g$. For example, $C_{aa}(aaaa)=3$ and $c_{aa}(aaaa)=2$. Then, a big counting quasimorphism (resp. little counting quasimorphism) is a function of the form $H_{w}(g)=C_{w}(g)-C_{w^{-1}}(g)$ (resp. $h_{w}(g)=c_{w}(g)-c_{w^{-1}}(g))$. A short computational sketch of these counting functions is given after this list.
• The rotation number ${\text{rot}}:{\text{Homeo}}^{+}(S^{1})\to \mathbb {R} $ is a quasimorphism, where ${\text{Homeo}}^{+}(S^{1})$ denotes the orientation-preserving homeomorphisms of the circle.
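As a concrete illustration of the big and little counting functions above, here is a minimal Python sketch (not part of the original article). It assumes free-group elements are given as already-reduced words over the generators, with uppercase letters standing for inverse generators; the helper names are ad hoc choices for this example.

```python
# Illustrative sketch: big and little counting functions on a free group,
# with elements given as already-reduced words over {a, b, A, B}, where
# uppercase letters denote inverse generators.

def count_big(word: str, g: str) -> int:
    """C_w(g): number of (possibly overlapping) copies of `word` in `g`."""
    return sum(1 for i in range(len(g) - len(word) + 1) if g[i:i + len(word)] == word)

def count_little(word: str, g: str) -> int:
    """c_w(g): maximum number of non-overlapping copies of `word` in `g`."""
    count, i = 0, 0
    while i <= len(g) - len(word):
        if g[i:i + len(word)] == word:
            count += 1
            i += len(word)      # skip past this copy so copies cannot overlap
        else:
            i += 1
    return count

def invert(word: str) -> str:
    """Inverse of a reduced word: reverse it and swap the case of each letter."""
    return word[::-1].swapcase()

def big_counting_quasimorphism(word: str, g: str) -> int:
    """H_w(g) = C_w(g) - C_{w^{-1}}(g)."""
    return count_big(word, g) - count_big(invert(word), g)

# The examples from the text: C_aa(aaaa) = 3 and c_aa(aaaa) = 2.
assert count_big("aa", "aaaa") == 3
assert count_little("aa", "aaaa") == 2
```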
Homogeneous
A quasimorphism is homogeneous if $f(g^{n})=nf(g)$ for all $g\in G,n\in \mathbb {Z} $. It turns out that the study of quasimorphisms can be reduced to the study of homogeneous quasimorphisms, as every quasimorphism $f:G\to \mathbb {R} $ is a bounded distance away from a unique homogeneous quasimorphism ${\overline {f}}:G\to \mathbb {R} $, given by:
${\overline {f}}(g)=\lim _{n\to \infty }{\frac {f(g^{n})}{n}}$.
A homogeneous quasimorphism $f:G\to \mathbb {R} $ has the following properties:
• It is constant on conjugacy classes, i.e. $f(g^{-1}hg)=f(h)$ for all $g,h\in G$,
• If $G$ is abelian, then $f$ is a group homomorphism. The above remark implies that in this case all quasimorphisms are "trivial".
Integer-valued
One can also define quasimorphisms similarly in the case of a function $f:G\to \mathbb {Z} $. In this case, the above discussion about homogeneous quasimorphisms does not hold anymore, as the limit $\lim _{n\to \infty }f(g^{n})/n$ does not exist in $\mathbb {Z} $ in general.
For example, for $\alpha \in \mathbb {R} $, the map $\mathbb {Z} \to \mathbb {Z} :n\mapsto \lfloor \alpha n\rfloor $ is a quasimorphism. There is a construction of the real numbers as a quotient of quasimorphisms $\mathbb {Z} \to \mathbb {Z} $ by an appropriate equivalence relation, see Construction of the real numbers from integers (Eudoxus reals).
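As an illustration (not part of the original article), the following short Python check, with an arbitrarily chosen value of alpha, verifies numerically that the defect of the map n -> floor(alpha * n) never exceeds 1 on a random sample of pairs.

```python
# Quick numerical check (illustrative only) that n -> floor(alpha * n) is a
# quasimorphism on Z: its defect |f(m+n) - f(m) - f(n)| never exceeds 1.
import math
import random

alpha = math.sqrt(2)          # any real alpha works; sqrt(2) is an arbitrary choice
f = lambda n: math.floor(alpha * n)

max_defect = 0
for _ in range(10_000):
    m, n = random.randint(-1000, 1000), random.randint(-1000, 1000)
    max_defect = max(max_defect, abs(f(m + n) - f(m) - f(n)))

print(max_defect)             # reports 0 or 1, never more
```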
Notes
1. Frigerio (2017), p. 12.
References
• Calegari, Danny (2009), scl, MSJ Memoirs, vol. 20, Mathematical Society of Japan, Tokyo, pp. 17–25, doi:10.1142/e018, ISBN 978-4-931469-53-2
• Frigerio, Roberto (2017), Bounded cohomology of discrete groups, Mathematical Surveys and Monographs, vol. 227, American Mathematical Society, Providence, RI, pp. 12–15, arXiv:1610.08339, doi:10.1090/surv/227, ISBN 978-1-4704-4146-3, S2CID 53640921
Further reading
• What is a Quasi-morphism? by D. Kotschick
| Wikipedia |
\begin{definition}[Definition:Max Operation/General Definition]
Let $\struct {S, \preceq}$ be a totally ordered set.
Let $S^n$ be the cartesian $n$th power of $S$.
The '''max operation''' is the $n$-ary operation on $\struct {S, \preceq}$ defined recursively as:
:$\forall x := \family {x_i}_{1 \mathop \le i \mathop \le n} \in S^n: \map \max x = \begin{cases}
x_1 & : n = 1 \\
\map \max {x_1, x_2} & : n = 2 \\
\map \max {\map \max {x_1, \ldots, x_{n - 1} }, x_n} & : n > 2 \\
\end{cases}$
where $\map \max {x, y}$ is the binary max operation on $S^2$.
\end{definition} | ProofWiki |
Masters and Ph.D.
Author: resuly
VR Driving Simulator Platform
This research integrates communication, traffic and driver behaviours, vehicle dynamics, and environmental conditions in a unified framework to increase the safety, reliability, and performance of connected and automated vehicles in mixed traffic conditions.
To achieve this goal, TUPA has been developing an architecture that leverages existing tools such as VISSIM, a driving simulator, virtual reality, and Python to analyze and simulate multiple aspects of traffic, including vehicles and pedestrians. This tool will enable the development and validation of techniques to reduce traffic congestion and improve safety and network performance.
In designing scenarios for the integration of Vissim and Driving simulator, a robust algorithm has been successfully implemented in the platform. This algorithm adapts its behaviour (traffic flow in transport) autonomously in response to the variation of network conditions.
In this human-in-the-loop setting, alternative route guidance information is displayed in the driving simulator, and the driver's decision on whether to change route based on that information is recorded. In this process, the subject vehicle in the simulator and the surrounding traffic in VISSIM interact with each other. In addition, all recorded outcomes from the driving simulator are fed back into the traffic simulation for large-network evaluation. The developed traffic simulation identifies typical traffic indicators such as delay and extreme delay, queue length, and the number of stops based on the scenarios developed in the driving simulator.
This research has been under development with the KAIST team since mid-2019. Transportation experts agree that measures are urgently needed to cope with mixed traffic of autonomous and conventional vehicles, which will be unavoidable in the future. This new traffic mix is expected not only to contribute directly and indirectly to traffic accidents but also to have serious side effects on traffic efficiency. Until fully autonomous vehicles dominate the road, new traffic control techniques are needed. In this study, we propose a new traffic signal controller that supports autonomous driving by guiding the driver safely through the virtual environment. This study is expected to create synergy with performance evaluation for future traffic in conjunction with the Connected ITS project and the K-City autonomous vehicle test bed currently underway in Korea. Many universities and research institutes are expected to benefit from this research, since a human-participatory machine learning platform using virtual reality has not yet been developed in Korea or is at a very early stage.
Big data Analytic and Visualization
The dashboard below was developed with the Elastic open-source software stack using Seoul metro passenger flow data from 2014.
Data visualization has been important in democratizing data and analytics and making data-driven insights available to workers throughout an organization. Data visualization also plays an important role in big data and advanced analytics projects. As the transportation field accumulated massive troves of data during the early years of the big data trend, practitioners needed a way to quickly and easily get an overview of their data. Visualization tools were a natural fit.
Visualization is central to advanced analytics for similar reasons. When advanced predictive analytics or machine learning algorithms are available, it becomes important to visualize the outputs to monitor results and ensure that models are performing as intended. This is because visualizations of complex algorithms are generally easier to interpret than numerical outputs.
In TUPA, we utilized Elasticsearch, one of the most powerful search engines, to make this visualization work. The flowchart below shows the process of handling big data and visualizing it.
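As a minimal illustration of the ingestion step of such a pipeline, the sketch below bulk-loads passenger-flow records into Elasticsearch so they can be explored in dashboards (e.g., in Kibana). The index name, field names, and CSV layout are invented for this example and are not taken from the original project.

```python
# Minimal, illustrative ingestion step for the visualization pipeline described
# above: bulk-load passenger-flow records into Elasticsearch. The index name
# and field names are hypothetical placeholders.
import csv
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")   # assumes a local Elasticsearch node

def rows(path):
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            yield {
                "_index": "seoul-metro-2014",          # hypothetical index name
                "_source": {
                    "station": row["station"],
                    "timestamp": row["timestamp"],      # e.g. "2014-03-01T08:00:00"
                    "passengers_in": int(row["in"]),
                    "passengers_out": int(row["out"]),
                },
            }

# Bulk indexing is far faster than indexing documents one by one.
helpers.bulk(es, rows("seoul_metro_2014.csv"))
```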
For more detailed information please contact our TUPA members below;
Xu Yanping, [email protected]
Cheng Lyu, [email protected]
SLAM with Autonomous vehicle using AI
We utilized TurtleBot3 which adopts ROBOTIS smart actuator Dynamixel for driving.
TurtleBot3 is a ROS-based mobile robot. We customized it by reconstructing the mechanical parts and adding optional components such as the on-board computer and sensors. With SLAM, navigation, and manipulation packages, it can build a map and drive around the room. It can also be controlled remotely from a laptop, joypad, or Android-based smartphone.
The project allows the robot to detect lanes and obstacles to avoid. Various algorithms and tools such as SLAM, CNN, LSTM, and OpenManipulator are embedded in the robot to improve its behaviour.
The following video demonstrates the navigation function.
Taeho Oh, [email protected]
Xu Bicheng, [email protected]
GWR in Sharing bikes
Research Aim
To provide empirical evidence on the relationship between built environment and public sharing bike flow in Suzhou, China.
To examine the global impacts of built environment on public sharing bike flow.
To understand how the effects of the built environment on public sharing bike flow vary over space
The study area of this research is Suzhou, located in southeastern Jiangsu Province in East China, about 100 km west of Shanghai (Figure 1(A)).
There are around 1,750 bike stations and 40,000 public sharing bikes put into use in Suzhou (Figure 1(B)).
Fig. 1: Study area. (A) Location of Suzhou in China; and (B) the spatial distribution of bike stations, metro stations and population density in urban area of Suzhou.
Operationalization of Variables
Global Regression
Geographically Weighted Regression (GWR)
GWR is a local regression model. Coefficients are allowed to vary.
Bi-squared Weighting Function
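Since the weighting formula itself is not reproduced above, here is a generic, illustrative sketch of the two GWR building blocks mentioned: a bi-square kernel, commonly written as w = (1 - (d/b)^2)^2 for distances d below the bandwidth b and 0 otherwise, and a locally weighted least-squares fit at a single regression point. This is a textbook-style example with synthetic data, not the code or bandwidth used in the study.

```python
# Illustrative sketch of the GWR building blocks mentioned above: a bi-square
# spatial kernel and a locally weighted least-squares fit at one bike station.
import numpy as np

def bisquare_weights(d, bandwidth):
    """Bi-square kernel: w = (1 - (d/b)^2)^2 for d < b, and 0 otherwise."""
    return np.where(d < bandwidth, (1.0 - (d / bandwidth) ** 2) ** 2, 0.0)

def gwr_local_fit(coords, X, y, i, bandwidth):
    """Weighted least-squares coefficients for regression point i."""
    d = np.linalg.norm(coords - coords[i], axis=1)     # distances to point i
    W = np.diag(bisquare_weights(d, bandwidth))
    Xd = np.column_stack([np.ones(len(X)), X])         # add intercept column
    beta = np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)
    return beta                                        # local intercept + slopes

# Tiny synthetic example: 50 stations, 2 built-environment covariates.
rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(50, 2))
X = rng.normal(size=(50, 2))
y = 3 + X @ np.array([1.5, -0.8]) + rng.normal(scale=0.5, size=50)
print(gwr_local_fit(coords, X, y, i=0, bandwidth=5.0))
```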
Table 1: The results of Global Regression.
Dependent variable | Trips on workdays: Coeff. (t-value) | Trips on nonwork days: Coeff. (t-value)
Intercept | -75.388 (-8.27) | -67.948 (-7.25)
Attributes of public bike systems | |
Capacity of bike stations | 3.508 (11.88) | 3.372 (11.11)
Accessibility to bike stations | 33.441 (14.66) | 27.991 (11.93)
Population density | -2.2E-04 (-2.13) | -2.2E-04 (-2.00)
Accessibility to metro station | 4.852 (5.83) | 3.803 (4.44)
Accessibility to shopping mall | 8.254 (4.23) | 10.003 (4.99)
Accessibility to bus station | 6.719 (3.23) | 6.054 (2.83)
Accessibility to restaurant | 1.075 (5.82) | 1.491 (7.84)
Accessibility to dwelling | 0.544 (0.49) | 0.411 (0.36)
Accessibility to local financial services | 11.608 (6.84) | 11.494 (6.59)
Accessibility to public leisure and religion place | -3.258 (-2.22) | -0.906 (-0.60)
Accessibility to public park | -7.613 (-1.22) | -1.131 (-0.18)
Accessibility to educational place | 7.276 (3.83) | 9.130 (4.68)
Accessibility to workplace | 5.190 (3.78) | -1.610 (-1.14)
R-square | 0.392 | 0.360
Adjusted R-square | 0.387 | 0.355
Note: Values in bold are significant at the 0.1 level.
Fig. 3: Comparisons of explanatory power of Global regression and GWR.
Fig. 4: Spatial distributions of local coefficients on working day and t-value with significance less than 90%.
The capacity and proximity of bike stations are positively correlated with bike usage.
Gravity-based accessibility to metro stations of bike stations may increase bike flow.
The bike stations nearby shopping malls, bus stations, restaurants, financial and educational places are also positively correlated with bike usage.
Population density has a statistically negative impact on bike usage.
The effects of built environment are divergent across the Suzhou region.
Most of the coefficients appear to have zero or negative values in the central areas of Suzhou (Old Town), while in the surrounding areas the built environment has a modest effect on bike flows.
The goodness of fit of the GWR is better than that of the global regression model.
This work was supported by Jiangsu Industrial Technology Research Institute and Research Institute of Future Cities at Xi'an Jiaotong-Liverpool University.
Chunliang Wu, [email protected]
Imputation of Missing data
Transportation data is of great importance for intelligent transportation systems. Missing data problems are inevitable during data collection.
Challenges in existing imputation methods: potentially useful information is not efficiently used in the modeling process; methods considering temporal correlation usually assume that linear relationships exist between observed variables and latent variables; and most techniques fail to measure the uncertainty.
This study introduces the use of a self-measuring multi-task Gaussian process (SM-MTGP) method for imputing missing data.
A SM-MTGP method is proposed to combine features from tasks and inputs to measure similarities jointly.
Dependencies of tasks and inputs are explored via covariance functions under SM-MTGP framework.
Correlations between responses are captured to provide additional information for enhancing imputation accuracy.
Brief review of MTGP
Assuming we have \(Q\) tasks and a set of observations \(Y = \left\{ {{y_{i1}},{y_{i2}}, \ldots {y_{iD}}} \right\}, i = 1, \ldots ,Q\), for each corresponding task at \(D\) distinct inputs, where \(y_{ij}\) is the response for the \(i^{th}\) task given the input \(x_j\).
FIGURE 1 Vectorization of matrix Y
When the SM-MTGP model is introduced to the imputation of missing values of transfer passenger flow, the shared information of tasks is considered in terms of the temporal relatedness of various days. Transfer passenger flow over \(Q\) days can be treated as \(Q\) tasks, and the number of sampling time intervals \(D\) per day represents \(D\) distinct inputs. We define a matrix \(Y = \left\{ {{y_{ij}}} \right\}(i = 1,2,\ldots,Q;\ j = 1,2,\ldots,D) \in \mathbb{R}^{Q \times D}\), where \({{y_{ij}}}\) is the number of transfer passengers for the \({i^{th}}\) day (task) on the \({j^{th}}\) time interval (input). By stacking the column vectors of \(Y \in \mathbb{R}^{Q \times D}\), a \(QD\)-dimensional vector \({\bf{y}} = vec(Y)\) is obtained (Figure 1).
The MTGP model of \({{\bf{\tilde y}}}\)can be described as Equation (1):
$${{\tilde y}_{ij}} = {m_{ij}} + \varepsilon ,\quad \varepsilon \sim N\left( {0,{\sigma ^2}} \right) \tag{1}$$ where \({m_{ij}}\) is the expected value of the element \({{\tilde y}_{ij}}\), and \(\varepsilon\) is an additive Gaussian noise with variance \({{\sigma ^2}}\).
$$m \sim N\left( {0,{\Sigma _Q} \otimes {\Sigma _D}} \right) \label{TGP} \tag{2}$$ $${\Sigma _Q} = K_Q^fG_Q^m, \quad{\Sigma _D} = K_D^fG_D^m \label{covariance matrix} \tag{3}$$ The covariance matrices \({\Sigma _Q}\) are defined as a product of kernel of days features (tasks) \(K_Q^f\) and the self-measuring kernel \(G_Q^m\), and \({\Sigma _D}\) are defined as a product of the kernel of time intervals features (inputs) \(K_D^f\) and self-measuring kernel \(G_D^m\).
$${K_Q^f} = k\left( {{y_i},{y_j}} \right) \in \mathbb{R}^{Q \times Q}, \quad {G_Q^m} = g\left( {{y_{i:}},{y_{j:}}} \right) \in \mathbb{R}^{Q \times Q} \tag{4}$$ $${K_D^f} = k\left( {{y_h},{y_l}} \right) \in \mathbb{R}^{D \times D}, \quad {G_D^m} = g\left( {{y_{:h}},{y_{:l}}} \right) \in \mathbb{R}^{D \times D} \tag{5}$$ where \(k\left( {{y_i},{y_j}} \right)\) and \(k\left( {{y_h},{y_l}} \right)\) indicate covariances of features of \({i^{th}}\) day and \({j^{th}}\) day, and covariances of features of \({h^{th}}\) time interval and \({l^{th}}\) time interval, respectively. Similarly, \(g\left( {{y_{i:}},{y_{j:}}} \right)\) and \(g\left( {{y_{:h}},{y_{:l}}} \right)\) measure covariances of self-measuring observations of \({i^{th}}\) day and \({j^{th}}\) day, and covariances of self-measuring observations of \({h^{th}}\) time interval and \({l^{th}}\) time interval.
By following the principle of MTGP, the joint distribution of \({\tilde Y}\) can be described as Equation (6), where \(\Phi = {\Sigma _Q} \otimes {\Sigma _D} + {\sigma ^2}{\bf{I}}\).
$$\int {p\left( {\tilde Y|M,0,{\sigma ^2}} \right)} p\left( {M|{\Sigma _Q},{\Sigma _D}} \right)dM = N\left( {{\bf{\tilde y}}|{\bf{0}},\Phi } \right) \tag{6}$$ Using a Gaussian process framework given the observed number of transfer passengers, the unobserved passenger flows in \(Y\) can be derived by the predictive equation (7). $$E[{{\tilde y}_{ab}}|{{{\bf{\tilde y}}}_{obs}},{\Sigma _Q},{\Sigma _D}] = \left( {{\Sigma _{{Q_a}}} \otimes {K_{{D_b}}}} \right)_{obs}^T\Phi _{obs}^{-1}{{{\bf{\tilde y}}}_{obs}} \tag{7}$$ where \({\Phi _{obs}} = {\bf{P}}\Phi {{\bf{P}}^T} \in \mathbb{R}^{M \times M} \) is a covariance matrix over the observed transfer passenger flows in \(Y\), \({{\Sigma _{{Q_a}}}}\) denotes the \({a^{th}}\) column vector in \({\Sigma _Q}\), which measures the similarities between the \({a^{th}}\) day and all the other days among \(Q\) days, and \({{K_{{D_b}}}}\) indicates the \({b^{th}}\) column vector of \({K_D}\), which represents the covariance between the \({b^{th}}\) time interval and all the remaining time intervals of \(D\) samples.
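To make the Kronecker-structured prediction in Eq. (7) concrete, here is a small, self-contained Python sketch that imputes missing entries of a toy Q x D passenger-flow matrix. The RBF kernels, their hyperparameters, and the simple day and time-interval features are assumptions made only for this illustration and are not the kernels used in the paper.

```python
# Illustrative MTGP-style imputation: build day and time-interval kernels,
# form the Kronecker-structured covariance, and predict missing entries from
# the observed ones (cf. Eq. (7)). Kernel choices are placeholders.
import numpy as np

def rbf(A, B, lengthscale=1.0):
    """Squared-exponential kernel between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def mtgp_impute(Y, mask, sigma2=0.1):
    """Impute missing entries of Y (Q days x D intervals); mask is True where observed."""
    Q, D = Y.shape
    day_feat = np.arange(Q, dtype=float)[:, None]    # stand-in day features
    time_feat = np.arange(D, dtype=float)[:, None]   # stand-in time-interval features
    Sigma_Q = rbf(day_feat, day_feat, lengthscale=2.0)
    K_D = rbf(time_feat, time_feat, lengthscale=3.0)
    Phi = np.kron(Sigma_Q, K_D) + sigma2 * np.eye(Q * D)

    y = Y.flatten()              # row-major flattening; covariance is Sigma_Q (x) K_D
    obs = mask.flatten()
    mu = y[obs].mean()           # center the data so a zero-mean GP is reasonable
    alpha = np.linalg.solve(Phi[np.ix_(obs, obs)], y[obs] - mu)
    y_hat = mu + Phi[:, obs] @ alpha                 # predictive mean as in Eq. (7)
    return y_hat.reshape(Q, D)

# Toy data: 5 days x 24 intervals of a smooth daily profile, ~20% entries missing.
rng = np.random.default_rng(1)
Q, D = 5, 24
truth = 100 + 30 * np.sin(np.linspace(0, 2 * np.pi, D))[None, :] + rng.normal(0, 2, (Q, D))
mask = rng.random((Q, D)) > 0.2
imputed = mtgp_impute(np.where(mask, truth, 0.0), mask)
print(np.abs(imputed - truth)[~mask].mean())         # mean absolute imputation error
```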
Data analysed includes 6-months of passenger flow data collected by WiFi sensors at Richmond railway station (Figure 2), Melbourne, Australia.
FIGURE 2 Map of location and train lines of Richmond station.
FIGURE 3 Map of 12 WiFi sensors distribution.
The deployed 12 sensors are distributed at platforms 7-10 and two sided underpasses (Figure 3).
IMPUTATION PERFORMANCE
Figure 4 shows the RMSE results for the discrete missing pattern with various algorithms. The improvement in RMSE by SM-MTGP is around 60% for the three different missing rates.
FIGURE 4 RMSE Results for Different Missing Ratios of Discrete missing data.
Three mixed missing patterns under different missing ratios are reported (Figure 5-7). The SM-MTGP method is still able to obtain better performance compared with all the other methods, leading to improvements in RMSE up to 60%.
FIGURE 5 RMSE Results for Different Missing Ratios of Mixed Missing Data
with One Random Day Missing.
with Two Random Day Missing.
with Four Random Day Missing.
Imputation accuracy can achieve around 60% improvement in RMSE in all the tested missing scenarios compared with the base model.
SM-MTGP significantly outperforms other methods under the large missing ratio.
Ongoing research focuses on incorporating other features into this algorithm, applying it to large-scale transit networks, and simplifying the model's computational complexity.
Wenhua Jiang, [email protected]
3D Convolutional Networks for Traffic Forecasting
It was not easy to pick the background image for this post. It shows the Manhattan Peninsula in New York, with the sky reflecting the buildings on the ground. This fantastical scene reminds me of the complex relationship between space and time, which is closely related to the topic of this article: spatiotemporal forecasting of traffic using 3D convolutional neural networks.
On December 31, 2014, a deadly stampede occurred in Shanghai, near Chen Yi Square on the Bund, where around 300,000 people had gathered for the new year celebration. 36 people were killed and 49 were injured, 13 of them seriously (Wikipedia). Follow-up reports revealed that Tencent's online user data had roughly detected that the crowd in the area was too dense. Data from social media and cell phone signals can therefore be used to infer regional crowd density. If the corresponding predictions and analysis had been available, such tragedies could have been prevented.
Traffic forecasting has been studied for decades, and many models and theories have been developed. For regional prediction, a prevalent approach is to split the research area into grids and analyse them with computer vision models. Each square in the grid acts like a pixel in an image.
Zhang (2016) presented the classic Deep-ST model. It treats the research area as an image, and the prediction is obtained by combining convolutional models applied to different time periods. The conversion is not limited to traffic volume: vehicle speed can also be transformed into images. The images below show the traffic speed representation in a small-scale transportation network (Yu, et al 2017).
With the development of computer vision, deeper neural networks like ResNet have been presented recently. Zhang (2017) also upgraded DeepST to ST-ResNet using ResNet blocks. However, the convolutional kernels in these models only focus on spatial relations, not on the joint spatiotemporal space. It has been shown that 3D convolutional networks can learn spatiotemporal features (Tran 2015), but so far these improvements have appeared mainly in video-related studies such as behavior detection and human action detection. So in this post, we will look at the application of 3D ConvNets to traffic problems.
Convolutional operations
There are some different operations with the convolutional kernel in hidden layers. The following diagrams show the 2D convolutional operation with padding and dilation (Dumoulin and Visin, 2016 ). It's different in 3D, but the ideas are same.
With the 3D convolutional operation, the kernel shape is 3 dimensional and it moves in 3 directions. Just like the animation below. For the transportation problems, the directions are latitude, longitude and time. In the model part, we also used padding and dilation with 3D kernels.
Data and Model
We take the bike sharing data in New York (BikeNYC) from the DeepST paper as an example here. Every circle stands for a station, and the color indicates the number of bikes in the dock. The research area is split into a 16*8 grid; each square has an in-flow and an out-flow at a particular moment, namely the numbers of returned and borrowed bikes in the corresponding region.
The input is a time sequence drawn from three temporal levels: closeness, period, and trend, which correspond to different sampling frequencies of the raw data. If we stack them together, the input shape is X*16*8, where X is the number of timesteps.
The model has three 3D convolutional layers, followed by a flatten layer that combines the convolutional features with external data such as weather and holidays. A minimal sketch of this kind of architecture is given below.
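Here is a minimal PyTorch sketch of such a model: three 3D convolutional layers over (time, height, width), a flatten layer, and concatenation with an external-feature vector. The channel counts, the 2-channel in/out flow input, and the external-feature size are assumptions for illustration, not the exact architecture used in the post.

```python
# Minimal PyTorch sketch of a 3D-convolutional spatiotemporal model with an
# external-feature branch. Layer sizes are illustrative guesses.
import torch
import torch.nn as nn

class Conv3DNet(nn.Module):
    def __init__(self, timesteps=8, channels=2, h=16, w=8, n_external=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 16, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(16 * timesteps * h * w + n_external, channels * h * w)
        self.out_shape = (channels, h, w)

    def forward(self, x, external):
        # x: (batch, channels, timesteps, 16, 8); external: (batch, n_external)
        z = self.features(x).flatten(start_dim=1)
        z = torch.cat([z, external], dim=1)
        return torch.tanh(self.head(z)).view(-1, *self.out_shape)

model = Conv3DNet()
x = torch.randn(4, 2, 8, 16, 8)          # batch of 4 sequences of flow "images"
ext = torch.randn(4, 8)                  # external features (weather, holidays)
print(model(x, ext).shape)               # torch.Size([4, 2, 16, 8])
```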
I used PyTorch this time; the Windows version just came out last month. It provides a rich API and makes it easy to build custom model structures. The author of ST-ResNet has open-sourced his code, so we can reuse the dataset from GitHub.
To find the best kernel size, we trained with different combinations. It turns out that 3*3*3 is also the optimal option for transportation data, just as other papers have reported for behavior detection tasks.
Although the model is much simpler than Deep-ST or ST-ResNet, it still achieved the best performance on BikeNYC and TaxiBJ datasets.
At first I tried Matplotlib in Python. The results were good, but they are static and not well suited to a webpage, so I chose d3.js to draw the diagram from scratch. (The CSS and SVG styling is incompatible with this blog's responsive stylesheet, so I gave up on displaying it on this page. Here is the gif version, simple and straightforward.)
Play around with this diagram: http://resuly.me/projects/3dconvs/
Zhang, J., Zheng, Y., Qi, D., Li, R., & Yi, X. (2016, October). DNN-based prediction model for spatio-temporal data. In Proceedings of the 24th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (p. 92). ACM.
Zhang, J., Zheng, Y., & Qi, D. (2017, February). Deep Spatio-Temporal Residual Networks for Citywide Crowd Flows Prediction. In AAAI (pp. 1655-1661).
Yu, H., Wu, Z., Wang, S., Wang, Y., & Ma, X. (2017). Spatiotemporal recurrent convolutional networks for traffic prediction in transportation networks. Sensors, 17(7), 1501.
Tran, D., Bourdev, L., Fergus, R., Torresani, L., & Paluri, M. (2015, December). Learning spatiotemporal features with 3d convolutional networks. In Computer Vision (ICCV), 2015 IEEE International Conference on (pp. 4489-4497). IEEE.
Vincent Dumoulin, Francesco Visin – A guide to convolution arithmetic for deep learning
Public Bikes in Action: Public Bike Use in Suzhou
Part One: Public Bikes in China and Suzhou
Public bike-sharing systems (BSs) are widely recognized as a green, healthy, and sustainable transport mode that can also provide last-mile connections to public transit. In recent years, public bike systems have developed rapidly around the world: by 2016, more than 1,200 systems had been deployed globally, and the largest and fastest-growing market is China. By the end of 2016, China had built public bike systems in 430 cities and regions, with Italy and the United States following (dockless shared bikes not included). Read more
A gated community in Shanghai
Opening Gated Community
The Chinese leadership under Xi Jinping is keen to ease traffic congestion in a variety of ways. Among pledges to create greener cities with more public transport, the ruling contained the apparently throwaway line that, in principle, no more enclosed residential compounds will be built, and that existing residential and corporate compounds will gradually open up so their interior roads can be put into public use. This would save land and help reallocate transport networks.
Our research team is now working closely with a local Suzhou company, CCDI, to develop an index for evaluating the impact of implementing this program in China.
Proposal Granted By Wuxi Institute of Safety and Security
Letter of Appreciation (01-2-Inhi Kim)
Characteristics of Driving Rage and Intervention Method in China
Photo from Nash&Franciskato
Road rage is aggressive or angry behavior by a driver of an automobile or other road vehicle. Such behavior might include rude gestures, verbal insults, deliberately driving in an unsafe or threatening manner, or making threats. Road rage can lead to altercations, assaults, and collisions that result in injuries and even deaths. It can be thought of as an extreme case of aggressive driving.
Based on these characteristics, we analyse road rage behaviours and suggest intervention methods.
© 2021 TUPA. All rights reserved. | CommonCrawl |
Qualitative analysis of integro-differential equations with variable retardation
Kuo-Shou Chiu
Departamento de Matemática, Facultad de Ciencias Básicas, Universidad Metropolitana de Ciencias de la Educación, José Pedro Alessandri 774, Santiago, Chile
Received August 2020 Revised December 2020 Published February 2022 Early access February 2021
Fund Project: This research was in part supported by PGI 03-2020 DIUMCE
In this paper, the global exponential stability and periodicity are investigated for impulsive neural network models with Lipschitz continuous activation functions and generalized piecewise constant delay. Sufficient conditions for the existence and uniqueness of periodic solutions of the model are established by applying a fixed point theorem and the method of successive approximations. By constructing suitable differential inequalities with generalized piecewise constant delay, some sufficient conditions for the global exponential stability of the model are obtained. The method, which does not make use of Lyapunov functionals, is simple and valid for the periodicity and stability analysis of impulsive neural network models with variable and/or deviating arguments. The results extend some previous results. Typical numerical examples with simulations are utilized to illustrate the validity and reduced conservatism of the theoretical results. This paper ends with a brief conclusion.
Keywords: Impulsive neural networks, Piecewise constant delay of generalized type, Periodic solutions, Asymptotic stability, Global exponential stability, Gronwall integral inequality.
Mathematics Subject Classification: Primary: 92B20, 34A37; Secondary: 34A36, 34K13, 34D23, 34K20.
Citation: Kuo-Shou Chiu. Periodicity and stability analysis of impulsive neural network models with generalized piecewise constant delays. Discrete & Continuous Dynamical Systems - B, 2022, 27 (2) : 659-689. doi: 10.3934/dcdsb.2021060
Figure 1a. Some trajectories uniformly convergent to the unique exponentially stable $\pi$/2-periodic solution of the ICNN models with IDEGPCD system (33)
Figure 1b. Phase plots of state variable ($x_1$, $x_2$, $x_3$) in the ICNN models with IDEGPCD system (33) with the initial condition (7, 6, 3)
Figure 1c. Phase plots of state variable ($x_1$, $x_2$, $x_3$) in the ICNN models with IDEGPCD system (33) with the initial condition (6.7897, 6.0565, 4.6992)
Figure 1d. Phase plots of state variable ($t$, $x_1$, $x_2$) in the ICNN models with IDEGPCD system (33)
Figure 1e. Phase plots of state variable ($t$, $x_1$, $x_3$) in the ICNN models with IDEGPCD system (33)
Figure 1f. Phase plots of state variable ($t$, $x_2$, $x_3$) in the ICNN models with IDEGPCD system (33)
Figure 2a. $\pi/2$-periodic solution of the CNN models with DEGPCD system (33a) for $t\in [0, 6\pi] $ with the initial value (4.9228, 4.5238, 3.6121)
Figure 2b. Trajectories uniformly convergent to the unique exponentially stable $\pi$/2-periodic solution of the CNN models with DEGPCD system (33a) with the initial value (5.0, 4.3, 3.65)
Figure 2c. Phase plots of state variable ($x_1$, $x_2$, $x_3$) in the CNN models with DEGPCD system (33a) with the initial condition (4.9228, 4.5238, 3.6121)
Figure 3a. Some trajectories uniformly convergent to the unique $1$-periodic solution of the ICNN models with IDEGPCD system (37)
Figure 3b. Exponential convergence of two trajectories towards a $1$-periodic solution of the ICNN models with IDEGPCD system (37). Initial conditions: ($i$) (3, 6) in red and ($ii$) (4, 6) in blue
Figure 3c. Phase plots of state variable ($t$, $x_1$, $x_2$) in the ICNN models with IDEGPCD system (37)
Figure 4a. Unique asymptotically stable solution of the CNN models with DEGPCD system (37a)
Figure 4b. Unique asymptotically stable solution of the CNN models with DEGPCD system (37a)
Figure 4c. Some trajectories uniformly convergent to the unique asymptotically stable solution of the CNN models with DEGPCD system (37a)
| CommonCrawl
Non-Gaussian noise spectroscopy with a superconducting qubit sensor
Youngkyu Sung ORCID: orcid.org/0000-0002-1342-93541,2,
Félix Beaudoin ORCID: orcid.org/0000-0002-2453-052X3 nAff6,
Leigh M. Norris3,
Fei Yan1,
David K. Kim4,
Jack Y. Qiu1,2,
Uwe von Lüpke1,
Jonilyn L. Yoder4,
Terry P. Orlando1,2,
Simon Gustavsson1,
Lorenza Viola ORCID: orcid.org/0000-0002-8728-92353 &
William D. Oliver1,2,4,5
Quantum metrology
Accurate characterization of the noise influencing a quantum system of interest has far-reaching implications across quantum science, ranging from microscopic modeling of decoherence dynamics to noise-optimized quantum control. While the assumption that noise obeys Gaussian statistics is commonly employed, noise is generically non-Gaussian in nature. In particular, the Gaussian approximation breaks down whenever a qubit is strongly coupled to discrete noise sources or has a non-linear response to the environmental degrees of freedom. Thus, in order to both scrutinize the applicability of the Gaussian assumption and capture distinctive non-Gaussian signatures, a tool for characterizing non-Gaussian noise is essential. Here, we experimentally validate a quantum control protocol which, in addition to the spectrum, reconstructs the leading higher-order spectrum of engineered non-Gaussian dephasing noise using a superconducting qubit as a sensor. This first experimental demonstration of non-Gaussian noise spectroscopy represents a major step toward demonstrating a complete spectral estimation toolbox for quantum devices.
For any dynamical system that evolves in the presence of unwanted disturbances, precise knowledge of the noise spectral features is fundamental for quantitative understanding and prediction of the dynamics under realistic conditions. As a result, spectral estimation techniques have a long tradition and play a central role in classical statistical signal processing1. For quantum systems, the importance of precisely characterizing noise effects is further heightened by the challenge of harnessing the practical potential that quantum science and technology applications promise. Such detailed knowledge is key to develop noise-optimized strategies for enhancing quantum coherence and boosting control fidelity in near-term intermediate-scale quantum information processors2, as well as for overcoming noise effects in quantum metrology3,4. Ultimately, probing the extent and decay of noise correlations will prove crucial in determining the viability of large-scale fault-tolerant quantum computation5.
Thanks to their exquisite sensitivity to the surrounding environment, qubits driven by external control fields are naturally suited as "spectrometers", or sensors, of their own noise6,7. Quantum noise spectroscopy (QNS) leverages the fact that open-loop control modulation is akin to shaping the filter function (FF) that determines the sensor's response in frequency space8,9,10,11,12 and, in its simplest form, aims to characterize the spectral properties of environmental noise as sensed by a single qubit sensor. By now, QNS protocols employing both pulsed and continuous control modalities have been explored, and experimental implementations have been reported across a wide variety of qubit platforms—including nuclear spins13, superconducting quantum circuits14,15,16,17, semiconductor quantum dots18,19,20,21, diamond nitrogen vacancy centers22,23, and trapped ions24. Notably, knowledge of the underlying noise spectrum has already enabled unprecedented coherence times to be achieved via tailored error suppression25.
While the above advances clearly point to the growing significance of spectral estimation in the quantum setting, they all rely on the assumption that the target noise process is Gaussian—that is, one- and two-point correlation functions suffice to fully specify the noise statistical properties. However, the Gaussian assumption needs not be justified a priori and it should rather be validated (or falsified) by the QNS protocol itself. A number of realistic scenarios motivate the consideration of non-Gaussian noise regimes. Statistical processes that are responsible for electronic current fluctuations in mesoscopic devices or the 1/f noise ubiquitously encountered in solid-state quantum devices are not Gaussian in general26. In superconducting circuits, previous studies have shown that a few two-level defects within Josephson tunnel junctions can interact strongly with the qubit27,28,29,30,31, the resulting decoherence dynamics showing marked deviations from Gaussian behavior under both free evolution and dynamical decoupling protocols7,32,33. More generally, non-Gaussian noise statistics may be expected to arise whenever a qubit is operated outside a linear-response regime, either due to strong coupling to a discrete environment34 or to a non-linear energy dispersion relationship. The latter feature, which has long been appreciated to influence dephasing behavior at optimal points35, is common to all state-of-the-art superconducting qubit archetypes36,37,38,39. Thus, statistical correlations higher than second order and their corresponding multi-dimensional Fourier transforms must be taken into account for complete characterization. From a signal-processing standpoint, this translates into the task of higher-order spectral estimation40.
In this work, we experimentally demonstrate non-Gaussian QNS by building on the estimation procedure proposed by Norris et al.41. While we employ a flux-tunable superconducting qubit as a sensor, our methodology is portable to other physical testbeds in which classical dephasing noise is the dominant decoherence mechanism. We show how non-Gaussianity distinctively modifies the phase evolution of the sensor's coherence, resulting in an observable signature to which the spectrum (or power spectral density, PSD) is completely insensitive and which is instead encoded in the leading higher-order spectrum, the bispectrum. Unlike the original proposal41, the QNS protocol we introduce here makes use of a statistically motivated maximum likelihood approach. This renders the estimation less susceptible to numerical instability, while allowing measurement errors to be incorporated and both the PSD and the bispectrum to be inferred using a single measurement setup. In order to obtain a clean benchmark for our spectral estimation procedure, we engineer a non-Gaussian noise model by injecting Gaussian flux at the sensor's degeneracy point, resulting in non-Gaussian frequency noise. The noise implementation is validated by verifying the observed power dependence of the leading cumulants against the expected one. Both the reconstructed PSD and the bispectrum are found to be in quantitative agreement with theoretical predictions within error bars.
Non-Gaussian dephasing noise
Before introducing our experimental test bed, we present the general setting to which our analysis is relevant: a qubit sensor evolving under the combined action of non-Gaussian classical dephasing noise and suitably designed sequences of control pulses. By working in an interaction frame with respect to the internal qubit Hamiltonian and the applied control, and letting ħ = 1, the controlled open-system Hamiltonian may be written as H(t) = yp(t)B(t)σz/2, where B(t) is a stochastic process describing dephasing noise relative to the qubit's eigenbasis defined by the Pauli operator σz. The control switching function yp(t) accounts for a sequence p of instantaneous π rotations about the x or y axis, starting from initial value yp(0) = +1 and toggling between ±1 with every application of a pulse. Under such a pure-dephasing Hamiltonian, the qubit coherence is quantified by the time-dependent expectation value 〈σ+(t)〉 ≡ e−χ(t)+iϕ(t)〈σ+(0)〉, where the influence of the noise is captured by the decay and phase parameters χ(t) and ϕ(t). These parameters may be formally expanded in terms of noise cumulants, C(k)(t1, …, tk), k ∈ {1, 2, …, ∞}, with χ(t) taking contribution only from even cumulants and ϕ(t) only from odd cumulants41. Physically, the kth-order cumulant is determined by the multi-time correlation functions \({\mathbb{E}}\)[B(t1), …, B(tj)], with j ≤ k, where \({\mathbb{E}}\)[⋅] denotes the ensemble average over noise realizations.
Since the statistical properties of Gaussian noise are entirely determined by one- and two-point correlation functions, cumulants of order k ≥ 3 vanish identically. By contrast, for non-Gaussian noise, all cumulants can be non-zero in principle. Assuming that noise is stationary, so that the mean of the process \({\mathbb{E}}\)[B(t)] = C(1)(0) ≡ μB is constant, the phase parameter may be written as ϕ(t) = μBFp(0, t) + φ(t), with the Fourier transform \(F_p(\omega ,t) \equiv {\int}_0^t {\mathrm{d}}s{\mkern 1mu} {\mathrm{e}}^{ - i\omega s}y_p(s)\) being the fundamental FF associated to the control12. This expression separates the phase due to the noise mean, which arises for both Gaussian and non-Gaussian noise, from a genuinely non-Gaussian phase φ(t), which captures the contribution of all odd noise cumulants with k ≥ 3. For sufficiently small time or noise strength, we can neglect terms of order k > 3 in the cumulant expansion, leading to
$$\chi (t) \approx \frac{1}{{2\pi }}\int_{\Bbb R} {\mathrm{d}} \omega |F_p(\omega ,t)|^2S(\omega ),$$
$$\varphi (t) \approx - \frac{1}{{3!(2\pi )^2}}\int_{{\Bbb R}^2} {\mathrm{d}} \vec \omega {\mkern 1mu} G_p(\vec \omega ,t)S_2(\vec \omega ),$$
where \(\vec \omega \equiv (\omega _1,\omega _2)\) and the second and third noise cumulants enter the qubit dynamics through their Fourier transforms: the PSD or spectrum, \(S(\omega ) \equiv {\int}_{\Bbb R} {\mathrm{d}} \tau {\mkern 1mu} {\mathrm{e}}^{ - i\omega \tau }C^{(2)}(0,\tau )\), and the second-order polyspectrum or bispectrum, \(S_2(\vec \omega ) \equiv {\int}_{{\Bbb R}^2} {\mathrm{d}} \vec \tau {\mkern 1mu} {\mathrm{e}}^{ - i\vec \omega \cdot \vec \tau }C^{(3)}(0,\tau _1,\tau _2)\), with \(\vec \tau \equiv (\tau _1,\tau _2)\). In the frequency domain, the influence of such spectra is "filtered" by a corresponding generalized FF—in particular, \(G_p(\vec \omega ,t) \equiv F_p( - \omega _1,t)F_p( - \omega _2,t)F_p(\omega _1 + \omega _2,t)\)12. Since, to leading order, non-Gaussian features arise in our setting from \(S_2(\vec \omega )\), non-Gaussianity of a noise process will be detected and characterized through measurements of φ(t).
Experimental setup and noise validation
Our circuit quantum electrodynamics (QED) system42,43 contains an engineered flux qubit44, which is designed to enable fast single-qubit gates with high fidelity at its flux degeneracy point (Fg > 99.9%; see Supplementary Notes 1 and 2). Single-qubit operations are performed using cosine-shaped microwave pulses, applying an optimal-control technique to suppress leakage to higher levels45. Inductive coupling to a local antenna is used to modulate the external flux Φ threading the qubit loop interrupted by Josephson junctions (Fig. 1a, b). Near the degeneracy (or optimal35) point Φ = Φ0/2, with Φ0 the flux quantum, the |0〉 → |1〉 transition frequency ωq has an approximately quadratic dependence on the external flux Φ (Fig. 1c). Hence, a sufficiently slow time-dependent external flux Φ(t) enables adiabatic modulation of the qubit frequency, leading to
$$B(t) = \beta _\Phi {\mkern 1mu} [\Delta \Phi (t)]^2, \qquad \Delta \Phi (t) \equiv \Phi (t) - \Phi _0/2,$$
where βΦ is the quadratic coefficient in the dispersion relation between qubit frequency and flux. Crucially, any non-linear function of a Gaussian process leads to non-Gaussian noise. In particular, the quadratic function implemented in Eq. (3) transduces zero-mean Gaussian flux noise into non-Gaussian qubit-frequency noise (Fig. 1c, d). Assuming that the noise is entirely contributed by the applied ΔΦ(t), and that SΦ(ω) denotes the corresponding PSD, the mean μB, PSD S(ω), and bispectrum S2(ω1, ω2) of B(t) are, respectively, given by
$$\mu _B = \frac{{\beta _\Phi }}{{2\pi }}\int_{\Bbb R} {\mathrm{d}}\omega {\mkern 1mu} S_\Phi (\omega ),$$
$$S(\omega ) = \frac{{\beta _\Phi ^2}}{\pi }\int_{\Bbb R} {\mathrm{d}}{u}{\mkern 1mu} S_\Phi (u)S_\Phi (\omega - u),$$
$$S_2(\omega _1,\omega _2) = \frac{{4\beta _\Phi ^3}}{\pi }\int_{\Bbb R} {\mathrm{d}}u{\mkern 1mu} S_\Phi (u)S_\Phi (\omega _1 + u)S_\Phi (\omega _2 - u).$$
In the experiment, we choose SΦ(ω) to be a Lorentzian function centered at zero frequency, SΦ(ω) = (P0/πωc)/[1 + (ω/ωc)2], where ωc/2π (=0.5 MHz) and P0 denote the cutoff frequency and the power of the applied flux noise, respectively. As is apparent from Eqs. (4)–(6), cumulants of order k = 1, 2, and 3 are distinguished by their linear, quadratic, and cubic dependence on power, respectively.
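The convolution structure of Eqs. (4)–(6) can be checked numerically. In the sketch below (illustrative values only; it assumes scipy is available), the Lorentzian SΦ(ω) defined above is convolved with itself, reproducing the fact that Eq. (5) is again a Lorentzian with twice the cutoff frequency, while the linear, quadratic, and cubic power dependences of the three cumulants appear as the powers of P0 in the prefactors.

import numpy as np
from scipy.integrate import quad

omega_c = 2 * np.pi * 0.5e6       # cutoff frequency used in the experiment (rad/s)
P0 = 1.0                          # illustrative flux-noise power (arbitrary units)
beta = 1.0                        # illustrative quadratic coefficient beta_Phi

S_phi = lambda w: (P0 / (np.pi * omega_c)) / (1 + (w / omega_c) ** 2)

# Eq. (4): the noise mean is linear in P0 (here it equals beta*P0/(2*pi)).
mu_B = (beta / (2 * np.pi)) * quad(S_phi, -np.inf, np.inf)[0]

# Eq. (5): the PSD of B(t) is quadratic in P0 (convolution of two Lorentzians).
def S_B(w):
    return (beta ** 2 / np.pi) * quad(lambda u: S_phi(u) * S_phi(w - u), -np.inf, np.inf)[0]

# Convolving two identical Lorentzians doubles the width:
S_B_closed = lambda w: (beta ** 2 * P0 ** 2 / np.pi ** 2) * 2 * omega_c / (w ** 2 + (2 * omega_c) ** 2)

print(S_B(0.0), S_B_closed(0.0))   # the two expressions agree
# Eq. (6) involves a triple product of S_phi and therefore scales as P0**3.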
Experimental setup and non-Gaussian dephasing noise in a superconducting qubit. a Schematic of the circuit QED system. An engineered flux qubit comprises a superconducting loop (blue) interrupted by one small-area and eight large-area Josephson junctions (crosses) and is inductively coupled to a local antenna (red). The qubit junctions have internal capacitance, C and αC, and are externally shunted by capacitance Csh. See Supplementary Note 1. b SEM image of the device. The flux threading the qubit loop Φ is modulated by applying a current through the local antenna. c Frequency spectroscopy of the qubit's |0〉 → |1〉 transition. At (away from) the degeneracy point Φ = Φ0/2, the qubit frequency ωq has a quadratic (linear) dependence on the external flux, as indicated by the indigo (yellow) arrow. d Probability distribution of the qubit frequency under Gaussian flux noise in the linear regime (yellow) vs. the quadratic regime (indigo). In the quadratic regime, the right-skewness of the distribution illustrates the non-Gaussianity of the resulting noise process
We first validate the intended engineered non-Gaussian noise by demonstrating consistency of the measured power dependence of χ and ϕ with the above prediction. The qubit is initialized to the +y axis by applying a π/2 pulse about x (rotation Rx(π/2)), and Gaussian flux noise is injected while it evolves in the xy plane of the Bloch sphere for time T. During this evolution, we apply a Carr–Purcell–Meiboom–Gill (CPMG) sequence consisting of two refocusing π pulses about y (Fig. 2a). At the end of this sequence (t = T), the effect of the first cumulant of the noise cancels out (Fp(0, T) = 0) and, as a result, the measured phase becomes solely determined by odd cumulants of order k ≥ 3: ϕ(T) = φ(T). To estimate both ϕ and χ, we measure 〈σx〉 and 〈σy〉 by applying appropriate tomography pulses at time t = T, before readout in the σz-basis.
Power dependence of decay constant (χ) and phase angle (ϕ). a Pulse scheme for measuring the power dependence of χ and ϕ, consisting of a CPMG sequence of length T = 1 μs with two π pulses. Flux noise waveforms are temporally tailored to affect the qubit only while it evolves on the transverse plane. b Decay constant \(\chi = - \log \left( {\sqrt {\left\langle {\sigma _x} \right\rangle ^2 + \left\langle {\sigma _y} \right\rangle ^2} } \right)\) and c phase angle ϕ = tan−1(−〈σx〉/〈σy〉) at time t = T, after application of a CPMG sequence as a function of the applied noise power P0. A cubic power dependence of ϕ, for sufficiently weak noise, corroborates non-Gaussianity of the engineered noise. Error bars represent 95% confidence intervals
Figure 2b, c shows χ and ϕ as a function of injected flux noise power P0 for both the experiment (blue triangles) and Monte Carlo simulations accounting for all cumulants of the applied noise (orange squares, see Supplementary Note 5). Substituting Eqs. (5) and (6) into Eqs. (1) and (2), we also plot the resulting ideal weak-power behavior (gray solid) considering only the leading-order cumulants of order two and three for χ and ϕ, respectively. For sufficiently small P0, these ideal values are in good agreement with data from both experiment and simulation, showing that χ and ϕ obey the quadratic and cubic power dependences that are expected for the square of a Gaussian flux-noise process under the CPMG sequence. In particular, the cubic dependence of ϕ at small P0 corroborates the presence of a non-zero third-order cumulant, which would not exist for Gaussian noise. Deviations of the simulations and experimental data from the ideal behavior at large P0 are attributable to the contribution of cumulants of order k > 3. The quantitative agreement between theory, experiment, and simulation observed at low power demonstrates our capability to produce and sense engineered noise that dominates over the native noise in the relevant parameter regime and exhibits well-controlled cumulants, a necessary first step in the experimental validation of non-Gaussian QNS.
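A stripped-down version of such a Monte Carlo check is sketched below (the full simulations, described in Supplementary Note 5, use the actual control waveforms; parameter values here are only illustrative). Zero-mean Gaussian flux noise with a Lorentzian PSD is generated as an Ornstein–Uhlenbeck process, squared as in Eq. (3), and the phase accumulated under the two-pulse CPMG switching function is averaged over realizations; sweeping the noise power and repeating the average reproduces the quadratic scaling of χ and the cubic scaling of ϕ at weak power.

import numpy as np

rng = np.random.default_rng(1)

def ou_noise(n_steps, dt, omega_c, sigma, n_traj):
    # Stationary Ornstein-Uhlenbeck process: zero-mean Gaussian noise with Lorentzian PSD.
    x = np.empty((n_traj, n_steps))
    x[:, 0] = rng.normal(0.0, sigma, n_traj)
    a = np.exp(-omega_c * dt)
    for i in range(1, n_steps):
        x[:, i] = a * x[:, i - 1] + sigma * np.sqrt(1 - a ** 2) * rng.normal(size=n_traj)
    return x

T, n = 1e-6, 2000
dt = T / n
t = (np.arange(n) + 0.5) * dt
y = np.where((t > T / 4) & (t < 3 * T / 4), -1.0, 1.0)      # two-pulse CPMG switching function

flux = ou_noise(n, dt, omega_c=2 * np.pi * 0.5e6, sigma=1.0, n_traj=20000)
B = 1.0 * flux ** 2                                          # Eq. (3) with beta_Phi = 1 (arb. units)
theta = (B * y).sum(axis=1) * dt                             # accumulated phase per realization
signal = np.mean(np.exp(1j * theta))
chi_mc, phi_mc = -np.log(np.abs(signal)), np.angle(signal)   # sign of phi depends on axis conventions
print(chi_mc, phi_mc)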
Non-Gaussian noise spectroscopy
Having established that χ and ϕ follow their expected behavior, we move on to fully characterizing the first three cumulants of our engineered noise source by measuring its mean, PSD, and bispectrum. Since the noise mean, μB, manifests itself through a qubit-frequency shift, it can be measured from a simple parameter estimation scheme based on Ramsey interferometry. By contrast, we aim to perform a non-parametric estimation of both the PSD and bispectrum, that is, to reconstruct them at a set of discrete points in frequency space without assuming a prior functional form. Figure 3 illustrates our protocol for simultaneous estimation of the PSD and bispectrum, in which filter design—the selection of pulse times in a control sequence so that the corresponding FF has a particular shape—is instrumental. Building on ref. 13, applying M ≫ 1 repetitions of a "base" pulse sequence p ∈ {1, 2, ⋯, P}, with duration T, shapes the FF |Fp(ω, MT)|2 into a frequency comb with narrow teeth probing S(ω) at harmonics kωh, with k an integer and ωh ≡ 2π/T (Fig. 3a, b). This result generalizes to filters relevant to higher-order spectra12: under sequence repetition, \(G_p(\vec \omega ,MT)\) becomes a two-dimensional (2D) "hyper-comb" with teeth probing \(S_2(\vec \omega )\) at \(\vec \omega \in \{ \vec k\omega _h\}\), where \(\vec k \equiv (k_1,k_2)\) with k1 and k2 integers (Fig. 3d).
A protocol for non-Gaussian noise spectroscopy. a Timing diagrams of control pulse sequences. The length of the base sequence is T = 960 ns, p = 1 corresponds to a single free-evolution period, whereas sequences p = 2, …, 11 are repeated M = 10 times. Only π-pulses are shown and all π-pulses are around the y axis (see Supplementary Note 4 for details). b |Fp(ω, MT)|2 for p = 3, 4, 5 as a function of angular frequency ω. c Symmetries of the bispectrum of a classical stationary noise process. d 2D grid representing the harmonic frequencies (black circles) in the principal domain \({\cal{D}}_{\mathrm{2}}\) (orange area) in which the bispectrum is sampled. The amplitude of the relevant contribution of the FF in \({\cal{D}}_{\mathrm{2}}\), \(|{\mathrm{Re}}[G_p(\vec \omega ,MT)]|\), for p = 2, (red surface plot) is shown on the top of the grid
For both the PSD and bispectrum, distinct pulse sequences have the effect of giving different weights to the comb teeth, granting access to complementary information about S(kωh) and \(S_2(\vec k\omega _h)\), enabling their reconstruction. More specifically, in both cases, the basic steps of our protocol consist of (i) applying a set of sufficiently distinct pulse sequences p (Fig. 3a); (ii) measuring the corresponding decay and phase parameters; and (iii) solving the resulting systems of linear equations, which give χp(MT) and φp(MT) as a function of S(kωh) and \(S_2(\vec k\omega _h)\). Since classical noise has a spectrum with even symmetry, S(ω) = S(−ω), the PSD is specified across all frequency space by its values at positive frequencies. Likewise, the bispectrum is completely specified by its values over a subspace \({\cal{D}}_2\) known as the principal domain41,46, illustrated in Fig. 3c. Reconstructing the bispectrum over \({\cal{D}}_{\mathrm{2}}\) and exploiting the symmetries that \(S_2(\vec \omega )\) exhibits (shown in Fig. 3c) thus suffices to retrieve the bispectrum over the whole relevant frequency domain.
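The comb structure invoked here follows from the periodicity of the repeated control: for a base sequence whose switching function returns to its initial sign after time T (an even number of π pulses), F_p(ω, MT) = F_p(ω, T) Σ_{m=0}^{M−1} e^{−iωmT}, so that |F_p(ω, MT)|^2 develops teeth of height proportional to M^2 and width proportional to 1/(MT) at the harmonics kωh. The few lines below (reusing the illustrative filter_function helper introduced after Eq. (2)) make this explicit.

import numpy as np

M, T = 10, 960e-9
omega_h = 2 * np.pi / T
w = np.linspace(0.05 * omega_h, 8 * omega_h, 4000)

F_base = filter_function(w, [T / 4, 3 * T / 4], T)              # base sequence (two-pulse CPMG here)
rep = np.sum(np.exp(-1j * w[:, None] * T * np.arange(M)), axis=1)
F_M = F_base * rep                                               # = F_p(w, M*T)
# |F_M|**2 peaks at k*omega_h with height M**2 * |F_base(k*omega_h)|**2 and width ~ 2*pi/(M*T);
# in the limit M >> 1 it acts as the frequency comb used for spectral sampling.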
Figure 4 presents experimental results for determining the mean and PSD, which suffice to characterize the noise process in the Gaussian approximation. To measure μB by Ramsey interferometry, we apply a pair of π/2 pulses with a drive at frequency ωd, first about x at time t = 0 (Rx(π/2)), and then about y at time t = T (Ry(π/2)). We choose a pulse interval T = 50 ns, which is short enough for cumulants of order higher than one to be negligible, but long enough to avoid pulse overlap. The qubit polarization at time tf after the two pulses is then 〈σz(tf)〉 ≈ (D + μB)T′, where D ≡ ωq − ωd is the drive detuning, and T′ is an effective time interval that accounts for the finite-width pulse shape (see Supplementary Note 6). Thus, plotting 〈σz(tf)〉 as a function of D produces a straight line whose x-intercept is −μB, leading to an estimate that is insensitive to the pulse shape to first order in the cumulant expansion. Figure 4a presents data for measurements of 〈σz(tf)〉, and shows how we isolate the contribution of the engineered noise source by performing the sequence with (blue data set) and without (black data set) applied noise. The mean of the engineered noise is estimated by subtracting the x-intercepts of the straight lines that are fitted to each data set. Performing these fits under the conditional normal model of linear regression (see Supplementary Note 6) yields the estimate \(\mu _B^{{\mathrm{est}}}/2\pi = 127.1 \pm 7.56\) kHz, where the uncertainty corresponds to the 95% confidence interval calculated from the asymptotic normal distribution of qubit polarization.
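Operationally, this estimate amounts to two straight-line fits and a difference of x-intercepts. A schematic version is shown below; the synthetic slopes and offsets only mimic the scale of the measured data and are not the experimental values.

import numpy as np

def x_intercept(D, sz):
    # Fit sz = a*D + b by least squares and return the x-intercept -b/a.
    (a, b), *_ = np.linalg.lstsq(np.vstack([D, np.ones_like(D)]).T, sz, rcond=None)
    return -b / a

D = 2 * np.pi * np.linspace(-1.0e6, 1.0e6, 21)                 # drive detunings (rad/s)
T_eff = 5e-8                                                    # effective interval T' ~ 50 ns
sz_off = T_eff * D + 0.01 * np.random.randn(D.size)             # synthetic <sigma_z>, noise off
sz_on = T_eff * (D + 2 * np.pi * 127e3) + 0.01 * np.random.randn(D.size)   # noise on
mu_B_est = x_intercept(D, sz_off) - x_intercept(D, sz_on)       # ~ 2*pi*127 kHz by construction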
Gaussian spectral estimation: noise mean and PSD. a Measured values of 〈σz〉 after a 50-ns-long Ramsey sequence vs. drive detuning D = ωq − ωd. The separation between the x-intercepts of the two fitted lines gives the mean \(\mu _B^{{\mathrm{est}}}\) of the injected dephasing noise. b Comparison of the experimental reconstruction (blue triangle) and Monte Carlo simulation (orange square) with the ideal PSD (gray solid line). c Decay constants χ. Except for p = 1, the ideal data (gray circles) are in very good agreement with both the experimental results and Monte Carlo simulations. Error bars represent 95% confidence intervals
To estimate the PSD by the comb approach outlined above, we use both a period of free evolution (p = 1) and M = 10 repetitions of base sequences p = 2, …, 11 illustrated in Fig. 3a (see Supplementary Note 4 for the actual pulse times). For M ≫ 1, the FF entering the decay constant in Eq. (1) becomes approximately \(|F_p(\omega ,MT)|^2 \approx \frac{M}{T}|F_p(\omega ,T)|^2\mathop {\sum}\limits_{k = - \infty }^\infty \delta (\omega - k\omega _h)\), which enables us to sample the PSD at the harmonic frequencies in terms of the (known) control FFs,
$$\chi _p(MT) \approx \frac{M}{T}\mathop {\sum}\limits_{k \in {\cal{K}}_1} {|F_p(k\omega _h,T)|^2} S(k\omega _h).$$
Here, we have used the even symmetry of the PSD, and the high-frequency decay of the PSD and FFs to truncate the comb to a finite set of positive harmonics, \({\cal{K}}_{\mathrm{1}}\) ≡ {0, …, K − 1}. Rather than solving the above linear system by matrix inversion as in ref. 13, we employ a statistically motivated maximum-likelihood estimate (MLE), which takes experimental error into account (see Supplementary Note 7). Using measurements of χp(MT) for each of the same P = 11 control sequences to be used for the bispectrum estimation, we find a well-conditioned system for K = 8.
Figure 4b compares the experimentally estimated PSD at the K = 8 harmonics (blue triangles) with the ideal PSD obtained from Eq. (5) for our engineered noise (solid gray line) and Monte Carlo simulations of the QNS protocol (orange squares). The experimental and simulated estimates of the PSD are plotted along with 95% confidence intervals obtained from the asymptotic normal distribution of the decay constants. Figure 4c shows the experimental and simulated values of χp(MT) that were used as input for the reconstructions, along with ideal values obtained by substituting Eq. (5) into Eq. (1) and approximating the FF by the ideal (infinite) comb as given above. The PSD is slightly underestimated at zero frequency in both the experiment and Monte Carlo simulation since the FF of sequence p = 1 (a 960-ns-long free induction decay) is comparable in bandwidth to the PSD, whereas the reconstruction procedure assumes the PSD is sampled by infinitely narrow FFs. The disagreement of the experimental and simulated χp(MT) for p = 1 with the ideal value is also explained by the non-negligible bandwidth of the FF (Fig. 4c). Apart from these well-understood discrepancies at ω = 0, the quantitative agreement of the experimental reconstruction with simulations and ideal values is remarkable, which demonstrates that our protocol is able to reliably characterize Gaussian features of the applied noise.
We are now in a position to present our key result: the reconstruction of the noise bispectrum. As anticipated, this entails a higher-dimensional analog of the comb-based approach used for the PSD. We estimate the non-Gaussian phase given in Eq. (2) by subtracting the contribution of the noise mean from the total measured phase, φp(MT) = ϕp(MT) − μBFp(0, MT), where we replace μB by \(\mu _B^{{\mathrm{est}}}\) experimentally determined above. After M ≫ 1 repetitions of sequence p, the FF becomes a 2D comb (Fig. 3d), and the non-Gaussian phase becomes a sampling of the bispectrum at the harmonics \(\vec k\omega _h\), that is, \(\varphi _p(MT) = - \frac{M}{{3!T^2}}\mathop {\sum}\limits_{\vec k \in {\Bbb Z}^2} {G_p} (\omega _h\vec k,T)S_2(\omega _h\vec k{\mkern 1mu} ).\) Since both the filter and bispectrum decay at high frequencies, we can truncate this sum to a finite number of \(\vec k = (k_1,k_2)\). As the bispectrum is completely specified by its values on the principal domain, we may further restrict our consideration to a subset of harmonics, \({\cal{K}}_2 \equiv \{ \vec k_1, \ldots ,\vec k_N\} \subset {\cal{D}}_2\) (Fig. 3d). The non-Gaussian phase then becomes
$$\varphi _p(MT) = - \frac{M}{{3!T^2}}\mathop {\sum}\limits_{\vec k \in {\cal{K}}_2} m (\omega _h\vec k){\mkern 1mu} {\mathrm{Re}}[G_p(\omega _h\vec k,T)]S_2(\omega _h\vec k{\mkern 1mu} ),$$
where the multiplicity \(m(\omega _h\vec k)\) accounts for the number of points equivalent to \(S_2(\omega _h\vec k)\) by the symmetry properties of the bispectrum. Also on account of these symmetries, the imaginary component of \(G_p(\omega _h\vec k,T)\) cancels when the sum is restricted to \({\cal{D}}_2\) (see Supplementary Note 8).
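To illustrate how each term of this sum is evaluated, the sketch below (reusing the illustrative filter_function helper introduced after Eq. (2); the base sequences and the unit multiplicity are placeholders rather than the experimental choices of Fig. 3a and Supplementary Note 8) computes the third-order filter G_p and the coefficient −(M/3!T^2) m(ωh k) Re[G_p(ωh k, T)] weighting each bispectrum sample; collecting these coefficients over sequences and harmonics yields the reconstruction matrix introduced below.

import numpy as np

def G(pulse_times, T, w1, w2):
    # Third-order filter G_p(w1, w2, T) = F_p(-w1, T) * F_p(-w2, T) * F_p(w1 + w2, T).
    return (filter_function(-w1, pulse_times, T)
            * filter_function(-w2, pulse_times, T)
            * filter_function(w1 + w2, pulse_times, T))

T, M = 960e-9, 10
omega_h = 2 * np.pi / T
harmonics = [(0, 1), (1, 1), (1, 2), (2, 2)]                    # a few points of D_2 (cf. Fig. 3d)
sequences = [[T / 4, 3 * T / 4], [T / 8, 3 * T / 8, 5 * T / 8, 7 * T / 8]]   # placeholders only

coeff = np.zeros((len(sequences), len(harmonics)))
for i, seq in enumerate(sequences):
    for j, (k1, k2) in enumerate(harmonics):
        m = 1.0                                                 # placeholder multiplicity m(omega_h k)
        coeff[i, j] = -(M / (6 * T ** 2)) * m * np.real(G(seq, T, k1 * omega_h, k2 * omega_h))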
By measuring the non-Gaussian phase for P ≥ N different control sequences, we can construct a vector \(\vec \varphi = [\varphi _1(MT), \ldots ,\varphi _P(MT)]^T\) and a linear system of the form
$$\vec \varphi = {\mathbf{A}}\vec S_2,\;\;\;{\mathbf{A}}_{pn} = - \frac{M}{{3!T^2}}{\mkern 1mu} m(\omega _h\vec k_n){\mkern 1mu} {\mathrm{Re}}[G_p(\omega _h\vec k_n,T)],$$
where \(\vec S_2 = [S_2(\omega _h\vec k_1), \ldots ,S_2(\omega _h\vec k_N)]^T\) contains the bispectrum at the harmonics in \({\cal{K}}_2\) and A is a P × N reconstruction matrix. The simplest way to estimate the bispectrum from this linear system is the least-squares estimate employed in ref. 41, involving the (pseudo-)inverse of the reconstruction matrix, \(\vec S_2^{{\mathrm{est}}} = {\mathbf{A}}^{ - 1}\vec \varphi\). As in the case of PSD estimation, a potential drawback of this inversion-based approach is numerical instability stemming from an ill-conditioned A, which occurs when the FFs have a high degree of spectral overlap. Since ill-conditioning makes the least-squares estimate sensitive to even small errors in the measured phases, we again utilize a maximum-likelihood approach with optional regularization to further increase stability (see Supplementary Note 8). From the asymptotic Gaussian distribution of the measurement outcomes of \(\vec \varphi\), the regularized maximum-likelihood estimate (RMLE) is found as
$$\vec S_2^{{\kern 1pt} {\mathrm{RMLE}}} = \mathop {{{\mathrm{argmin}}}}\limits_{\vec S_2} \left[ {\frac{1}{2}({\mathbf{A}}\vec S_2 - \vec \varphi )^T{\mathbf{\Sigma }}^{ - 1}({\mathbf{A}}\vec S_2 - \vec \varphi ) + \left\| {\lambda {\mathbf{D}}\vec S_2} \right\|_2^2} \right],$$
where ||⋅||2 denotes the L2-norm and λ ≥ 0 parametrizes the strength of the regularization47. Due to its dependence on the covariance matrix Σ, the RMLE down-weights phase measurements with larger error. Numerical stability is increased by the regularizer \(\left\| {\lambda {\mathbf{D}}\vec S_2} \right\|_2^2\), which acts as an effective constraint. When the smoothing matrix D is proportional to I, the regularizer reduces to the well-known Tikhonov (or L2) form. Since the numerical stability afforded by regularization comes at the cost of additional bias, choosing the regularization strength is a nontrivial task. In Supplementary Note 8, we detail how we have selected λ based on the so-called "L-curve criterion". Interestingly, since A is sufficiently well-conditioned for the sequences we have chosen, we find that regularization gives negligible benefit. Accordingly, we use λ = 0 (which recovers standard MLE) in our experimental reconstructions.
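Since the objective in Eq. (10) is quadratic in the unknown vector, its minimizer is available in closed form; a minimal implementation (with illustrative argument names) is given below.

import numpy as np

def rmle(A, phi, Sigma, lam=0.0, D=None):
    # Minimizer of 0.5*(A s - phi)^T Sigma^-1 (A s - phi) + ||lam * D s||_2^2,
    # i.e. the solution of (A^T Sigma^-1 A + 2 lam^2 D^T D) s = A^T Sigma^-1 phi.
    if D is None:
        D = np.eye(A.shape[1])            # D = I recovers Tikhonov (L2) regularization
    Si = np.linalg.inv(Sigma)
    return np.linalg.solve(A.T @ Si @ A + 2 * lam ** 2 * (D.T @ D), A.T @ Si @ phi)

# lam = 0 reduces to the unregularized MLE used for the reconstructions reported here;
# lam > 0 trades additional bias for numerical stability when A is ill-conditioned.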
Figure 5a compares the results of the non-Gaussian spectral estimation for the harmonics in the principal domain for the experiment (blue triangles) with both the ideal bispectrum obtained from Eq. (6) (gray circles) and from Monte Carlo simulations (orange squares). To estimate the experimental bispectrum, we input the measured data for \(\vec \varphi\) and Σ shown in Fig. 5b into \(\vec S_2^{\,{\mathrm{RMLE}}}\) given by Eq. (10). The ideal values of φp, also shown in Fig. 5b, are obtained by substituting Eq. (6) into Eq. (2). We further display 3D representations of the full bispectra, obtained by applying relevant symmetries to the data on \({\cal{D}}_{\mathrm{2}}\), for the ideal (Fig. 5c) and experimental (Fig. 5d) cases, respectively. Ignoring error bars, the reconstructed bispectrum appears to be an overestimate with respect to the ideal one. This error may be attributed to noise during the finite-duration control pulses used in the experiment, leading to effective pulse infidelity. Upon taking the error bars in Fig. 5a into consideration, however, the ideal and simulated values of the bispectrum lie within the 95% confidence intervals of the experimental reconstruction, suggesting that this estimation error is statistically insignificant and thus successfully extending the validation of our QNS protocol to the leading non-Gaussian noise cumulant.
Non-Gaussian spectral estimation: noise bispectrum. a Experimental data (blue triangles), Monte Carlo simulations (orange squares), and ideal values (gray circles) for the bispectrum of the engineered dephasing noise. The error bars indicate that the experimental bispectrum agrees with both the ideal bispectrum and Monte Carlo simulations of the protocol within 95% confidence intervals. b Estimated non-Gaussian phase angles φ. Error bars represent 95% confidence intervals. c 3D visualization of the ideal bispectrum. d 3D visualization of the reconstructed bispectrum for the experimental data
Although the theoretical bispectrum falls within the 95% confidence interval of the estimate, reducing the magnitude of uncertainties is clearly necessary to push the application of non-Gaussian QNS to uncontrollable native noise, whose strength may be comparatively weak. We note that the spectral characterization of the non-Gaussian noise process engineered in this experiment requires an extremely precise estimation of μB. Since reconstructions of the bispectrum are obtained using φp(MT) = ϕp(MT) − μBFp(0, MT), the uncertainty in \(\mu _B^{{\mathrm{est}}}\) propagates to φp(MT) when p has zero filter order, i.e. Fp(0, MT) ≠ 0. These sequences play a crucial role in estimating the bispectrum at the "zero points", grid points (ω1, ω2) with ω1 = 0 or ω2 = 0. Since μB is much larger than the third cumulant for the current noise process, even a small relative uncertainty in \(\mu _B^{{\mathrm{est}}}\) can lead to greater error in the bispectrum estimate at the zero points, as the error bars in Fig. 5a attest.
In summary, we experimentally demonstrated high-order spectral estimation in a quantum system. By producing and sensing engineered noise with well-controlled cumulants, we were able to successfully validate a spectroscopy protocol that reconstructs both the PSD and the bispectrum of non-Gaussian dephasing noise. Our theory and experimental demonstration lay the groundwork for future research aiming at complete spectral characterization of realistic non-Gaussian noise environments in quantum devices and materials. Theoretically, we expect that the regularized maximum-likelihood estimation approach to QNS we invoked here will prove crucial to ensure stable spectral reconstructions in more general settings. Devising alternative estimation protocols based on optimally band-limited control modulation and multitaper techniques48 appears especially compelling, in view of recent advances in the Gaussian regime24,49. We believe that obtaining a complete spectral characterization will ultimately provide deeper insight into the physics and interplay of different microscopic noise mechanisms, including non-classical non-Gaussian noise, as possibly arising from photon-number-mediated non-linear couplings50.
The data that support the findings of this study may be made available from the corresponding authors upon request and with the permission of the US Government sponsors who funded the work.
The code used for the analyses may be made available from the corresponding authors upon request and with the permission of the US Government sponsors who funded the work.
Percival, D. B. & Walden, A. T. Spectral Analysis for Physical Applications (Cambridge University Press, Cambridge, UK 1993).
Preskill, J. Quantum computing in the NISQ era and beyond. Quantum 2, 79 (2018).
Sekatski, P., Skotiniotis, M. & Dür, W. Dynamical decoupling leads to improved scaling in noisy quantum metrology. New J. Phys. 18, 073034 (2016).
Beaudoin, F., Norris, L. M. & Viola, L. Ramsey interferometry in correlated quantum noise environments. Phys. Rev. A 98, 020102(R) (2018).
Preskill, J. Sufficient condition on noise correlations for scalable quantum computing. Quantum Inf. Comput. 13, 181–194 (2013).
Schoelkopf, R. J., Clerk, A. A., Girvin, S. M., Lehnert, K. W. & Devoret, M. H. in Quantum Noise in Mesoscopic Physics, NATO Science Series, (ed. Y. V. Nazarov) Vol. 7. 175–203 (Springer, Dordrecht, 2002).
Faoro, L. & Viola, L. Dynamical suppression of 1/f noise processes in qubit systems. Phys. Rev. Lett. 92, 117905 (2004).
Cywiński, L., Lutchyn, R. M., Nave, C. P. & Das Sarma, S. How to enhance dephasing time in superconducting qubits. Phys. Rev. B 77, 174509 (2008).
Biercuk, M. J. et al. Optimized dynamical decoupling in a model quantum memory. Nature 458, 996–1000 (2009).
Yuge, T., Sasaki, S. & Hirayama, Y. Measurement of the noise spectrum using a multiple-pulse sequence. Phys. Rev. Lett. 107, 170504 (2011).
Young, K. C. & Whaley, K. B. Qubits as spectrometers of dephasing noise. Phys. Rev. A 86, 012314 (2012).
Paz-Silva, G. A. & Viola, L. General transfer-function approach to noise filtering in open-loop quantum control. Phys. Rev. Lett. 113, 250501 (2014).
Álvarez, G. A. & Suter, D. Measuring the spectrum of colored noise by dynamical decoupling. Phys. Rev. Lett. 107, 230501 (2011).
Bylander, J. et al. Noise spectroscopy through dynamical decoupling with a superconducting flux qubit. Nat. Phys. 7, 565–570 (2011).
Yan, F. et al. Rotating-frame relaxation as a noise spectrum analyser of a superconducting qubit undergoing driven evolution. Nat. Commun. 4, 2337 (2013).
Yoshihara, F. et al. Flux qubit noise spectroscopy using Rabi oscillations under strong driving conditions. Phys. Rev. B 89, 020503(R) (2014).
Quintana, C. M. et al. Observation of classical-quantum crossover of 1/f flux noise and its paramagnetic temperature dependence. Phys. Rev. Lett. 118, 057702 (2017).
Dial, O. E. et al. Charge noise spectroscopy using coherent exchange oscillations in a singlet-triplet qubit. Phys. Rev. Lett. 110, 146804 (2013).
Muhonen, J. T. et al. Storing quantum information for 30 seconds in a nanoelectronic device. Nat. Nanotech. 9, 986–991 (2014).
Chan, K. W. et al. Assessment of a silicon quantum dot spin qubit environment via noise spectroscopy. Phys. Rev. Appl. 10, 044017 (2018).
Yoneda, J. et al. A quantum-dot spin qubit with coherence limited by charge noise and fidelity higher than 99.9%. Nat. Nanotech. 13, 102–107 (2018).
Meriles, C. A. et al. Imaging mesoscopic nuclear spin noise with a diamond magnetometer. J. Chem. Phys. 133, 124105 (2010).
Romach, Y. et al. Spectroscopy of surface-induced noise using shallow spins in diamond. Phys. Rev. Lett. 114, 017601 (2015).
Frey, V. M. et al. Application of optimal band-limited control protocols to quantum noise sensing. Nat. Commun. 8, 2189 (2017).
Wang, Y. et al. Single-qubit quantum memory exceeding ten-minute coherence time. Nat. Photon. 11, 646–650 (2017).
Paladino, E., Galperin, Y. M., Falci, G. & Altshuler, B. L. 1/f noise: implications for solid-state quantum information. Rev. Mod. Phys. 86, 361–418 (2014).
Simmonds, R. W. et al. Decoherence in Josephson phase qubits from junction resonators. Phys. Rev. Lett. 93, 077003 (2004).
Oliver, W. D. & Welander, P. B. Materials in superconducting quantum bits. MRS Bull. 38, 816–825 (2013).
Zaretskey, V., Suri, B., Novikov, S., Wellstood, F. C. & Palmer, B. S. Spectroscopy of a Cooper-pair box coupled to a two-level system via charge and critical current. Phys. Rev. B 87, 174522 (2013).
Lisenfeld, J. et al. Observation of directly interacting coherent two-level systems in an amorphous material. Nat. Commun. 6, 6182 (2015).
Lisenfeld, J. et al. Decoherence spectroscopy with individual two-level tunneling defects. Sci. Rep. 6, 23786 (2016).
Falci, G., D'Arrigo, A., Mastellone, A. & Paladino, E. Dynamical suppression of telegraph and 1/f noise due to quantum bistable fluctuators. Phys. Rev. A 70, 040101 (2004).
Galperin, Y. M., Altshuler, B. L., Bergli, J., Shantsev, D. & Vinokur, V. Non-Gaussian dephasing in flux qubits due to 1/f noise. Phys. Rev. B 76, 064531 (2007).
Kotler, S., Akerman, N., Glickman, Y. & Ozeri, R. Nonlinear single-spin spectrum analyzer. Phys. Rev. Lett. 110, 110503 (2013).
Makhlin, Y. & Shnirman, A. Dephasing of solid-state qubits at optimal points. Phys. Rev. Lett. 92, 178301 (2004).
Barends, R. et al. Superconducting quantum circuits at the surface code threshold for fault tolerance. Nature 508, 500–503 (2014).
Yan, F. et al. The flux qubit revisited to enhance coherence and reproducibility. Nat. Commun. 7, 12964 (2016).
Hutchings, M. D. et al. Tunable superconducting qubits with flux-independent coherence. Phys. Rev. Appl. 8, 044003 (2017).
Lin, Y.-H. et al. Demonstration of protection of a superconducting qubit from energy decay. Phys. Rev. Lett. 120, 150503 (2018).
Nikias, C. L. & Mendel, J. M. Signal processing with higher-order spectra. IEEE Signal Process. Mag. 10, 10–37 (1993).
Norris, L. M., Paz-Silva, G. A. & Viola, L. Qubit noise spectroscopy for non-Gaussian dephasing environments. Phys. Rev. Lett. 116, 150503 (2016).
Blais, A., Huang, R. S., Wallraff, A., Girvin, S. M. & Schoelkopf, R. J. Cavity quantum electrodynamics for superconducting electrical circuits: an architecture for quantum computation. Phys. Rev. A 69, 062320 (2004).
Wallraff, A. et al. Strong coupling of a single photon to a superconducting qubit using circuit quantum electrodynamics. Nature 431, 162–167 (2004).
Yan, F. et al. Principles for optimizing generalized superconducting flux qubit design. (2019) (in preparation).
Motzoi, F., Gambetta, J. M., Rebentrost, P. & Wilhelm, F. K. Simple pulses for elimination of leakage in weakly nonlinear qubits. Phys. Rev. Lett. 103, 110501 (2009).
Chandran, V. & Elgar, S. A general procedure for the derivation of principal domains of higher-order spectra. IEEE Trans. Signal Process. 42, 229–233 (1994).
Hansen, P. C. in Computational Inverse Problems in Electrocardiology, (ed. P. Johnston) 119–142 (WIT Press, Southampton, UK 2000).
Birkelund, Y., Hanssen, A. & Powers, E. J. Multitaper estimators of polyspectra. Signal Process. 83, 545–559 (2003).
Norris, L. M. et al. Optimally band-limited spectroscopy of control noise using a qubit sensor. Phys. Rev. A 98, 032315 (2018).
Yan, F. et al. Distinguishing coherent and thermal photon noise in a circuit quantum electrodynamical system. Phys. Rev. Lett. 120, 260504 (2018).
It is a pleasure to thank K. Harrabi, M. Kjaergaard, P. Krantz, G. A. Paz-Silva, J. I. J. Wang, and R. Winik for insightful discussions, and M. Pulido for generous assistance. We thank B. M. Niedzielski for the SEM image of the device. This research was funded by the U.S. Army Research Office grant No. W911NF-14-1-0682 (to L.V. and W.D.O.); and by the Department of Defense via MIT Lincoln Laboratory under Air Force Contract No. FA8721-05-C-0002 (to W.D.O.). Y.S. and F.B. acknowledge support from the Korea Foundation for Advanced Studies and from the Fonds de Recherche du Québec – Nature et Technologies, respectively. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the U.S. Government.
Félix Beaudoin
Present address: NanoAcademic Technologies, 666 rue Sherbrooke Ouest, Suite 802, Montreal, Quebec, H3A 1E7, Canada
Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
Youngkyu Sung, Fei Yan, Jack Y. Qiu, Uwe von Lüpke, Terry P. Orlando, Simon Gustavsson & William D. Oliver
Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
Youngkyu Sung, Jack Y. Qiu, Terry P. Orlando & William D. Oliver
Department of Physics and Astronomy, Dartmouth College, Hanover, NH, 03755, USA
Félix Beaudoin, Leigh M. Norris & Lorenza Viola
MIT Lincoln Laboratory, 244 Wood Street, Lexington, MA, 02421, USA
David K. Kim, Jonilyn L. Yoder & William D. Oliver
Department of Physics, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
William D. Oliver
Youngkyu Sung
Leigh M. Norris
Fei Yan
David K. Kim
Jack Y. Qiu
Uwe von Lüpke
Jonilyn L. Yoder
Terry P. Orlando
Simon Gustavsson
Lorenza Viola
Y.S., F.Y., and S.G. performed the experiments. F.B. and Y.S. carried out numerical simulations and analyzed the data, and L.V., L.M.N., S.G., and W.D.O. provided feedback. L.M.N., F.B., and L.V. designed the pulse sequences and developed the estimation protocol used in the experiment. F.Y. and S.G. designed the device and D.K.K. and J.L.Y. fabricated it. J.Y.Q. and U.L. provided experimental assistance. Y.S., F.B., L.M.N., and L.V. wrote the manuscript with feedback from all authors. L.V., S.G., T.P.O., and W.D.O. supervised the project.
Correspondence to Lorenza Viola or William D. Oliver.
Peer review information: Nature Communications thanks the anonymous reviewers for their contribution to the peer review of this work. Peer reviewer reports are available.
Sung, Y., Beaudoin, F., Norris, L.M. et al. Non-Gaussian noise spectroscopy with a superconducting qubit sensor. Nat Commun 10, 3715 (2019). https://doi.org/10.1038/s41467-019-11699-4
Received: 21 March 2019
| CommonCrawl |
Kathryn Leonard
Kathryn Leonard is an American mathematician and computer scientist. Leonard received a Henry L. Alder Award from the Mathematical Association of America in 2012.[1] She received the AWM Service Award from the Association for Women in Mathematics (AWM) in 2015.[2] She served as the AWM Meetings Coordinator from 2015 - 2018.[3] She was President of the AWM and is now AWM Past-President.[4] She is also director of the NSF-funded Center for Undergraduate Research in Mathematics. She is currently on the American Mathematical Society Nominating Committee.[5]
Kathryn Leonard
Nationality: American
Alma mater: PhD, Brown University, 2004; BS, University of New Mexico
Awards: AWM Service Award; Henry L. Alder Award
Scientific career
Institutions: California State University Channel Islands; Occidental College
Doctoral advisor: David Mumford
Leonard's research focuses on geometric modeling with applications to computer vision, computer graphics, and data science. She has received multiple major grants, including a National Science Foundation CAREER Award.[6]
Leonard and Misha Collins, together with several other collaborators, are authors of "The 2D shape structure dataset", an article on a crowd-sourced database on the structure of shapes.[7]
References
1. "Henry L. Alder Award". Mathematical Association of America. Retrieved 7 April 2019.
2. "Association for Women in Mathematics Service Award 2015". Association for Women in Mathematics. Retrieved 7 April 2019.
3. "History of the AWM". Association for Women in Mathematics. Retrieved 13 April 2019.
4. "Executive Committee". Association for Women in Mathematics (AWM). Retrieved 2023-03-29.
5. "AMS Committees". American Mathematical Society. Retrieved 2023-03-29.
6. "Faculty Profiles: Kathryn Leonard". CSU Channel Islands. Retrieved 7 April 2019.
7. Carlier, Axel; Leonard, Kathryn; Hahmann, Stefanie; Morin, Geraldine; Collins, Misha (August 2016). "The 2D shape structure dataset: A user annotated open access database". Computers & Graphics. 58: 23–30. doi:10.1016/j.cag.2016.05.009.
External links
• Official website
Presidents of the Association for Women in Mathematics
1971–1990
• Mary W. Gray (1971–1973)
• Alice T. Schafer (1973–1975)
• Lenore Blum (1975–1979)
• Judith Roitman (1979–1981)
• Bhama Srinivasan (1981–1983)
• Linda Preiss Rothschild (1983–1985)
• Linda Keen (1985–1987)
• Rhonda Hughes (1987–1989)
• Jill P. Mesirov (1989–1991)
1991–2010
• Carol S. Wood (1991–1993)
• Cora Sadosky (1993–1995)
• Chuu-Lian Terng (1995–1997)
• Sylvia M. Wiegand (1997–1999)
• Jean E. Taylor (1999–2001)
• Suzanne Lenhart (2001–2003)
• Carolyn S. Gordon (2003–2005)
• Barbara Keyfitz (2005–2007)
• Cathy Kessel (2007–2009)
• Georgia Benkart (2009–2011)
2011–2030
• Jill Pipher (2011–2013)
• Ruth Charney (2013–2015)
• Kristin Lauter (2015–2017)
• Ami Radunskaya (2017–2019)
• Ruth Haas (2019–2021)
• Kathryn Leonard (2021–2023)
• Talitha Washington (2023–2025)
| Wikipedia |
Universal space
In mathematics, a universal space is a certain metric space that contains all metric spaces whose dimension is bounded by some fixed constant. A similar definition exists in topological dynamics.
Definition
Given a class $\textstyle {\mathcal {C}}$ of topological spaces, $\textstyle \mathbb {U} \in {\mathcal {C}}$ is universal for $\textstyle {\mathcal {C}}$ if each member of $\textstyle {\mathcal {C}}$ embeds in $\textstyle \mathbb {U} $. Menger stated and proved the case $\textstyle d=1$ of the following theorem. The theorem in full generality was proven by Nöbeling.
Theorem:[1] The $\textstyle (2d+1)$-dimensional cube $\textstyle [0,1]^{2d+1}$ is universal for the class of compact metric spaces whose Lebesgue covering dimension is less than $\textstyle d$.
Nöbeling went further and proved:
Theorem: The subspace of $\textstyle [0,1]^{2d+1}$ consisting of set of points, at most $\textstyle d$ of whose coordinates are rational, is universal for the class of separable metric spaces whose Lebesgue covering dimension is less than $\textstyle d$.
The last theorem was generalized by Lipscomb to the class of metric spaces of weight $\textstyle \alpha $, $\textstyle \alpha >\aleph _{0}$: There exists a one-dimensional metric space $\textstyle J_{\alpha }$ such that the subspace of $\textstyle J_{\alpha }^{2d+1}$ consisting of set of points, at most $\textstyle d$ of whose coordinates are "rational" (suitably defined), is universal for the class of metric spaces whose Lebesgue covering dimension is less than $\textstyle d$ and whose weight is less than $\textstyle \alpha $.[2]
Universal spaces in topological dynamics
Consider the category of topological dynamical systems $\textstyle (X,T)$ consisting of a compact metric space $\textstyle X$ and a homeomorphism $\textstyle T:X\rightarrow X$. The topological dynamical system $\textstyle (X,T)$ is called minimal if it has no proper non-empty closed $\textstyle T$-invariant subsets. It is called infinite if $\textstyle |X|=\infty $. A topological dynamical system $\textstyle (Y,S)$ is called a factor of $\textstyle (X,T)$ if there exists a continuous surjective mapping $\textstyle \varphi :X\rightarrow Y$ which is equivariant, i.e. $\textstyle \varphi (Tx)=S\varphi (x)$ for all $\textstyle x\in X$.
Similarly to the definition above, given a class $\textstyle {\mathcal {C}}$ of topological dynamical systems, $\textstyle \mathbb {U} \in {\mathcal {C}}$ is universal for $\textstyle {\mathcal {C}}$ if each member of $\textstyle {\mathcal {C}}$ embeds in $\textstyle \mathbb {U} $ through an equivariant continuous mapping. Lindenstrauss proved the following theorem:
Theorem[3]: Let $\textstyle d\in \mathbb {N} $. The compact metric topological dynamical system $\textstyle (X,T)$ where $\textstyle X=([0,1]^{d})^{\mathbb {Z} }$ and $\textstyle T:X\rightarrow X$ is the shift homeomorphism $\textstyle (\ldots ,x_{-2},x_{-1},\mathbf {x_{0}} ,x_{1},x_{2},\ldots )\rightarrow (\ldots ,x_{-1},x_{0},\mathbf {x_{1}} ,x_{2},x_{3},\ldots )$
is universal for the class of compact metric topological dynamical systems whose mean dimension is strictly less than $\textstyle {\frac {d}{36}}$ and which possess an infinite minimal factor.
In the same article Lindenstrauss asked what is the largest constant $\textstyle c$ such that a compact metric topological dynamical system whose mean dimension is strictly less than $\textstyle cd$ and which possesses an infinite minimal factor embeds into $\textstyle ([0,1]^{d})^{\mathbb {Z} }$. The results above implies $\textstyle c\geq {\frac {1}{36}}$. The question was answered by Lindenstrauss and Tsukamoto[4] who showed that $\textstyle c\leq {\frac {1}{2}}$ and Gutman and Tsukamoto[5] who showed that $\textstyle c\geq {\frac {1}{2}}$. Thus the answer is $\textstyle c={\frac {1}{2}}$.
See also
• Universal property
• Urysohn universal space
• Mean dimension
References
1. Hurewicz, Witold; Wallman, Henry (2015) [1941]. "V Covering and Imbedding Theorems §3 Imbedding of a compact n-dimensional space in I2n+1: Theorem V.2". Dimension Theory. Princeton Mathematical Series. Vol. 4. Princeton University Press. pp. 56–. ISBN 978-1400875665.
2. Lipscomb, Stephen Leon (2009). "The quest for universal spaces in dimension theory" (PDF). Notices Amer. Math. Soc. 56 (11): 1418–24.
3. Lindenstrauss, Elon (1999). "Mean dimension, small entropy factors and an embedding theorem. Theorem 5.1". Inst. Hautes Études Sci. Publ. Math. 89 (1): 227–262. doi:10.1007/BF02698858. S2CID 2413058.
4. Lindenstrauss, Elon; Tsukamoto, Masaki (March 2014). "Mean dimension and an embedding problem: An example". Israel Journal of Mathematics. 199 (2): 573–584. doi:10.1007/s11856-013-0040-9. ISSN 0021-2172. S2CID 2099527.
5. Gutman, Yonatan; Tsukamoto, Masaki (2020-07-01). "Embedding minimal dynamical systems into Hilbert cubes". Inventiones Mathematicae. 221 (1): 113–166. arXiv:1511.01802. Bibcode:2020InMat.221..113G. doi:10.1007/s00222-019-00942-w. ISSN 1432-1297. S2CID 119139371.
| Wikipedia |
\begin{document}
\draft
\title{Decay of an excited atom near an absorbing microsphere}
\author{Ho Trung Dung\cite{byline}, Ludwig Kn\"{o}ll, and Dirk-Gunnar Welsch} \address{ Theoretisch-Physikalisches Institut, Friedrich-Schiller-Universit\"{a}t Jena, Max-Wien-Platz 1, 07743 Jena, Germany}
\date{December 19, 2000} \maketitle
\begin{abstract} Spontaneous decay of an excited atom near a dispersing and absorbing microsphere of given complex permittivity that satisfies the Kramers-Kronig relations is studied, with special emphasis on a Drude-Lorentz permittivity. Both the whispering gallery field resonances below the band gap (for a dielectric sphere) and the surface-guided field resonances inside the gap (for a dielectric or a metallic sphere) are considered. Since the decay rate mimics the spectral density of the sphere-assisted ground-state fluctuation of the radiation field, the strengths and widths of the field resonances essentially determine the feasible enhancement of spontaneous decay. In particular, strong enhancement can be observed for transition frequencies within the interval in which the surface-guided field resonances strongly overlap. When material absorption becomes significant, then the highly structured emission pattern that can be observed when radiative losses dominate reduces to that of a strongly absorbing mirror. Accordingly, nonradiative decay becomes dominant. In particular, if the distance between the atom and the surface of the microsphere is small enough, the decay becomes purely nonradiative. \end{abstract}
\pacs{PACS numbers: 42.50.Ct, 42.60.Da, 42.50.Lc}
\begin{multicols}{2}
\section{Introduction} \label{sec1}
Light propagating in dielectric spheres can be trapped by repeated total internal reflections. When the round-trip optical paths fit integer numbers of the wavelength, whispering gallery (WG) waves are formed, which combine extreme photonic confinement with very high quality factors. The frequencies and linewidths of WG waves are highly sensitive to strain, temperature, and other parameters of the surrounding environment in general. The unique properties of WG waves are crucial to cavity-QED and various optoelectronics applications~\cite{1}. WG waves with $Q$-values larger than $10^9$ have been reported for fused-silica microspheres~\cite{2,3,4} and liquid hydrogen droplets~\cite{5}, and the ultimate level determined by intrinsic material absorption has been achieved~\cite{3}.
Since the spontaneous decay of an excited atom depends on both the atom and the ambient medium \cite{6}, it can be expected that if an atom is near a dielectric microsphere, its spontaneous decay sensitively responds to the sphere-assisted electromagnetic field structure. In the case of an atom in free space, the well-known continuum of free-space density of modes of radiation is available for an emitted photon. By redistributing the density of possible field excitations in the presence of dielectric bodies, the radiative properties of excited atoms can be controlled. Experimental observations of lifetime modifications of ions or dye molecules embedded in microspheres or droplets \cite{9}, cavity QED effects in the coupling of a dilute cesium vapor to the external evanescent field of a WG mode in a fused silica microsphere \cite{10}, and detection of individual spatially constrained and oriented molecules on the surfaces of glycerol microdroplets~\cite{11} have been reported. In Refs.~\cite{7,8}, the decay rate of a two-level atom located in the vicinity of a dielectric sphere was calculated and enhancement factors of hundreds were predicted. The calculations are based on the assumption that there is no material absorption. In practice however, there exists always some material absorption that, even when it is small, can still affect the system in a substantial way. Knowledge of both the radiative losses and the losses due to material absorption is crucial, e.g., in designing ultralow threshold microsphere lasers using cooled atoms or single quantum dots \cite{13}.
In the present paper we study the problem of spontaneous decay of an excited atom near an absorbing microsphere of given complex permittivity that satisfies the Kramers-Kronig relations, applying the formalism developed in Refs.~\cite{12,19a}. This enables us to take into account both dispersion and absorption in a consistent way, avoiding any restrictive conditions with respect to the frequency domain. In the numerical calculations we assume that the permittivity of the microsphere depends on the frequency according to the Drude-Lorentz model, which covers dielectric and metallic matter.
The paper is organized as follows. In Sec.~\ref{sec2} the radiation field resonances associated with a microsphere are examined. Equations for determining the positions and widths of both the WG field resonances below a band gap and the surface-guided (SG) field resonances inside a band gap are given. The corresponding quality factors are calculated and the effects of dispersion, absorption, and cavity size are studied. Further, the spatial distribution of the cavity-assisted ground-state fluctuation of the radiation field is examined. In particular, the wings of the WG and SG fluctuations that extend outside the microsphere are responsible for exciting such waves in the spontaneous decay of an excited atom that is situated near the surface of the sphere.
In Sec.~\ref{sec3} the rate of spontaneous decay, the line shift, and the (spatially resolved) intensity of the emitted light are calculated as functions of the atomic transition
\begin{figure}\label{f1a}
\end{figure}
\noindent frequency, with special emphasis on the influence of the orientation of the transition dipole moment, the distance between the atom and the microsphere, and the dispersion and absorption outside and inside a band gap. In this context, the ratio of radiative decay to nonradiative decay due to material absorption is analyzed.
In Sec.~\ref{sec4} the results derived for the general case of dielectric sphere are applied to a metallic microsphere by appropriately specifying the matter parameters. Typical features are briefly discussed. Finally, some concluding remarks are given in Sec.~\ref{sec5}.
\section{Radiation field resonances} \label{sec2}
Let us consider a
dielectric sphere of radius $R$ whose complex permittivity $\epsilon(\omega)$ $\!=$ $\!\epsilon_R(\omega)$ $\!+$ $\!i\epsilon_I(\omega)$ reads, according to a single-resonance Drude-Lorentz model, \begin{equation} \label{e10}
\epsilon(\omega) = 1 +
{\omega_P^2 \over
\omega_T^2 - \omega^2 - i\omega \gamma}\,, \end{equation} where $\omega_P$ corresponds to the coupling constant, and $\omega_T$ and $\gamma$ are respectively the medium oscillation frequency and the linewidth. An example of the dependence on frequency of the permittivity is shown in Fig.~\ref{f1a}. Note that $\epsilon(\omega)$ satisfies the Kramers-Kronig relations. {F}rom the permittivity, the complex refractive index can be obtained according to the relations \begin{eqnarray} \label{e1}
&n(\omega)=\sqrt{\epsilon(\omega)}=n_R(\omega)
+ in_I(\omega), \\ &\displaystyle\label{e1a}
n_{R(I)}(\omega) =
\sqrt{{1\over2}\left[
\sqrt{\epsilon_R^2(\omega)+\epsilon_I^2(\omega)}
+(-)\, \epsilon_R (\omega)\right]} \ . \end{eqnarray} The dielectric features a band gap between the transverse frequency $\omega_T$ and the longitudinal frequency \mbox{$\omega_L$ $\!=$ $\!\sqrt{\omega_T^2+\omega_P^2}$}. Far from the medium resonances, we typically observe that \begin{equation} \label{e1b}
\epsilon_I(\omega) \ll |\epsilon_R(\omega)|. \end{equation} For $\omega$ $\!<$ $\!\omega_T$, i.e., outside the band gap, we have \begin{eqnarray} \label{e10a}
&\epsilon_R(\omega)>1, \label{e10a1}\\&\displaystyle
n_R(\omega)\simeq\sqrt{\epsilon_R(\omega)}\gg
n_I(\omega)\simeq {\epsilon_I(\omega)\over
2\sqrt{\epsilon_R(\omega)}}\,, \end{eqnarray} and for $\omega_T$ $\!<$ $\!\omega$ $\!<\omega_L$, i.e., inside the band gap, \begin{eqnarray} \label{e10b}
&\epsilon_R(\omega)<0, \\&\displaystyle
n_R(\omega)\simeq {\epsilon_I(\omega)\over
2\sqrt{|\epsilon_R(\omega)|}}\ll
n_I(\omega)\simeq \sqrt{|\epsilon_R(\omega)|}\,. \label{e10b1} \end{eqnarray} Note that $\epsilon_I(\omega)$ $\!>$ $\!0$ in any case.
Using the source-quantity representation of the quantized electromagnetic field \cite{19a,13a}, all the information about the influence of the sphere on the spontaneous decay of an atom is contained in the Green tensor of the classical, phenomenological Maxwell equations, where dispersion and absorption are fully taken into account \cite{12,19a}. In particular, the sphere-assisted (transverse) field resonances that can give rise to an enhancement of the spontaneous decay are determined by the poles of the transverse part of the Green tensor.
{F}rom the explicit form of the Green tensor as given in Appendix A, it follows, on setting the denominators in Eqs.~(\ref{A6}) [or (\ref{A7a})] and (\ref{A7}) [or (\ref{A7b})] equal to zero, that the resonances are the complex roots \begin{equation} \label{e2}
\omega = \Omega - i \delta \end{equation} of the characteristic equations \begin{equation} \label{e3}
M(\omega)=
{H'_{l+1/2}(k_1R) \over H_{l+1/2}(k_1R)}
-\sqrt{\epsilon(\omega)}\,
{J'_{l+1/2}(k_2R) \over J_{l+1/2}(k_2R)} = 0 \end{equation} for TE waves, and \begin{eqnarray} \label{e4} \lefteqn{
M(\omega)=
{H'_{l+1/2}(k_1R) \over H_{l+1/2}(k_1R)} } \nonumber\\&&\hspace{2ex}
-{1\over \sqrt{\epsilon(\omega)}}
{J'_{l+1/2}(k_2R) \over J_{l+1/2}(k_2R)}
+ {1\over 2k_1R}\left[1-{1\over\epsilon(\omega)} \right]
= 0 \end{eqnarray} for TM waves, where $k_1$ $\!=$ $\!\omega/c$, $k_2$ $\!=$ $\!\sqrt{\epsilon(\omega)}\omega/c$, and $R$ is the microsphere radius [$J_\nu(z)$ - Bessel function; $H_\nu(z)$ - Hankel function]. In Eq.~(\ref{e2}), $\Omega$ and $\delta$, respectively, are the position and the HWHM of a resonance line. Equations (\ref{e3}) and (\ref{e4}) are in agreement with the results of classical Lorenz-Mie scattering theory \cite{1}. Similar equations with real permittivity have also been derived quantum mechanically by expanding the field operators in spherical wave functions \cite{13b}. Note that these results cannot be extended to absorbing media by simply replacing the real permittivity by a complex one; rather, a more refined approach to the electromagnetic field quantization in absorbing dielectrics is required (for details, see \cite{13a} and references therein).
The linewidths of the resonances are determined by radiative losses associated with the input-output coupling and losses due to material absorption. For small linewidths, the total width can be regarded as being the sum of a purely radiative term and a purely absorptive term. In that case we may write \begin{equation} \label{e5b} \delta_{tot} \simeq \delta_{rad} + \delta_{abs}, \end{equation} where $\delta_{rad}$ corresponds to the linewidth obtained when the imaginary part of the permittivity is set equal to zero.
\subsection{Resonances below the band gap} \label{sec2.1}
We first consider the (transverse) field resonances below the band gap, \mbox{$\omega$ $\!<$ $\!\omega_T$}, where WG waves
can be observed. These resonances are commonly classified by means of three numbers \cite{1,2}: the angular momentum number $l$, the azimuthal number $m$, and the number $i$ of radial maxima of the field inside the sphere. In the case of a uniform sphere, the WG waves are ($2l$ $\!+$ $\!1$)-fold degenerate, i.e., the \mbox{$2l$ $\!+$ $\!1$} azimuthal resonances belong to the same frequency $\Omega_{l,i}$.
When the imaginary part of the refractive index is much smaller than the real part, $n_I(\omega)$ $\!\ll$ $\!n_R(\omega)$, then the method used in Ref.~\cite{14} for solving Eqs.~(\ref{e3}) and (\ref{e4}) in the case of constant, real refractive index formally applies also to the case of frequency-dependent, complex refractive index. Interpreting the WG waves as resulting from total internal reflection, it follows that \mbox{$R{\rm Re}\,k_2$ $\gtrsim$ $\!\nu$ $\!\gtrsim$ $\!Rk_1$}, where \mbox{$\nu$ $\!=$ $\!l+1/2$}, and $\nu$ can be assumed to be large in general. Following Ref.~\cite{14}, the complex roots $\omega_{l,i}$ of Eqs.~(\ref{e3}) and (\ref{e4}) can then formally be given by \begin{equation} \label{e5} \omega_{l,i} = f_{l,i}[n(\omega_{l,i})] + O(\nu^{-1}), \end{equation} where \begin{eqnarray} \label{e5a} \lefteqn{
f_{l,i}[n(\omega_{l,i})] = {c\over Rn(\omega_{l,i})}
\Biggl\{ \nu + 2^{-1/3}\alpha_i\nu^{1/3} } \nonumber\\[.5ex]&&\hspace{2ex}
-{P\over \left[ n(\omega_{l,i})^2 - 1 \right]^{1/2} }
+ {3\over10}\,2^{-2/3} \alpha_i^2 \nu^{-1/3} \nonumber\\[.5ex]&&\hspace{2ex}
-{2^{-1/3}P \left[ n(\omega_{l,i})^2 -2P^2/3 \right]
\over \left[ n(\omega_{l,i})^2 - 1 \right]^{3/2} }
\,\alpha_i\nu^{-2/3}
\Biggr\}, \end{eqnarray} with $P$ $\!=$ $\!n(\omega_{l,i})$ and $P$ $\!=$ $\!1/n(\omega_{l,i})$, respectively, for TE and TM waves, and the $\alpha_i$ are the roots of the Airy function Ai$(-z)$. Although Eq.~(\ref{e5}) is not yet an explicit expression for the roots as in the case of a nondispersing and nonabsorbing (idealized) medium, it is numerically more tractable than the original equations and offers a first insight into the WG waves.
\subsubsection{Frequency-independent permittivity} \label{sec2.1.2}
In many cases in practice, it may be a good approximation to ignore the dependence on frequency of the refractive index within a chosen frequency interval. {F}rom Eqs.~(\ref{e5}) and (\ref{e5a}), the positions of the WG waves are then found to be \begin{equation} \label{e6}
\Omega = f(n_R) + O
\bigg[ \left(n_I\over n_R \right)^2\bigg] \ . \end{equation} For notational convenience, here and in the following we drop the classification indices. Obviously, the first term in Eq.~(\ref{e6}) provides a very good approximation for the positions of the WG waves, provided that the ratio $n_I/n_R$ is sufficiently small. Note that \mbox{$n$ $\! \sim$ $\! 1.45$ $\! +$ $\! i 10^{-11}$} for silica \cite{3} and {$n$ $\! \sim$ $\! 1.47$ $\! +$ $\! i 10^{-7}$} for glycerol \cite{15} in the optical region. In other words, small material absorption does not affect much the positions of the WG waves.
The contribution to the linewidth of the material absorption can be evaluated from Eq.~(\ref{e5}) by first order Taylor expansion of $f(n)$ at $n_R$, \begin{equation} \label{e6.1}
\delta_{abs} \simeq - n_I f'(n_R) \ . \end{equation} To calculate the total line width determined by both radiative and absorption losses, one has to go back to the original equations (\ref{e3}) and (\ref{e4}), since the radiative losses are essentially determined by the (disregarded) term $O(\nu^{-1})$ in Eq.~(\ref{e5}). First-order Taylor expansion of $M(\omega)$ around $\Omega$ yields \begin{equation} \label{e7}
\delta_{tot} \simeq {\rm Im}
\left[ {M(\Omega) \over M'(\Omega)} \right], \end{equation} where \begin{equation} \label{e8}
M'(\Omega) \simeq \epsilon-1 \end{equation} for TE waves, and \begin{equation} \label{e8a}
M'(\Omega) \simeq (\epsilon-1)
\left[\left(1+\frac{1}{\epsilon}\right)
\left(\frac{c\nu}{R\Omega}\right)^2 -1
\right] \end{equation} for TM waves. The radiative contribution to the line\-width can then be calculated by means of Eq.~(\ref{e5b}).
{F}rom Fig.~\ref{f1} it is seen that the contribution to the linewidth of the material absorption increases with the imaginary part of the refractive index $n_I$ and the radius $R$ of the sphere. When the values of $n_I$ and $R$ rise above certain threshold values, the absorption losses start to dominate the radiative losses. It is further seen that for chosen $n_I$ the threshold value of $R$ decreases with increasing $n_R$. For instance, the absorption losses start to dominate the radiative losses if \mbox{$R$ $\!\gtrsim$ $\! 8\lambda$} for \mbox{$n$ $\!=1.45$ $\!+$ $\!i10^{-9}$} [see Fig. \ref{f1}(a)], and if \mbox{$R$ $\!\gtrsim$ $\! 1.6\lambda$} for \mbox{$n$ $\!=$ $\!2.45+i10^{-9}$} [see Fig. \ref{f1}(b)]. It should be noted that the approximate results based on Eqs.~(\ref{e6.1}) and (\ref{e7}) are in good agreement with the exact ones (without dispersion).
\begin{figure}
\caption{ The ratio of the total width $\delta_{tot}$ and the radiative width $\delta_{rad}$ of TM$_{l,1}$ WG resonances is shown as a function of the imaginary part $n_I$ of the refractive index for two values of the real part $n_R$ of the refractive index and various values of $l$. The values \mbox{$l$ $\!=$ $\!70,\,60,\,50$} correspond to \mbox{$R/\lambda$ $\!\simeq$ $\!8.4,\,7.3,\,6.2$}, respectively, and the values \mbox{$l$ $\!=$ $\!22,\,20,\,15$} correspond to \mbox{$R/\lambda$ $\!\simeq$ $\!1.8,\,1.6,\,1.3$}, respectively. }
\label{f1}
\end{figure}
Introducing the quality factor $Q$ $\!=$ $\Omega/(2\delta)$, the ab\-sorp\-tion-assisted part $Q_{abs}$ $\!=$ $\!\Omega/(2\delta_{abs})$ is derived from Eqs.~(\ref{e5a})--(\ref{e6.1}) to be \begin{eqnarray} \label{e9}
Q_{abs} = {n_R\over 2n_I}
+ {n_R\over 2n_I}{(2n_R-P) \over
(n_R^2-1)^{3/2} } \,\nu^{-1}
+ O\bigl(\nu^{-5/3}\bigr) \end{eqnarray} [$P$ $\!=$ $\!n_R$ $(1/n_R)$ for TE (TM) waves]. When the absorption losses dominate the radiative ones, it is frequently assumed \cite{3,16} that the quality factor is simply given by the ratio of the (plane-wave) absorption length and the wavelength, i.e., $Q$ $\!=$ $\!n_R/(2n_I)$, which is just the first term on the right-hand side in Eq.~(\ref{e9}). As long as the values of $\nu$ and/or $n_R$ are large, which is the case for WG waves, the second term on the right-hand side in Eq.~(\ref{e9}) is typically less than a few percent, and the first term is indeed the leading term. Nevertheless, Eq.~(\ref{e9}) may become significantly wrong, because dispersion is fully disregarded. Moreover, the radiative losses can dominate material absorption.
\subsubsection{Frequency-dependent permittivity} \label{sec2.1.3}
In order to obtain the exact quality factor as a function of the resonance frequency, the permittivity as a function of frequency must be known and the calculations should be based on the original equations (\ref{e3}) and (\ref{e4}). In particular, the quality factor $Q_{rad}$, which accounts for the line broadening associated with the input--output coupling, can be obtained by setting the imaginary part of the permittivity equal to zero, so that \mbox{$Q$ $\!=$ $\!Q_{rad}$}. Recalling Eq.~(\ref{e5b}), we may write \mbox{$1/Q$ $\!=$ $1/Q_{tot}$ $\!=$ $\!1/Q_{rad}$ $\!+$ $\!1/Q_{abs}$}, from which the quality factor $Q_{abs}$, which accounts for the line broadening due to material absorption, can then be calculated. Typical results obtained for TM WG waves on the basis of the model permittivity in Eq.~(\ref{e10}) are shown in Fig.~\ref{f12}.
{F}rom Fig.~\ref{f12}(a) it is seen that (for chosen radius of the microsphere) $Q_{abs}$ decreases with increasing frequency (i.e., increasing $n_I$ $\!\sim$ $\!\epsilon_I$), whereas $Q_{rad}$ increases with increasing frequency (i.e., increasing $n_R$ $\!\sim$ $\!\sqrt{\epsilon_R}$). Sufficiently below the band gap where $n_R$ is small and the radiative losses dominate, $Q_{tot}$ follows $Q_{rad}$. With increasing frequency the absorption losses increase and can eventually dominate the radiative losses, so that now $Q_{tot}$ follows $Q_{abs}$.
In Fig.~\ref{f12}(b), the values of $Q_{abs}$ calculated from the exact equation (\ref{e4}) are compared with the approximate values according to Eq.~(\ref{e9}). It is seen that the neglect of dispersion in Eq.~(\ref{e9}) leads to some underestimation of the absorption-assisted quality factor. The difference between the exact and the approximate values increases with frequency, because of the increasing dispersion. We have also calculated $Q_{abs}$ from the approximate equation (\ref{e5}) [together with Eq.~(\ref{e5a})] and found a good agreement with the exact result.
Figure \ref{f12}(c) illustrates the influence of the microsphere radius and the strength of absorption on the quality factor. As expected, for lower frequencies where $Q_{tot}$ follows $Q_{rad}$, the quality is improved if the radius of the sphere is increased. For higher frequencies where $Q_{tot}$ follows $Q_{abs}$, the quality becomes independent of the radius. Note that with increasing radius the (frequency) distance between neighboring resonances decreases.
\subsection{Resonances inside the band gap} \label{sec2.2}
Let us now look for (transverse) field resonances inside the band gap, which is a strictly forbidden zone for nonabsorbing bulk material. Inside the gap, we may assume that the real part of the refractive index is much smaller than the imaginary part, \mbox{$n_R(\omega)$ $\! \ll$ $\!n_I(\omega)$}. Using Bessel-function expansion (Appendix B), it can be shown that there are no TE resonances [Eq.~(\ref{e3}) has no solutions], and the complex roots of Eq.~(\ref{e4}), which determine the TM resonances, can formally be given by
\begin{figure}\label{f12}
\end{figure}
\begin{equation} \label{e11}
\omega_l = {c\over R}
\Biggl[ \nu\sqrt{1\!+\!{1\over \epsilon(\omega_l)}}
+ {\epsilon(\omega_l)^2\!+\!\epsilon(\omega_l)\!+\!1
\over 2\epsilon(\omega_l)
\sqrt{-\epsilon(\omega_l)\!-\!1} }
\Biggr]
+ O\bigl(\nu^{-1}\bigr), \end{equation} where the condition
\begin{figure}\label{f2}
\end{figure}
\begin{equation} \label{e11a}
\epsilon_R(\omega_l) < -1 \end{equation} has been required to be satisfied. Note that for the Drude-Lorentz model (\ref{e10}), condition (\ref{e11a}) leads to \begin{equation} \label{e12}
\Omega_l={\rm Re}\, \omega_l <
\sqrt{\omega_T^2+{\textstyle\frac{1}{2}}\omega_P^2} \, . \end{equation}
The total linewidth can be determined according to Eq.~(\ref{e7}), which is still valid here. If we ignore the frequency dependence of the permittivity, then the expression on the right-hand side of Eq.~(\ref{e11}) can be regarded as a function of the refractive index. The contribution of the material absorption to the linewidth can then be calculated by a first-order Taylor expansion of this function at $n_I$, in close analogy to Eq.~(\ref{e6.1}) (with the roles of $n_R$ and $n_I$ being exchanged). In terms of the quality factor \mbox{$Q_{abs}$ $\!=$ $\!\Omega/(2\delta_{abs})$} the result reads \begin{equation} \label{e12.1} Q_{abs} = \frac{n_I(n_I^2\!-\!1)}
{2n_R} + O\bigl(\nu^{-1}\bigr). \end{equation} Since now $n_R$ is proportional to $\gamma$, $Q_{abs}$ is again proportional to $\gamma^{-1}$.
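For orientation, the frequency window in which condition (\ref{e11a}) holds, and hence SG waves can exist, is easily checked numerically. The following Python sketch assumes for Eq.~(\ref{e10}) the standard single-resonance Drude-Lorentz form \mbox{$\epsilon(\omega)$ $\!=$ $\!1$ $\!+$ $\!\omega_P^2/(\omega_T^2\!-\!\omega^2\!-\!i\gamma\omega)$} (an assumption made for this illustration) and uses purely illustrative parameter values:

\begin{verbatim}
# Sketch: frequency window in which condition (e11a), Re(eps) < -1, holds for
# a single-resonance Drude-Lorentz permittivity (assumed form of Eq. (e10)),
#   eps(w) = 1 + wP^2 / (wT^2 - w^2 - i*gamma*w).
import numpy as np

wT, wP, gamma = 1.0, 0.5, 1e-4      # in units of omega_T; illustrative values

def eps(w):
    return 1.0 + wP**2 / (wT**2 - w**2 - 1j * gamma * w)

w = np.linspace(1.0001 * wT, 1.5 * wT, 20001)
window = w[eps(w).real < -1.0]
print("SG (TM) resonances possible for omega/omega_T in "
      f"[{window.min():.4f}, {window.max():.4f}]")
print("upper bound of Eq. (e12):", np.sqrt(wT**2 + 0.5 * wP**2))  # ~1.0607
\end{verbatim}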
It is well known that when the condition (\ref{e11a}) is satisfied, then surface-guided (SG) waves can be excited -- waves that are bound to the interface and whose amplitudes are
damped into either of the neighboring media \cite{17a}.
Typical examples are surface phonon polaritons for dielectrics and surface plasmon polaritons for metals. Obviously, the TM resonances as determined by Eq.~(\ref{e11}) correspond to SG waves of a sphere. Note that for the SG waves, in contrast to the WG waves, each angular momentum number $l$ is associated with only one wave.
Examples of the frequencies and quality factors of SG waves are plotted in Fig.~\ref{f2}.
{F}rom a comparison with Fig.~\ref{f12} it is seen that the quality factors of the SG waves are comparable with those of the WG waves and also behave in a similar way as the latter. So, the radiative losses are the dominant ones for lower-order resonances, whereas for higher-order resonances the losses essentially arise from material absorption [Figs.~\ref{f2}(a) and (c)], and the radiative losses can be reduced by increasing the radius of the sphere [Fig.~\ref{f2}(c)]. Further, a neglect of dispersion again leads to some overestimation of material absorption [Fig.~\ref{f2}(b)].
\subsection{Ground-state fluctuations} \label{sec2.3}
When an excited atom is in free space, then its coupling to the vacuum-field fluctuation gives rise to spontaneous decay.
In the presence of dielectric bodies, the fluctuation of the electromagnetic field with respect to the ground-state of the combined system that consists of the radiation field and the dielectric matter must be considered.
The electric-field correlation in the ground state can be characterized by the correlation function
\begin{figure}\label{f21}
\end{figure}
\begin{eqnarray} \label{e12a} \lefteqn{
\langle 0| \underline{\hat{E}}_i({\bf r},\omega)
\underline{\hat{E}}^\dagger_j({\bf r}',\omega')
|0\rangle } \nonumber\\ &&\hspace{2ex}
= {\hbar\omega^2\over \pi\epsilon_0 c^2}
{\rm Im}\ G_{ij}({\bf r},{\bf r}',\omega)
\delta(\omega-\omega') \ , \end{eqnarray}
\begin{figure}
\caption{ The same as in Fig.~\protect\ref{f21}, but for outside the sphere. }
\label{f22}
\end{figure}
\noindent where $\underline{\hat{E}}_i({\bf r},\omega)$ is the electric-field operator in the frequency domain and $G_{ij}({\bf r},{\bf r}',\omega)$ is the classical Green tensor of the dielectric-matter formation of given complex permittivity $\epsilon({\bf r},\omega)$ that satisfies the Kramers-Kronig relations. For equal spatial arguments, the power spectrum of the ground-state fluctuation can be obtained according to the relation \begin{equation} \label{e12b}
\langle 0| \underline{\hat{E}}_i({\bf r},\omega)
\underline{\hat{E}}^\dagger_j({\bf r},\omega')
|0\rangle =
P_{ij}({\bf r},\omega)
\delta(\omega-\omega'), \end{equation} which together with Eq.~(\ref{e12a}) reveals that \begin{equation} \label{e12c}
P_{ij}({\bf r},\omega) =
{\hbar\omega^2\over \pi\epsilon_0 c^2}\,
{\rm Im}\ G_{ij}({\bf r},{\bf r},\omega). \end{equation}
{F}rom the linearity of the Maxwell equations it follows that the Green tensor for a dielectric body can be decomposed as [cf. Eq.~(\ref{A1})] \begin{equation} \label{e12d}
G_{ij}({\bf r},{\bf r}',\omega) =
G_{ij}^0({\bf r},{\bf r}',\omega) +
G_{ij}^R({\bf r},{\bf r}',\omega), \end{equation} where $G_{ij}^0$ is the Green tensor for either the vacuum (outside the body) or the bulk material (inside the body), and the scattering term $G_{ij}^R$ ensures the correct boundary conditions (at the surface of discontinuity). Inside the body ${\rm Im}\,G_{ij}^0$ becomes singular for \mbox{${\bf r}$ $\!\to$ $\!{\bf r}'$}, and some regularization (e.g., averaging over a small volume) is required. Since we are only interested in the spatial variation of the ground-state fluctuation, we may disregard the (space-independent) contribution that arises from ${\rm Im}\,G_{ij}^0$.
The spatial variation of the radial diagonal component $P_{rr}$ inside and outside the microsphere under consideration is illustrated in Figs.~\ref{f21} and \ref{f22}, respectively. Note that $P_{rr}$ only refers to the TM field noise [cf. Eqs. (\ref{A3}) -- (\ref{A5})]. It is seen that the fluctuation associated with the WG-type field [Figs.~\ref{f21}(a) and \ref{f22}(a)] essentially concentrates inside the sphere, whereas the fluctuation associated with the SG-type field [Figs.~\ref{f21}(b,c) and \ref{f22}(b,c)] concentrates in the vicinity of the surface of the sphere. In both cases, a singularity is observed at the surface. In Figs.~\ref{f21}(a) and \ref{f22}(a), we have restricted our attention to a single-maximum WG field, i.e., \mbox{$i$ $\!=$ $\!1$}. Otherwise \mbox{$i$ ($>$ $\!1$)} maxima would be observed inside the sphere.
The strong enhancement of the fluctuation which leads to the singularity at the surface comes from absorption and gives rise to nonradiative energy transfer from the atom to the medium
(see Sec.~\ref{sec3.2}). Clearly, for sufficiently small absorption the divergence becomes meaningless, because the required (small) spatial resolution would not be admissible. {F}rom Figs.~\ref{f21}(a) and \ref{f22}(a) it is seen that material absorption reduces the fluctuation of the WG field inside and outside the sphere, and Figs.~\ref{f21}(b,c) and \ref{f22}(b,c) show that absorption changes the area on either side of the surface over which the SG field fluctuation extends. As expected, the effects are less pronounced for low-order resonances where the radiative losses are the dominant ones [compare Figs.~\ref{f21}(b) and (c) with Figs.~\ref{f22}(b) and (c) respectively].
\section{Spontaneous decay} \label{sec3}
\subsection{Basic formulae} \label{sec3.1}
In the weak-coupling regime, an excited atom near a dispersing and absorbing body decays exponentially, where the decay rate is given by
\begin{equation} \label{e13}
A = {2k_A^2\mu_i\mu_j\over \hbar\epsilon_0}\,
{\rm Im}\,G_{ij}({\bf r}_A,{\bf r}_A,
\omega_A), \end{equation} ($k_A$ $\!=$ $\!\omega_A/c$; $\bbox{\mu}$, atomic transition dipole moment),
and the contribution of the body to the shift of the transition frequency reads \begin{equation} \label{e141} \delta\omega_A
= {\mu_i\mu_j\over \pi\hbar\epsilon_0}\,
{\cal P}\!\int_0^\infty d\omega\, {\omega^2\over c^2}
\frac{{\rm Im}\,G_{ij}^{R}({\bf r}_A,{\bf r}_A,\omega)}
{\omega-\omega_A}\,, \end{equation} which can be rewritten, on using the Kramers-Kronig relations, as \begin{eqnarray} \label{e14} \lefteqn{ \delta\omega_A
= {k_A^2\mu_i\mu_j\over \hbar\epsilon_0}\,
\biggl[
{\rm Re}\,G_{ij}^R({\bf r}_A,{\bf r}_A,\omega_A) } \nonumber\\&&\hspace{2ex}
-\, {{\cal P}\over \pi}\!\int_0^\infty d\omega\,
{\omega^2\over \omega_A^2}
\frac{{\rm Im}\,G_{ij}^R({\bf r}_A,{\bf r}_A,\omega)}
{\omega+\omega_A}
\biggr] \end{eqnarray} (for details, see \cite{12,19a}).
It is not difficult to see that in Eq.~(\ref{e14}) the second term, which is not sensitive to the atomic transition frequency,
is small compared to the first one and can therefore be neglected. Note that the vacuum Lamb shift (see, e.g., \cite{20}) may be thought of as being already included in the atomic transition frequency.
The intensity of the spontaneously emitted light registered by a pointlike photodetector at position ${\bf r}$ and time $t$ reads \cite{12}
\begin{eqnarray} \label{e14a} \lefteqn{
I({\bf r},t) =
\sum_i \biggl| {k_A^2\mu_j \over \pi\epsilon_0}
\int_0^t dt' \Big[ C_{u}(t') } \nonumber \\ && \hspace{2ex} \times
\int_0^\infty d\omega\,
{\rm Im}\, G_{ij} ({\bf r},{\bf r}_A,\omega)
e^{-i(\omega-\omega_A)(t-t')}
\Big]\biggr|^2 , \end{eqnarray} where $C_{u}(t)$ is the probability amplitude of finding the atom in the upper state. Note that Eq.~(\ref{e14a}) is valid for an arbitrary coupling regime. In particular, in the weak coupling regime, where the Markov approximation applies,
$C_{u}(t')$ can be taken at \mbox{$t'$ $\!=$ $\!t$} and put in front of the time integral in Eq.~(\ref{e14a}), with $C_{u}(t)$ being simply the exponential \begin{equation} \label{e14b}
C_{u}(t)=e^{ \left(-A/2 + i\delta\omega_A \right)t} \ . \end{equation} Equation (\ref{e14a}) thus simplifies to \begin{equation} \label{e15}
I({\bf r},t) \simeq
|{\bf F}({\bf r},{\bf r}_A,\omega_A)|^2
e^{-At} , \end{equation} where \begin{eqnarray} \label{e16} \lefteqn{
F_i({\bf r},{\bf r}_A,\omega_A) =
-{ik_A^2\mu_j \over \epsilon_0}
\biggl[
G_{ij}({\bf r},{\bf r}_A,\omega_A) } \nonumber\\&&\hspace{2ex}
-\, {{\cal P}\over\pi} \int_0^\infty d\omega\,
\frac{{\rm Im}\,
G_{ij}({\bf r},{\bf r}_A,\omega)}
{\omega+\omega_A}
\biggr]. \end{eqnarray} Since the second term on the right-hand side of Eq.~(\ref{e16}) is small compared to the first one, it can be omitted, and the spatial distribution of the emitted light (emission pattern) can be given, on disregarding the transit time delay, by \begin{equation} \label{e16a}
|{\bf F}({\bf r},{\bf r}_A,\omega_A)|^2
\simeq \sum_i
\biggl| {k_A^2\mu_j \over \epsilon_0}
G_{ij}({\bf r},{\bf r}_A,\omega_A) \biggr|^2. \end{equation}
Material absorption gives rise to nonradiative decay. Obviously, the rate (\ref{e13}) is the total decay rate and thus describes both radiative and nonradiative decay. The fraction of emitted radiation energy can be obtained by integrating $I({\bf r},t)$ with respect to time and over the surface of a sphere whose radius is much larger than the extension of the system,
\begin{equation} \label{e16b} W = 2c\epsilon_0\int_0^\infty dt \int_0^{2\pi} d\phi
\int_0^\pi d\theta\, \rho^2 \sin\theta \,I({\bf r},t) \end{equation} ($\bbox{\rho}$ $\!=$ $\!{\bf r}$ $\!-$ $\!{\bf r}_A$).
The ratio $W/W_0$, where \mbox{$W_0$ $\!=$ $\!\hbar\omega_A$} is the emitted energy in free space, then gives us a measure of the emitted energy, and accordingly, \mbox{$1$ $\!-$ $\!W/W_0$} measures the energy absorbed by the body.
\subsection{Spontaneous decay rate} \label{sec3.2}
Using Eqs.~(\ref{A1}) -- (\ref{A3}) and (\ref{A3b}) -- (\ref{A5}), the decay rate (\ref{e13}) for a (with respect to the microsphere) radially oriented transition dipole moment can be given by
\begin{eqnarray} \label{e17} \lefteqn{
A^\perp = A_0 \biggl\{ 1
+{\textstyle{3\over 2}} \sum_{l=1}^\infty l(l+1)(2l+1) } \nonumber\\&&\hspace{2ex} \times\;
{\rm Re}\,\biggl[ {\cal B}^N_l\!(\omega_A)
\biggl( {h^{(1)}_l({k_Ar_A}) \over {k_Ar_A}} \biggr)^2 \biggr]
\biggr\}, \end{eqnarray} and for a tangential dipole it reads
\begin{eqnarray} \label{e18} \lefteqn{
A^\parallel = A_0 \biggl\{ 1
+ {\textstyle{3\over 4}} \sum_{l=1}^\infty (2l+1) } \nonumber\\&&\hspace{2ex}\times\;
{\rm Re}\,\biggl[ {\cal B}^M_l\!(\omega_A)
\left( h^{(1)}_l({k_Ar_A}) \right)^2 \nonumber\\&&\hspace{2ex}
+\,{\cal B}^N_l\!(\omega_A)
\biggl( { \bigl[{k_Ar_A} h^{(1)}_l({k_Ar_A})\bigr]^\prime
\over {k_Ar_A} } \biggr)^2
\biggr]
\biggr\}, \end{eqnarray} where the prime indicates the derivative with respect to $k_Ar_A$, and
\begin{equation} \label{e20}
A_0 = {k_A^3\mu^2 \over 3\hbar\pi\epsilon_0} \end{equation} is the rate of spontaneous emission in free space. Note that a radially oriented transition dipole moment only couples to TM waves, whereas a tangentially oriented dipole moment couples to both TM and TE waves.
Equations (\ref{e17}) and (\ref{e18}) generalize the results obtained for nondispersing and nonabsorbing matter whose resonance frequencies are far from the atomic transition frequencies, i.e., \mbox{$\epsilon$ $\!=$ $\epsilon_R$ $\!>$ $\!1$}, \cite{7,8} to arbitrary Kramers-Kronig consistent matter, without placing restrictions on the transition frequency.
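To indicate how Eq.~(\ref{e17}) can be evaluated in practice, the following Python sketch (NumPy/SciPy assumed) sums the series with the coefficient ${\cal B}^N_l$ taken directly from Eq.~(\ref{A7}); the permittivity, sphere radius, and atomic position are illustrative choices rather than the parameters used in the figures. The truncation of the $l$ sum is justified by the geometric decay of the terms for \mbox{$l$ $\!\gg$ $\!k_AR$} [cf. Eq.~(\ref{C3})].

\begin{verbatim}
# Sketch: normalized decay rate A_perp/A_0 of Eq. (e17) for a radially
# oriented dipole, with the Mie coefficient B^N_l taken from Eq. (A7).
# Requires NumPy and SciPy; all parameters below are illustrative.
import numpy as np
from scipy.special import jv, hankel1

def j_l(l, z):                      # spherical Bessel function j_l(z)
    return np.sqrt(np.pi / (2 * z)) * jv(l + 0.5, z)

def h1_l(l, z):                     # spherical Hankel function h_l^(1)(z)
    return np.sqrt(np.pi / (2 * z)) * hankel1(l + 0.5, z)

def d_zj(l, z):                     # d/dz [z j_l(z)] = z j_{l-1}(z) - l j_l(z)
    return z * j_l(l - 1, z) - l * j_l(l, z)

def d_zh(l, z):                     # d/dz [z h_l^(1)(z)]
    return z * h1_l(l - 1, z) - l * h1_l(l, z)

def BN_l(l, eps, kR):               # Eq. (A7); z1 = kR, z2 = sqrt(eps)*kR
    z1, z2 = kR, np.sqrt(eps) * kR
    num = eps * j_l(l, z2) * d_zj(l, z1) - j_l(l, z1) * d_zj(l, z2)
    den = eps * j_l(l, z2) * d_zh(l, z1) - d_zj(l, z2) * h1_l(l, z1)
    return -num / den

eps = (1.45 + 1e-4j)**2                        # illustrative complex permittivity
kR, krA = 2 * np.pi * 2.0, 2 * np.pi * 2.2     # R = 2 lambda_A, r_A = 2.2 lambda_A

rate = 1.0
for l in range(1, 81):
    term = BN_l(l, eps, kR) * (h1_l(l, krA) / krA)**2
    rate += 1.5 * l * (l + 1) * (2 * l + 1) * term.real
print("A_perp / A_0 =", rate)
\end{verbatim}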
\begin{figure}\label{f3}
\end{figure}
When the atom is very close to the microsphere, Eqs.~(\ref{e17}) and (\ref{e18}) simplify to (see Appendix \ref{appC}) \begin{equation} \label{E21}
A^\perp = {3A_0\over 4k_A^3}
{\epsilon_I(\omega_A)
\over |\epsilon(\omega_A)+1|^2}
{1\over (\Delta r)^3} + O(1) \end{equation} and \begin{equation} \label{E22}
A^\parallel = {3A_0\over 8k_A^3}
{\epsilon_I(\omega_A)
\over |\epsilon(\omega_A)+1|^2}
\left[ {1\over (\Delta r)^3}
+ {1\over 2R^2\Delta r} \right]
+ O(1), \end{equation} with \mbox{$\Delta r$ $\! =$ $\! r_A$ $\!-$ $\!R$} being the distance between the atom and the surface of the microsphere.
The terms \mbox{$\sim$ $\!(\Delta r)^{-3}$} and \mbox{$\sim$ $\!(\Delta r)^{-1}$} result from the TM near-field coupling. Since they are proportional to $\epsilon_I$, they describe nonradiative decay, i.e., energy transfer from the atom to the medium, and reflect the strong enhancement of the electromagnetic-field fluctuation as \mbox{$\Delta r$ $\!\to$ $\!0$} (see Sec.~\ref{sec2.3}). In particular, the terms \mbox{$\sim$ $\!(\Delta r)^{-3}$} in Eqs.~(\ref{E21}) and (\ref{E22}) are exactly the same as in the case when the atom is close to a planar interface \cite{23,24}. Obviously, nonradiative decay does not respond sensitively to the actual radiation-field structure. Since the distance of the atom from the surface of the microsphere must not be smaller than interatomic distances (otherwise the concept of macroscopic electrodynamics fails), the divergence at the surface (\mbox{$\Delta r$ $\!=$ $\!0$}) is not observed.
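As a simple consistency check (with an illustrative permittivity, not one of the model parameters used in the figures), the leading \mbox{$\sim$ $\!(\Delta r)^{-3}$} term of Eq.~(\ref{E21}) can be evaluated as follows:

\begin{verbatim}
# Sketch: leading (Delta r)^{-3} term of the near-surface decay rate,
# Eq. (E21), normalized to the free-space rate A_0 (illustrative parameters).
import numpy as np

eps = -2.0 + 0.1j            # complex permittivity at the transition frequency
lam_A = 1.0                  # transition wavelength (arbitrary units)
k_A = 2 * np.pi / lam_A

for dr in (0.01, 0.02, 0.05):                  # Delta r in units of lambda_A
    A_ratio = 3 / (4 * k_A**3) * eps.imag / abs(eps + 1)**2 / dr**3
    print(f"Delta r = {dr:.2f} lambda_A :  A_perp/A_0 ~ {A_ratio:.3e}")
\end{verbatim}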
The dependence of the decay rate (\ref{e17}) on the rad\-iat\-ion-field structure, which can be observed for not too small (large) values of $\Delta r$ ($\epsilon_I$), is illustrated in Fig.~\ref{f3} for a radially oriented transition dipole moment. Since the decay rate is proportional to the spectral density of final states, its dependence on the transition frequency mimics the excitation spectrum of the sphere-assisted radiation field. As a result, both the WG and SG field resonances can strongly enhance the spontaneous decay. The enhancement decreases with increasing distance between the atom and the sphere [compare Figs.~\ref{f3}(a) and (b)]. When material absorption increases, the resonance lines are broadened at the expense of their heights and the enhancement is accordingly reduced [see the inset in Fig.~\ref{f3}(a)].
Figure \ref{f3} reveals that the SG field resonances, which are the more intense ones in general, can give rise to a much stronger enhancement of the spontaneous decay than the WG field resonances. In particular, with increasing angular-momentum number the lines of the SG field resonances strongly overlap and huge enhancement factors can be observed for transition frequencies inside the band gap [e.g., factors of the order of magnitude of $10^4$ for the parameters chosen in Fig.~\ref{f3}(a)]. When the distance between the atom and the sphere increases, then the atom rapidly decouples from that part of the field. Thus, the huge enhancement of spontaneous decay rapidly diminishes and the interval in which inhibition of spontaneous decay is typically observed extends accordingly [see Fig.~\ref{f3}(b)].
\subsection{Lamb shift} \label{sec3.3}
Substituting expressions (\ref{A3}) and (\ref{A3b}) -- (\ref{A5}) into Eq.~(\ref{e14}), we find that the shift of the atomic transition frequency, which is caused by the presence of the microsphere, is \begin{eqnarray} \label{e23} \lefteqn{
\delta\omega_A^\perp = -
{3A_0\over 4} \sum_{l=1}^\infty l(l+1)(2l+1) } \nonumber\\&&\hspace{2ex} \times\;
{\rm Im} \biggl\{ {\cal B}^N_l\!(\omega_A)
\biggl[ {h^{(1)}_l({k_Ar_A}) \over {k_Ar_A}} \biggr]^2
\biggr\} \end{eqnarray} for a radially oriented transition dipole moment, and \begin{eqnarray} \label{e24} \lefteqn{
\delta\omega_A^\parallel = -
{3A_0\over 8} \sum_{l=1}^\infty (2l+1) } \nonumber\\&&\hspace{2ex}\times\;
{\rm Im} \biggl\{ {\cal B}^M_l\!(\omega_A)
\left[ h^{(1)}_l({k_Ar_A}) \right]^2 \nonumber\\&&\hspace{2ex}
+\,{\cal B}^N_l\!(\omega_A)
\biggl[ {\bigl[{k_Ar_A} h^{(1)}_l({k_Ar_A})\bigr]^\prime
\over {k_Ar_A}} \biggr]^2
\biggr\} \end{eqnarray} for a tangentially oscillating dipole. In particular, when the distance between the atom and the sphere is very small, then Eqs.~(\ref{e23}) and (\ref{e24}) simplify to [in close analogy to Eqs.~(\ref{E21}) and (\ref{E22})]
\begin{equation} \label{e25}
\delta\omega_A^\perp = -{3A_0\over 16k_A^3}
{|\epsilon(\omega_A)|^2-1
\over |\epsilon(\omega_A)+1|^2}
{1\over (\Delta r)^3} + O(1) \end{equation} and \begin{equation} \label{e26}
\delta\omega_A^\parallel = -{3A_0\over 32k_A^3}
{|\epsilon(\omega_A)|^2\!-\!1
\over |\epsilon(\omega_A)\!+\!1|^2}
\left[ {1\over (\Delta r)^3}
+{1\over 2R^2\Delta r} \right]
+ O(1). \end{equation} The terms \mbox{$\sim$ $\!(\Delta r)^{-3}$} and \mbox{$\sim$ $\!(\Delta r)^{-1}$} again result from the TM near-field coupling, and the leading terms agree with those obtained for a planar interface \cite{24}.
Note that in contrast to the decay rate, the Lamb shift diverges for \mbox{$\Delta r$ $\!\to$ $\!0$}
even when \mbox{$\epsilon_I$ $\!=$ $\!0$}.
The dependence of the frequency shift (\ref{e23}) on the rad\-iat\-ion-field structure for not too small values of $\Delta r$ is illustrated in Fig.~\ref{f5}.
It is seen
that each field resonance can give rise to a noticeable frequency shift in the very vicinity of the resonance frequency; transition frequencies that are lower (higher) than the resonance frequency are shifted to lower (higher) frequencies. In close analogy to the behavior of the decay rate, the shift is more pronounced for SG field resonances than for WG field resonances and can be huge for large angular momentum numbers when the lines of the SG field resonances strongly overlap.
The behavior of the Lamb shift as shown in Fig.~\ref{f5}(b) can already be seen in the single-resonance limit \cite{24a}. If the atomic transition frequency $\omega_A$ is close to a resonance frequency $\Omega$ of the microsphere, contribution from other resonances may be ignored in a first approximation. Regarding the resonance line as being a Lorentzian and using contour integration, we obtain from Eq.~(\ref{e141}) \begin{eqnarray} \label{e27}
\delta\omega_A
&\simeq& -{A(\Omega) \delta^2
\over 4\pi}\,
{\cal P}\!\! \int_{-\infty}^\infty \!\!d\omega\,
{1\over \omega\!-\!\omega_A}
{1\over (\omega\!-\!\Omega)^2\!+\!\delta^2}
\nonumber\\&=&
-{A(\Omega)\delta \over 2}
{\Delta \over \Delta^2+\delta^2} \,, \end{eqnarray} where
$A(\Omega)$ is the decay rate as given by Eq.~(\ref{e13}) (with $\Omega$ in place of $\omega_A$), and
\mbox{$\Delta$ $\!=$ $\!\omega_A$ $\!-$ $\!\Omega$}. In particular, Eq.~(\ref{e27}) indicates that the frequency shift peaks at half maximum on both sides of the resonance line.
With increasing material absorption, the linewidth
$\delta$ increases while
$A(\Omega)$ decreases
and thus the absolute values of the frequency shift are reduced, the distance between the maximum and the minimum being somewhat increased.
With decreasing distance between the atom and the microsphere, near-field effects become important and Eq.~(\ref{e27}) fails, as can be seen from a comparison of Figs.~\ref{f5}(a) and (b).
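The single-resonance behavior described by Eq.~(\ref{e27}) is easily tabulated; in the following sketch (arbitrary units, illustrative values of $A(\Omega)$ and $\delta$) the extrema at \mbox{$\Delta$ $\!=$ $\!\mp\delta$} with values \mbox{$\pm A(\Omega)/4$} are immediately visible:

\begin{verbatim}
# Sketch: detuning dependence of the single-resonance shift, Eq. (e27).
# The extrema lie at Delta = -/+ delta with values +/- A(Omega)/4.
import numpy as np

A_res, delta = 1.0, 0.1                 # A(Omega) and half-width (arbitrary units)
Delta = np.linspace(-5 * delta, 5 * delta, 11)
shift = -0.5 * A_res * delta * Delta / (Delta**2 + delta**2)

for D, s in zip(Delta, shift):
    print(f"Delta/delta = {D/delta:+.1f}   shift/A(Omega) = {s/A_res:+.4f}")
\end{verbatim}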
\begin{figure}\label{f5}
\end{figure}
\subsection{Emitted-light intensity} \label{sec3.4}
\subsubsection{Spatial distribution} \label{sec3.4.1}
Using Eqs.~(\ref{A1}) -- (\ref{A5}), the function ${\bf F}({\bf r},{\bf r}_A,\omega_A)$, Eq.~(\ref{e16}), which determines, according to Eq.~(\ref{e15}), the spatial distribution of the emitted light, can be given by (\mbox{$\theta_A$ $\!=$ $\!\phi_A$ $\!=$ $\!0$}, \mbox{$r_A$ $\!\le$ $\!r$}) \begin{eqnarray} \label{e28} \lefteqn{
{\bf F}^\perp({\bf r},{\bf r}_A,\omega_A) =
{k_A^3\mu \over 4\pi \epsilon_0}
\sum_{l=1}^\infty (2l+1) } \nonumber\\&&\hspace{2ex} \times\;
{1\over {k_Ar_A}}
\left[j_l({k_Ar_A}) + {\cal B}_l^N\!(\omega_A)
h_l^{(1)}({k_Ar_A}) \right] \nonumber\\&&\hspace{2ex} \times\,
\biggl[ {\bf e}_r \,l(l\!+\!1)\, {h_l^{(1)}({k_Ar})\over {k_Ar}}
\,P_l(\cos\theta) \nonumber\\&&\hspace{4ex}
-\, {\bf e}_\theta \,{[{k_Ar}\,h_l^{(1)}({k_Ar})]'\over {k_Ar}}\,
\sin\theta P_l'(\cos\theta)
\biggr] \end{eqnarray}
for a radially oriented transition dipole moment, and \begin{eqnarray} \label{e29} \lefteqn{
{\bf F}^\parallel({\bf r},{\bf r}_A,\omega_A) =
{k_A^3\mu \over 4\pi \epsilon_0}
\sum_{l=1}^\infty {(2l+1)\over l(l+1)} } \nonumber\\&&\times
\biggl\{
{\bf e}_r \cos \phi
\tilde{\cal B}_l^N l(l\!+\!1)\,{h_l^{(1)}({k_Ar})\over {k_Ar}}\,
\sin\theta P_l'(\cos\theta) \nonumber\\&&\hspace{2ex}
+ \,{\bf e}_\theta \cos \phi
\biggl[\tilde{\cal B}_l^M h_l^{(1)}({k_Ar}) P_l'(\cos\theta) \nonumber\\&&\hspace{8ex}
+ \,\tilde{\cal B}_l^N \,{[{k_Ar}\,h_l^{(1)}({k_Ar})]'\over {k_Ar}}\,
\tilde{P}_l(\cos\theta)
\biggr] \nonumber\\&&\hspace{2ex}
-\,{\bf e}_\phi \sin \phi
\biggl[\tilde{\cal B}_l^M h_l^{(1)}({k_Ar}) \tilde{P}_l(\cos\theta) \nonumber\\&&\hspace{8ex}
+\,\tilde{\cal B}_l^N \,{[{k_Ar}\,h_l^{(1)}({k_Ar})]'\over {k_Ar}}\, P_l'(\cos\theta)
\biggr] \biggr\}
\end{eqnarray} for a tangentially oriented dipole in the $xz$-plane. Here,
the abbreviating notations \begin{eqnarray} \label{e30} \lefteqn{
\tilde{\cal B}_l^N =
{1\over {k_Ar_A}}
\Big\{ [{k_Ar_A}j_l({k_Ar_A})]' } \nonumber\\&&\hspace{2ex}
+\, {\cal B}_l^N\!(\omega_A)
[{k_Ar_A}h_l^{(1)}({k_Ar_A})]'
\Big\} , \end{eqnarray} \begin{equation} \label{e31}
\tilde{\cal B}_l^M =
j_l({k_Ar_A}) + {\cal B}_l^M\!(\omega_A) h_l^{(1)}({k_Ar_A}) , \end{equation} \begin{equation} \label{e32}
\tilde{P}_l (\cos\theta)=
l(l+1)P_l(\cos\theta) - \cos\theta P'_l(\cos\theta) \end{equation} have been introduced. Examples of the far-field emission pattern of a radially oriented transition dipole moment,
$|{\bf F}^\perp({\bf r},{\bf r}_A,\omega_A)|^2$, and a tangentially oriented transition dipole moment,
$|{\bf F}^\|({\bf r},{\bf r}_A,\omega_A)|^2$, are plotted in Figs.~\ref{f6} and \ref{f7} respectively.
Let us first restrict our attention to a radially oriented transition dipole moment. In this case, the far field is essentially determined by $F^\perp_\theta$, as an inspection of Eq.~(\ref{e28}) reveals. When the atomic transition frequency coincides with the frequency of a WG wave of angular momentum number $l$ far from the band gap [Fig.~\ref{f6}(a)], then the corresponding $l$-term
in the series (\ref{e28}) obviously
yields the leading contribution to the emitted radiation, whose angular distribution is significantly determined by the term \mbox{$\sim$ $\!\sin\theta\, P'_l(\cos\theta)$}. Since $P_l(\cos\theta)$ is a polynomial with $l$ real, simple roots in the interval \mbox{$0$ $\!<$ $\!\theta$ $\!<$ $\!\pi$} \cite{25}, $\sin\theta\,P'_l(\cos\theta)$ must have $l$ extrema within this interval. Thus, the emission pattern has $l$ lobes in, say, the $yz$-plane, i.e., $l$ cone-shaped peaks around the $z$-axis, for symmetry reasons.
The lobes near \mbox{$\theta$ $\!=$ $\!0$} and \mbox{$\theta$ $\!=$ $\!\pi$} are the most dominant ones in general, because of \cite{18}
\begin{equation} \label{e33}
-\sin\theta P'_l(\cos\theta)
\sim (\sin\theta)^{-1/2} + O\bigl(l^{-1}\bigr) \end{equation} ($0$ $\!<$ $\!\theta$ $\!<$ $\!\pi$).
Note that the superposition of the leading term with the remaining terms in the series (\ref{e28}) gives rise to some asymmetry with respect to the plane \mbox{$\theta$ $\!=$ $\!\pi/2$}.
\begin{figure}\label{f6}
\end{figure}
\begin{figure}
\caption{ The same as in Fig.~\protect\ref{f6}, but for a tangentially oscillating dipole. }
\label{f7}
\end{figure}
When the atomic transition frequency approaches the band gap (but is still outside it), a strikingly different behavior is observed [Fig.~\ref{f6}(b)]. The emission pattern changes to a two-lobe structure similar to that observed in free space, but bent away from the microsphere surface, the emission intensity being very small.
Since near the band gap absorption losses dominate, a photon that is resonantly emitted is almost certainly absorbed and does not contribute to the far field in general. If the photon is emitted into a lower-order WG wave where radiative losses dominate, it has a better chance of escaping.
The superposition of
all these weak (off-resonant) contributions
just forms the two-lobe emission pattern observed, as can also be seen by careful inspection of the series (\ref{e28}).
When the atomic transition frequency is inside the band gap and coincides with the frequency of a SG wave of low order such that the radiative losses dominate, then the emission pattern resembles that observed for resonant interaction with a low-order WG wave [compare Figs.~\ref{f6}(a) and (c)]. With increasing transition frequency the absorption losses become substantial and eventually change the emission pattern in a quite similar way as they do below the band gap [compare Figs.~\ref{f6}(b) and (d)]. Obviously, the respective explanations are similar in the two cases.
For a tangentially oriented transition dipole moment the situation is less transparent in general. Let us therefore restrict our attention to the far field in the $xz$-plane, i.e., \mbox{$\phi$ $\!=$ $\!0$} in Eq.~(\ref{e29}). In this case, the main contribution to the far field comes from $F^\parallel_\theta$. The interpretation of the plots in Fig.~\ref{f7} is quite similar to that of the plots in Fig.~\ref{f6}. In particular, when the atomic transition frequency coincides with a WG field resonance that mainly suffers from radiative losses, then the $l$ term in the series (\ref{e29}) that corresponds to the order of the WG wave is the leading one, and the main contribution to it stems from the term \mbox{$\sim$ $\!l(l$ $\!+$ $\!1)P_l(\cos\theta)$}. It gives rise to \mbox{$l$ $\!-$ $\!1$} lobes in the interval \mbox{$0$ $\!<$ $\!\theta$ $\!<$ $\!\pi$},
and two lobes at \mbox{$\theta$ $\!=$ $\!0$} and \mbox{$\theta$ $\!=$ $\!\pi$}, which are the most pronounced ones [Fig.~\ref{f7}(a)]. These maxima are approximately located at the positions of the minima of the far field of the radially oscillating dipole. Hence, if the
transition dipole moment has a radial and a tangential component, a smoothed superimposed field is observed.
\subsubsection{Radiative versus nonradiative decay} \label{sec3.4.2}
Since both the imaginary part of the vacuum Green tensor $G_{ij}^{0}$ and the scattering term $G_{ij}^{R}$ are transverse, the decay rate (\ref{e13}) results from the coupling of the atom to the transverse part of the electromagnetic field. Nevertheless, the decay of the excited atomic state need not be accompanied by the emission of a real photon; instead, a matter quantum can be created, because of material absorption.
To compare the two decay channels, we have calculated, according to Eq.~(\ref{e16b}), the fraction $W/W_0$ of the atomic (transition) energy that is irradiated. Using Eqs.~(\ref{e15}), (\ref{e16b}), and (\ref{e28}), we derive, on recalling the relations (\ref{A10}) and (\ref{A11}), for a radially oriented transition dipole moment
\end{multicols}
\begin{figure}
\caption{ The fraction of emitted radiation energy, Eq.~\protect(\ref{e34}), as a function of the atomic transition frequency for $\gamma/\omega_T$ and $\Delta r/\lambda_T$ equal to (a) $10^{-4}$ and $0.02$, (b) $10^{-6}$ and $0.02$, (c) $10^{-4}$ and $0.1$, and (d) $10^{-6}$ and $0.1$, respectively. The other parameters are the same as in Fig.~\protect\ref{f6}. }
\label{f8}
\end{figure}
\begin{multicols}{2}
\begin{eqnarray} \label{e34} \lefteqn{
{W\over W_0} = {3A_0\over 2 A^\perp}
\sum_{l=1}^\infty l(l+1)(2l+1) } \nonumber\\&&\hspace{2ex} \times\,
{1\over ({k_Ar_A})^2}
\left|j_l({k_Ar_A}) + {\cal B}_l^N\!(\omega_A)
h_l^{(1)}({k_Ar_A}) \right|^2. \end{eqnarray} Recall that \mbox{$W/W_0$ $\!=$ $\!1$} implies fully radiative decay, while \mbox{$W/W_0$ $\!=$ $\!0$} implies fully nonradiative decay.
The dependence of the ratio $W/W_0$ on the atomic transition frequency is illustrated in Fig.~\ref{f8}. The minima at the WG field resonance frequencies indicate that the nonradiative decay is enhanced relative to the radiative one. Obviously, photons at these frequencies are captured inside the microsphere for some time, and hence the probability of photon absorption is increased. For transition frequencies inside the band gap, two regions can be distinguished. In the low-frequency region, where low-order SG waves are typically excited, radiative decay dominates. Here, the light penetration depth into the sphere is small and the probability of a photon being absorbed is small as well. With increasing atomic transition frequency the penetration depth increases and the chance of a photon escaping drastically diminishes. As a result, nonradiative decay dominates. Clearly, the strength of the effect decreases with decreasing material absorption [compare Figs.~\ref{f8}(a) and (c) with Figs.~\ref{f8}(b) and (d) respectively].
{F}rom Figs.~\ref{f8}(a) and (b) two well pronounced minima of the totally emitted light energy,
i.e., noticeable maxima of the energy transfer to the matter, are seen for transition frequencies inside the band gap.
The first minimum results from the overlapping high-order SG waves, which mainly suffer from absorption losses. The second one is observed at the longitudinal resonance frequency of the medium.
It can be attributed to the atomic near-field interaction with the longitudinal component of the medium-assisted electromagnetic field, the strength of the longitudinal field resonance being proportional to $\epsilon_I$. Hence,
the dip in the emitted radiation energy at the longitudinal frequency diminishes with decreasing material absorption [compare Fig.~\ref{f8}(a) with Fig.~\ref{f8}(b)], and it disappears when
the atom is moved sufficiently away from the surface [compare Figs.~\ref{f8}(a) and (b) with Figs.~\ref{f8}(c) and (d) respectively].
\begin{figure}\label{f10}
\end{figure}
\subsubsection{Temporal evolution} \label{sec3.4.3}
Throughout this paper we have restricted our attention to the weak-coupling regime where the excited atomic state decays exponentially, Eq.~(\ref{e14b}). When retardation is disregarded, then the intensity of the emitted light (at some chosen space point) simply decreases exponentially, Eq.~(\ref{e15}). To study the effect of retardation, we have also performed the frequency integral in the exact equation (\ref{e14a}) numerically.
Typical examples of the temporal evolution of the far-field intensity are shown in Fig.~\ref{f10} for a radially oriented transition dipole moment in the case when the atomic transition frequency coincides with the frequency of a WG wave. Whereas the long-time behavior of the intensity of the emitted light is, with little error, exponential, the short-time behavior (on a time scale given by the atomic decay time) sensitively depends on the quality factor. The observed delay between the upper-state atomic population and the intensity of the emitted light can be quite large for a high-$Q$ microsphere, because the time that a photon spends in the sphere increases with the $Q$\,value. Further, in the short-time domain some kink-like fine structure is observed, which obviously reflects the different arrival times associated with multiple reflections.
The results in Fig.~\ref{f10} refer to a fixed space point. Figure~\ref{f11} illustrates the transient behavior of the spatial distribution of the emitted light. In particular, it is seen that some time is necessary to build up the sphere-assisted spatial distribution of the emitted light which is typically observed for longer times when the approximation (\ref{e15}) applies.
\begin{figure}
\caption{ Exact [Eq.~(\protect\ref{e14a}), solid lines] and approximate [Eq.~(\protect\ref{e15}), dashed lines] angular distribution of the far-field intensity $I({\bf r},t)/(k_A^3\mu/4\pi\epsilon_0)^2$ at different times for a radially oscillating transition dipole moment [for the parameters, see Fig.~\protect\ref{f10}, curves (b)].
}
\label{f11}
\end{figure}
\section{Metallic microsphere} \label{sec4}
Setting \mbox{$\omega_T$ $\!=$ $\!0$} in Eq.~(\ref{e10}), we obtain (within the Drude-Lorentz model) the permittivity of a metal. Hence, the results derived for the band gap of a dielectric microsphere also apply, for appropriately chosen values of $\omega_P$ and $\gamma$, to a metallic sphere. The examples plotted in Figs.~\ref{f14} and \ref{f15} refer to silver (\mbox{$\omega_P$ $\!=$ $\!1.32\times 10^{16}$\,s$^{-1}$} and \mbox{$1/\gamma$ $\!=$ $\!1.45\times 10^{-14}$\,s} \cite{17b}).
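Assuming again the single-resonance Drude-Lorentz form of Eq.~(\ref{e10}) with \mbox{$\omega_T$ $\!=$ $\!0$} (an assumption made for this illustration), the silver parameters quoted above yield the following frequency window for SG waves; the Python sketch uses NumPy and an approximate value of the vacuum speed of light:

\begin{verbatim}
# Sketch: Drude permittivity of silver (omega_T = 0 in the assumed form
# of Eq. (e10)),  eps(w) = 1 - wP^2 / (w^2 + i*gamma*w),
# and the frequency range in which SG waves can exist, Re(eps) < -1.
import numpy as np

wP = 1.32e16            # plasma frequency of silver [1/s]
gamma = 1.0 / 1.45e-14  # damping rate [1/s]
c = 2.998e8             # speed of light [m/s] (approximate)

def eps(w):
    return 1.0 - wP**2 / (w**2 + 1j * gamma * w)

w = np.linspace(0.01, 1.0, 20001) * wP
window = w[eps(w).real < -1.0]
print("lambda_P = 2 pi c / omega_P =", 2 * np.pi * c / wP, "m")
print("SG condition Re(eps) < -1 holds for omega/omega_P up to",
      window.max() / wP)          # close to 1/sqrt(2) for weak damping
\end{verbatim}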
Figure \ref{f14} shows the positions and quality factors of SG field resonances for different microsphere radii. The behavior is quite similar to that observed for a dielectric microsphere [cf. Fig.~\ref{f2}]. The radiative losses decrease with increasing radius of the sphere, while the losses due to material absorption are less sensitive to the microsphere size [compare Figs.~\ref{f14}(a) and (b)]. It is further seen that $Q_{rad}$ increases with the angular momentum number of the SG wave, while $Q_{abs}$ slightly decreases.
\begin{figure}\label{f14}
\end{figure}
As a result, the radiative losses again dominate for low-order resonances and the absorption losses for high-order ones. Since absorption tends to be larger in metals than in dielectrics, the dominance of material absorption can already set in at lower-order resonances. Note that even for metals the relationship (\ref{e10b1}) typically still holds, so that Eqs.~(\ref{e11}) -- (\ref{e12.1}) apply.
The dependence of the decay rate on the transition frequency of an excited atom placed near a metallic microsphere is illustrated in Fig.~\ref{f15}(a) for a radially oriented transition dipole moment. An example of the emission pattern for the case when the atomic transition frequency coincides with the frequency of a SG wave is shown in Fig.~\ref{f15}(b). When the radius of the microsphere becomes too small, then SG waves cannot be excited. In particular, in \cite{17c} it was assumed that $R$ $\!\ll$ $\!\lambda_P$, and thus the resonances shown in Figs.~\ref{f14} and \ref{f15}(a) could not be found. It is worth noting that, in contrast to dielectric matter, a large absorption in metals can substantially
\begin{figure}
\caption{ (a) Decay rate of an atom near a metallic microsphere of complex permittivity $\epsilon(\omega)$ [Eq.~(\protect\ref{e10}), \mbox{$\omega_T$ $\!=$ $\!0$}, \mbox{$\gamma/\omega_P$ $\!=$ $\!0.005$}] as a function of the transition frequency for a radially oriented transition dipole moment (\mbox{$R$ $\!=$ $\!5\lambda_P$}, \mbox{$\Delta r$ $\!=$ $\!0.1\,\lambda_P$}). (b) Polar diagram of the normalized far-field emission pattern
$|{\bf F}^\perp({\bf r},{\bf r}_A,\omega_A)|^2/ (k_A^3\mu / 4\pi \epsilon_0)^2$ for \mbox{$r$ $\!=$ $\!50\,\lambda_P$} and \mbox{$\omega_A/\omega_P$ $\!=$ $\!0.5026$}, the other parameters being the same as in (a). }
\label{f15}
\end{figure}
\noindent
enhance the near-surface divergence of the decay rate [Eqs.~(\ref{E21}) and (\ref{E22})], which is in agreement with experimental observations of the fluorescence from a thin layer of optically excited organic-dye molecules separated from a planar metal surface by a dielectric layer of known thickness \cite{28}.
\section{Conclusions} \label{sec5}
We have applied the recently developed formalism \cite{12} to the problem of spontaneous decay of an excited atom near a dispersing and absorbing microsphere. Basing the calculations on a complex permittivity of Drude-Lorentz type, which satisfies the Kramers-Kronig relations, we have been able to study the dependence of the decay rate on the transition frequency for arbitrary frequencies. We have shown that the decay can be substantially enhanced when the transition frequency is tuned to either a WG field resonance below the band gap (for a dielectric sphere) or a SG field resonance inside the band gap (for a dielectric sphere or a metallic sphere).
Whereas for both low-order WG field resonances and low-order SG field resonances radiative losses dominate, high-order field resonances mainly suffer from material absorption. Accordingly, spontaneous decay changes from being mainly radiative to being mainly nonradiative, when the transition frequency (tuned to a field resonance) in the respective frequency interval increases. We have further
shown that in the presence of strong material absorption the decay rate rises drastically as the atom approaches the surface of the microsphere, because of near-field assisted energy transfer from the atom to the medium. Thus, the effect is typically observed for metals.
When radiative losses dominate, the emission pattern is highly structured, and a substantial fraction of the light is emitted backward and forward within small polar angles with respect to the tie line between the atom and the center of the sphere. With increasing absorption this directional characteristic is lost.
When absorption losses dominate, the (weakened) emission pattern takes a form that is typical of reflection at a mirror.
In the paper we have restricted our attention to the weak-coupling regime, assuming that the excited atomic state decays exponentially. Obviously, when the atomic transition frequency coincides with a resonance frequency of the cavity-assisted field, the strong-coupling regime may be realized. In particular, SG waves seem to be best suited for that regime, because of the noticeable enhancement of spontaneous emission. The calculations could be performed in a similar way as in Ref.~\cite{12}.
\acknowledgements
We thank F. Lederer, S. Scheel, and E. Schmidt for fruitful discussions. H.T.D. is grateful to the Alexander von Humboldt Stiftung and the Vietnamese Basic Research Program for financial support. This work was supported by the Deutsche Forschungsgemeinschaft.
\appendix
\section{The Green tensor} \label{appA}
The Green tensor of a microsphere of radius $R$ (region 2) embedded in vacuum (region 1) can be decomposed into two parts \cite{26}, \begin{equation} \label{A1} \bbox{G}({\bf r},{\bf r'},\omega) = \bbox{G}^{(s)}({\bf r},{\bf r'},\omega) \delta_{fs} + \bbox{G}^{(fs)}({\bf r},{\bf r'},\omega), \end{equation} where $\bbox{G}^{(s)}({\bf r},{\bf r'},\omega)$ represents the contribution of the direct waves from the source in an unbounded space,
and $\bbox{G}^{(fs)}({\bf r},{\bf r'},\omega)$ is the scattering part that describes the contribution of the multiple reflection \mbox{($f$ $\!=$ $\!s$)} and transmission \mbox{($f$ $\!\neq$ $\!s$)} waves ($f$ and $s$, respectively, refer to the regions in which the field point ${\bf r}$ and the source point ${\bf r}'$ are located). In particular,
$\bbox{G}^{(s)}$, $\bbox{G}^{(11)}$, and $\bbox{G}^{(22)}$ can be given by \cite{26} \begin{eqnarray} \label{A2} \lefteqn{
\bbox{G}^{(s)}({\bf r},{\bf r}',\omega)
= {{\bf e}_r{\bf e}_r\over k_s^2} \delta(r-r') } \nonumber\\&&\hspace{2ex}
+{ik_s\over 4\pi} \sum_{e\atop o}
\sum_{l=1}^\infty \sum_{m=0}^l
{2l+1\over l(l+1)} {(l-m)!\over(l+m)!}\,(2\!-\!\delta_{0m}) \nonumber\\&&\hspace{2ex}
\times
\left[ {\bf M}^{(1)}_{{e \atop o}lm} ({\bf r},k_s)
{\bf M}_{{e \atop o}lm} ({\bf r}',k_s) \right. \nonumber\\&&\hspace{4ex}
\left. +\,{\bf N}^{(1)}_{{e \atop o}lm} ({\bf r},k_s)
{\bf N}_{{e \atop o}lm} ({\bf r}',k_s)
\right] \end{eqnarray} if \mbox{$r$ $\!\ge$ $\!r'$}, and \mbox{$\bbox{G}^{(s)}({\bf r},{\bf r}',\omega)$ $\!=$ $\!\bbox{G}^{(s)}({\bf r}',{\bf r},\omega)$} if \mbox{$r$ $\!<$ $\!r'$}, \begin{eqnarray} \label{A3} \lefteqn{
\bbox{G}^{(11)}({\bf r},{\bf r'},\omega) } \nonumber\\&&\hspace{2ex}
= {ik_1\over 4\pi} \sum_{e\atop o}
\sum_{l=1}^\infty \sum_{m=0}^l
{2l+1\over l(l+1)} {(l-m)!\over(l+m)!}\,(2\!-\!\delta_{0m}) \nonumber\\&&\hspace{2ex}\times
\Bigl[ {\cal B}^M_l (\omega)
{\bf M}^{(1)}_{{e \atop o}lm} ({\bf r},k_1)
{\bf M}^{(1)}_{{e \atop o}lm} ({\bf r}',k_1) \nonumber\\&&\hspace{2ex}
+\; {\cal B}^N_l (\omega)
{\bf N}^{(1)}_{{e \atop o}lm} ({\bf r},k_1)
{\bf N}^{(1)}_{{e \atop o}lm} ({\bf r}',k_1) \Bigr]
\label{A3a} \end{eqnarray} \mbox{($r,r'$ $\!>$ $\!R$)}, \begin{eqnarray} \lefteqn{
\bbox{G}^{(22)}({\bf r},{\bf r'},\omega) } \nonumber\\&&\hspace{2ex}
={ik_2\over 4\pi} \sum_{e\atop o}
\sum_{l=1}^\infty \sum_{m=0}^l
{2l+1\over l(l+1)} {(l-m)!\over(l+m)!}\,(2\!-\!\delta_{0m}) \nonumber\\&&\hspace{2ex}\times
\Bigl[ {\cal C}^M_l (\omega)
{\bf M}_{{e \atop o}lm} ({\bf r},k_2)
{\bf M}_{{e \atop o}lm} ({\bf r}',k_2) \nonumber\\&&\hspace{2ex}
+\; {\cal C}^N_l (\omega)
{\bf N}_{{e \atop o}lm} ({\bf r},k_2)
{\bf N}_{{e \atop o}lm} ({\bf r}',k_2) \Bigr]
\end{eqnarray} \mbox{($r,r'$ $\!<$ $\!R$)}, where \begin{equation} \label{A3b}
k_1={\omega\over c}, \qquad
k_2=\sqrt{\epsilon(\omega)}\,{\omega\over c}\,. \end{equation} ${\bf M}$ and ${\bf N}$ represent TE and TM waves, respectively, \begin{eqnarray} \label{A4} \lefteqn{
{\bf M}_{{e \atop o}lm}({\bf r},k)
= \mp {m \over \sin\theta} j_l(kr)
P_l^m(\cos\theta) {\sin\choose\cos} (m\phi) {\bf e}_{\theta} } \nonumber\\&&\hspace{12ex}
-\, j_l(kr) \frac{dP_l^m(\cos\theta)}{d\theta}
{\cos\choose\sin} (m\phi) {\bf e}_{\phi} \,, \end{eqnarray} \begin{eqnarray} \label{A5} \lefteqn{
{\bf N}_{{e \atop o}lm}({\bf r},k)
= {l(l\!+\!1)\over kr} j_l(kr)
P_l^m(\cos\theta) {\cos\choose\sin} (m\phi) {\bf e}_r } \nonumber\\&&\hspace{12ex}
+\,{1\over kr} \frac{d[rj_l(kr)]}{dr}
\Biggl[
\frac{dP_l^m(\cos\theta)}{d\theta}
{\cos\choose\sin} (m\phi) {\bf e}_{\theta} \nonumber \\&&\hspace{12ex}
\mp\, {m \over \sin\theta}
P_l^m(\cos\theta) {\sin\choose\cos} (m\phi) {\bf e}_{\phi}
\Biggr], \end{eqnarray} with $j_l(x)$ and $P_l^m(x)$ being respectively the spherical Bessel function of the first kind and
the associated Legendre function. The superscript ${(1)}$ in Eqs.~(\ref{A2}) and (\ref{A3}) indicates that in Eqs.~(\ref{A4}) and (\ref{A5}) the spherical Bessel function $j_l(x)$ has to be replaced by the first-type spherical Hankel function $h^{(1)}_l(x)$. The coefficients ${\cal B}^{M,N}_l$ and ${\cal C}^{M,N}_l$ in Eqs.~(\ref{A3}) and (\ref{A3a}) are defined by \begin{eqnarray} \label{A6} \lefteqn{
{\cal B}^M_l(\omega) } \nonumber\\&&\hspace{2ex}
= - \frac
{\bigl[ z_2j_l(z_2)\bigr]' j_l(z_1)
- \bigl[ z_1j_l(z_1)\bigr]' j_l(z_2) }
{\bigl[ z_2j_l(z_2)\bigr]' h_l^{(1)}(z_1)
- j_l(z_2) \bigl[z_1 h_l^{(1)}(z_1)\bigr]' }\,, \end{eqnarray} \begin{eqnarray} \label{A7} \lefteqn{
{\cal B}^N_l(\omega) } \nonumber\\&&\hspace{2ex}
= - \frac
{ \epsilon(\omega)
j_l(z_2) \bigl[z_1 j_l(z_1)\bigr]'
- j_l(z_1) \bigl[z_2 j_l(z_2)\bigr]' }
{ \epsilon(\omega)
j_l(z_2) \bigl[ z_1 h_l^{(1)}(z_1)\bigr]'
- \bigl[z_2 j_l(z_2)\bigr]' h_l^{(1)}(z_1)} \,, \end{eqnarray} \begin{eqnarray} \label{A7a} \lefteqn{
{\cal C}^M_l(\omega) } \nonumber\\&&\hspace{1ex}
= - \frac
{\bigl[ z_2h_l^{(1)}(z_2)\bigr]' h_l^{(1)}(z_1)
- \bigl[ z_1h_l^{(1)}(z_1)\bigr]' h_l^{(1)}(z_2) }
{\bigl[ z_2j_l(z_2)\bigr]' h_l^{(1)}(z_1)
- j_l(z_2) \bigl[z_1 h_l^{(1)}(z_1)\bigr]' }\,, \end{eqnarray} \begin{eqnarray} \lefteqn{
{\cal C}^N_l(\omega) } \nonumber\\&&\hspace{0ex}
= - \frac
{ \epsilon(\omega)
h_l^{(1)}(z_2) \bigl[z_1 h_l^{(1)}(z_1)\bigr]'
- h_l^{(1)}(z_1) \bigl[z_2 h_l^{(1)}(z_2)\bigr]' }
{ \epsilon(\omega)
j_l(z_2) \bigl[ z_1 h_l^{(1)}(z_1)\bigr]'
- \bigl[z_2 j_l(z_2)\bigr]' h_l^{(1)}(z_1)}\,, \nonumber\\&&\hspace{0ex} \label{A7b} \end{eqnarray} where \begin{equation} \label{A8}
z_i = k_iR \ . \end{equation}
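For completeness, the reflection coefficients (\ref{A6}) and (\ref{A7}) can be transcribed directly into a few lines of Python (NumPy/SciPy assumed); the order, permittivity, and size parameter below are illustrative:

\begin{verbatim}
# Sketch: direct transcription of the reflection coefficients B^M_l,
# Eq. (A6), and B^N_l, Eq. (A7), with illustrative parameters.
import numpy as np
from scipy.special import jv, hankel1

def j_l(l, z):   return np.sqrt(np.pi / (2 * z)) * jv(l + 0.5, z)
def h1_l(l, z):  return np.sqrt(np.pi / (2 * z)) * hankel1(l + 0.5, z)
def d_zf(f, l, z):                    # d/dz [z f_l(z)] via the recurrence
    return z * f(l - 1, z) - l * f(l, z)

def mie_BM_BN(l, eps, z1):            # z1 = omega R / c, z2 = sqrt(eps) z1
    z2 = np.sqrt(eps) * z1
    BM = -((d_zf(j_l, l, z2) * j_l(l, z1) - d_zf(j_l, l, z1) * j_l(l, z2)) /
           (d_zf(j_l, l, z2) * h1_l(l, z1) - j_l(l, z2) * d_zf(h1_l, l, z1)))
    BN = -((eps * j_l(l, z2) * d_zf(j_l, l, z1) - j_l(l, z1) * d_zf(j_l, l, z2)) /
           (eps * j_l(l, z2) * d_zf(h1_l, l, z1) - d_zf(j_l, l, z2) * h1_l(l, z1)))
    return BM, BN

BM, BN = mie_BM_BN(l=20, eps=(1.45 + 1e-7j)**2, z1=2 * np.pi * 3.0)  # R = 3 lambda
print("B^M_20 =", BM)
print("B^N_20 =", BN)
\end{verbatim}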
Note that the relations \begin{equation} \int_{-1}^1 dx\,P^n_l(x)P^n_m(x) = \frac{(l+n)!}{(l-n)!(l+1/2)}\,\delta_{lm}
\label{A10} \end{equation} and \begin{equation} h^{(1)}_l(z) \rightarrow z^{-1}\exp\!\left[i(z\!-\!l\pi/2\!-\!\pi/4)\right] \quad{\rm if}\quad
|z| \rightarrow \infty \label{A11} \end{equation} are valid \cite{18}.
\section{SG field resonances} \label{appB}
For large values of \mbox{$\nu$ $\!=$ $\!l+1/2$} the following asymptotic expansions are valid \cite{18}:
\begin{equation} \label{B1}
J_\nu(nx) \sim
{\exp[\nu(\tanh\alpha\!-\!\alpha)] \over
\sqrt{2\pi\nu\tanh\alpha}}
\left[ 1\!+\! \sum_{k=1}^\infty
{u_k(\coth\alpha)\over\nu^k}\right]\! , \end{equation} \begin{eqnarray} \label{B2} \lefteqn{
Y_\nu(x) \sim
-{\exp[\nu(\beta\! -\!\tanh\beta)] \over
\sqrt{{1\over 2}\pi\nu\tanh\beta}} } \nonumber\\&&\hspace{10ex}\times\,
\left[ 1+ \sum_{k=1}^\infty (-1)^k
{u_k(\coth\beta)\over\nu^k}\right]\!, \end{eqnarray} \begin{eqnarray} \label{B3} \lefteqn{
J'_\nu(nx) \sim
\sqrt{{\sinh 2\alpha\over 4\pi\nu}} \,
\exp[\nu(\tanh\alpha-\alpha)] } \nonumber\\&&\hspace{10ex}\times\,
\left[ 1+ \sum_{k=1}^\infty
{v_k(\coth\alpha)\over\nu^k}\right]\!, \end{eqnarray} \begin{eqnarray} \label{B4} \lefteqn{
Y'_\nu(x) \sim
\sqrt{{\sinh 2\beta\over \pi\nu}}
e^{\nu(\beta -\tanh\beta)} } \nonumber\\&&\hspace{10ex}\times\, \left[ 1+ \sum_{k=1}^\infty (-1)^k
{v_k(\coth\beta)\over\nu^k}\right]\!, \end{eqnarray} [$Y_\nu(x)$ - Neumann function], where \begin{eqnarray} \label{B5} &\displaystyle
x={\omega\over c}R, \\&\displaystyle \label{B6}
\cosh\alpha = {\nu\over nx}, \quad
\cosh\beta = {\nu\over x} \,, \end{eqnarray} and $u_k$ and $v_k$ are given in Ref.~\cite{18}. To find the
leading terms in Eqs.~(\ref{e3}) and (\ref{e4}), we thus may write \begin{equation} \label{B7}
{J'_\nu(nx) \over J_\nu(nx) } \sim |\sinh \alpha|, \end{equation} and \begin{equation} \label{B8}
{H'_\nu(x) \over H_\nu(x) } \sim
{Y'_\nu(x) \over Y_\nu(x) } \sim - |\sinh\beta| \end{equation} to obtain, \begin{equation} \label{B9} \sqrt{\nu^2-x^2}+\sqrt{\nu^2-\epsilon x^2} = 0 \end{equation} for TE waves, and \begin{equation} \label{B10} \epsilon\sqrt{\nu^2-x^2}+\sqrt{\nu^2-\epsilon x^2} = 0 \end{equation} for TM waves. Here we have used relationships (\ref{B6}), and we have assumed that $x$ scales as $\nu$ to discard the last term in Eq. (\ref{e4}).
Obviously, Eq. (\ref{B9}) for TE waves has no solution, except for the trivial case of \mbox{$\epsilon$ $\!=$ $1$}. Equation (\ref{B10}) for TM waves can be rewritten as \begin{equation} \label{B11}
x=\nu\sqrt{1+\epsilon^{-1}}\,, \end{equation} which just implies condition (\ref{e11a}). The higher-order corrections can be obtained by writing \begin{equation} \label{B12}
x=\nu\sqrt{1+\epsilon^{-1}} \left[1+
\sum_{k=1}^\infty {c_k \over \nu^{k}}\right], \end{equation} expanding all the quantities in Eq.~(\ref{e4}) in powers of $\nu^{-1}$ and identifying the corresponding terms.
\section{Near-surface limit} \label{appC}
Using the asymptotic Bessel-function expansion \cite{18} \begin{eqnarray} \label{C1}
J_\nu(z) \sim {1\over \sqrt{2\pi\nu}}
\left(ez\over 2\nu\right)^\nu\!, \quad
Y_\nu(z) \sim -{2\over \sqrt{\pi\nu}}
\left(ez\over 2\nu\right)^{-\nu}\!, \end{eqnarray}
($|\nu|$ $\!\gg$ $\!1$), the coefficient ${\cal B}^N_l(\omega_A)$, Eq.~(\ref{A7}), can be given in the form \begin{eqnarray} \label{C3} \lefteqn{
{\cal B}^N_l(\omega_A)
\left[ h^{(1)}_l({k_Ar_A}) \over {k_Ar_A} \right]^2 } \nonumber\\&&\hspace{2ex}
\sim {1\over i ({k_Ar_A})^3 (2l+1)}
{\epsilon(\omega_A)-1 \over \epsilon(\omega_A)+1}
\left(R\over r_A\right)^{2l+1} . \end{eqnarray} When the atom is located very close to the surface of the microsphere, i.e., \mbox{$r_A$ $\!\gtrsim$ $\!R$}, then from Eq.~(\ref{C3}) it follows that
the series in Eq.~(\ref{e17}) converges very slowly.
Hence, it is a good approximation to apply Eq.~(\ref{C3}) to the terms with small $l$ as well. In this way, we derive \begin{eqnarray} \label{C4} \lefteqn{
\sum_{l=1}^\infty l(l+1)(2l+1)
{\cal B}^N_l(\omega_A)
\left[ h^{(1)}_l({k_Ar_A}) \over {k_Ar_A} \right]^2 } \nonumber\\&&\hspace{15ex}
\sim \,{1\over 4i}
{\epsilon(\omega_A)-1 \over \epsilon(\omega_A)+1}
{1\over (\Delta r)^3}\,. \end{eqnarray} Substitution of this expression into Eq.~(\ref{e17}) then yields the leading term in Eq.~(\ref{E21}). The leading term in Eq.~(\ref{E22}) can be derived in a similar fashion.
\begin{references} \bibitem[*]{byline} On leave from the Institute of Physics, National Center for Sciences and Technology, 1 Mac Dinh Chi Street, District 1, Ho Chi Minh City, Vietnam.
\bibitem{1} {\it Optical Processes in Microcavities}, edited by R. K. Chang and A. J. Campillo (World Scientific, Singapore, 1996).
\bibitem{2} L. Collot, V. Lef\`evre-Seguin, M. Brune, J. M. Raimond, and S. Haroche, Europhys. Lett. {\bf 23}, 327 (1993).
\bibitem{3} M. L. Gorodetsky, A. A. Savchenkov, and V. S. Ilchenko, Opt. Lett. {\bf 21}, 453 (1996).
\bibitem{4} D. W. Vernooy, V. S. Ilchenko, H. Mabuchi, E. W. Streed, and H. J. Kimble, Opt. Lett. {\bf 23}, 247 (1998).
\bibitem{5} S. Uetake, M. Katsuragawa, M. Suzuki, and K. Hakuta, Phys. Rev. {\bf 61}, 011803 (1999).
\bibitem{6} E. M. Purcell, Phys. Rev. {\bf 69}, 681 (1946).
\bibitem{9} H-B. Lin, J. D. Eversole, C. D. Merritt, and A. J. Campillo, Phys. Rev. A {\bf 45}, 6756 (1992); H. Fujiwara, K. Sasaki, and H. Masuhara, J. Appl. Phys. {\bf 85}, 2052 (1999); H. Yukawa, S. Arnold, and K. Miyano, Phys. Rev. A {\bf 60}, 2491 (1999).
\bibitem{10} D. W. Vernooy, A. Furusawa, N. Ph. Georgiades, V. S. Ilchenko, and H. J. Kimble, Phys. Rev. A {\bf 57}, R2293 (1998).
\bibitem{11} M. D. Barnes, C-Y. Kung, W. B. Whitten, J. M. Ramsey, S. Arnold, and S. Holler, Phys. Rev. Lett. {\bf 76}, 3931 (1996); N. Lermer, M. D. Barnes, C-Y. Kung, W. B. Whitten, J. M. Ramsey, and S. C. Hill, Opt. Lett. {\bf 23}, 951 (1998).
\bibitem{7} H. Chew, J. Chem. Phys. {\bf 87}, 1355 (1987).
\bibitem{8} V. V. Klimov, M. Ducloy, and V. S. Letokhov, J. Mod. Opt. {\bf 43}, 2251 (1996).
\bibitem{13} M. Pelton and Y. Yamamoto, Phys. Rev. A {\bf 59}, 2418 (1999); O. Benson and Y. Yamamoto, {\it ibid.} {\bf 59}, 4756 (1999).
\bibitem{12} S. Scheel, L. Kn\"{o}ll, and D.-G. Welsch, Phys. Rev. A {\bf 60}, 4094 (1999); Ho Trung Dung, L. Kn\"{o}ll, and D.-G. Welsch, {\it ibid.} {\bf 62}, 053804 (2000).
\bibitem{19a} L. Kn\"{o}ll, S. Scheel, and D.-G. Welsch, in {\it Coherence and Statistics of Photons and Atoms}, edited by J. Perina (John Wiley \& Son, New York, to be published), e-print quant-ph/0006121.
\bibitem{13a} Ho Trung Dung, L. Kn\"oll, and D.-G. Welsch, Phys. Rev. A {\bf 57}, 3931 (1998); S. Scheel, L. Kn\"oll, and D.-G. Welsch, {\it ibid.} {\bf 58}, 700 (1998).
\bibitem{13b} V. V. Klimov, M. Ducloy, and V. S. Letokhov, Phys. Rev. A, {\bf 59}, 2996 (1999).
\bibitem{14} S. Schiller and R. L. Byer, Opt. Lett. {\bf 16}, 1138 (1991); C. C. Lam, P. T. Leung, and K. Young, J. Opt. Soc. Am. B {\bf 9}, 1585 (1992).
\bibitem{15} S. Arnold and L. M. Folan, Opt. Lett. {\bf 14}, 387 (1989).
\bibitem{16} P. Ch\'ylek, H-B. Lin, J. D. Eversole, and A. J. Campillo, Opt. Lett. {\bf 16}, 1723 (1991).
\bibitem{17a} H. Raether, {\it Surface Plasmons on Smooth and Rough Surfaces and on Gratings} (Springer-Verlag, Berlin, 1988).
\bibitem{20} P. W. Milonni, {\it The Quantum Vacuum: An Introduction to Quantum Electrodynamics} (Academic, San Diego, 1994).
\bibitem{23} M. S. Yeung and T. K. Gustafson, Phys. Rev. A {\bf 54}, 5227 (1996).
\bibitem{24} S. Scheel, L. Kn\"{o}ll, and D.-G. Welsch, Acta Phys. Slov. {\bf 49}, 585 (1999).
\bibitem{24a} S. C. Ching, H. M. Lai, and K. Young, J. Opt. Soc. Am. B {\bf 4}, 2004 (1987).
\bibitem{25} {\it Higher Transcendental Functions}, edited by A. Erd\'elyi (McGraw-Hill, New York, 1953).
\bibitem{18} {\it Handbook of Mathematical Functions}, edited by M. Abramowitz and I. A. Stegun (Dover, New York, 1973).
\bibitem{17b} D. J. Nash and J. R. Sambles, J. Mod. Opt. {\bf 43}, 81 (1996).
\bibitem{17c} R. Ruppin, J. Chem. Phys. {\bf 76}, 1681 (1982).
\bibitem{28} K. H. Drexhage, {\it Progress in Optics XII}, edited by E. Wolf (North-Holland, Amsterdam, 1974), p. 165.
\bibitem{26} L. W. Li, P. S. Kooi, M. S. Leong, and T. S. Yeo, IEEE Trans. Microwave Theory Tech. {\bf 42}, 2302 (1994).
\end{references} \end{multicols}
\end{document}
\begin{document}
\title{Quantum Team Logic and Bell's Inequalities}
\thanks{The research of the second author was supported by the Finnish Academy of Science and Letters (Vilho, Yrj\"o and Kalle V\"ais\"al\"a foundation) and a grant TM-13-8847 of CIMO. The research of the third author was partially supported by grant 251557 of the Academy of Finland.}
\author{Tapani Hyttinen} \address{Department of Mathematics and Statistics, University of Helsinki, Finland}
\author{Gianluca Paolini} \address{Department of Mathematics and Statistics, University of Helsinki, Finland}
\author{Jouko V\"a\"an\"anen} \address{Department of Mathematics and Statistics, University of Helsinki, Finland and Institute for Logic, Language and Computation, University of Amsterdam, The Netherlands}
\date{} \maketitle
\begin{abstract}
A logical approach to Bell's Inequalities of quantum mechanics has been introduced by Abramsky and Hardy \cite{abramsky}. We point out that the logical Bell's Inequalities of \cite{abramsky} are provable in the probability logic of Fagin, Halpern and Megiddo \cite{fagin}. Since it is now considered empirically established that quantum mechanics violates Bell's Inequalities, we introduce a modified probability logic, which we call quantum team logic, in which Bell's Inequalities are not provable, and prove a Completeness Theorem for this logic. To this end we generalise the team semantics of dependence logic \cite{MR2351449} first to probabilistic team semantics, and then to what we call quantum team semantics.\end{abstract}
\section{Introduction}
Quantum logic was introduced by Birkhoff and von Neumann \cite{Birkhoff} to account for non-classical phenomena in quantum physics. Several other formulations have been suggested since. Starting from probabilistic propositional logic, which unsurprisingly turns out to be inadequate for quantum physics, we introduce here a new propositional logic, called {\em quantum team logic}. The idea is to take advantage of some features of {\em team semantics} \cite{MR2351449} in order to model phenomena of quantum physics such as non-locality and entanglement. These phenomena were first emphasised by the famous paper of Einstein-Podolsky-Rosen \cite{Einstein}, and then more conclusively by a result of J. S. Bell \cite{Bell}, known as Bell's Theorem.
In classical propositional logic the meaning of a sentence can be defined in terms of truth-value assignments to the proposition symbols. In the so-called {\em team semantics} of \cite{MR2351449}, the meaning of a sentence is defined in terms of sets of truth-value assignments, called {\em teams}. The advantage of this switch is that it becomes possible to define the meaning of a proposition symbol {\em depending on} or being {\em independent of} another proposition symbol. In this paper we do not discuss dependence or independence, but instead use team semantics to investigate the related concept of correlation of truth-values of proposition symbols in teams of assignments. In particular we use team semantics to define two different propositional logics. The first is a propositional logic adequate for reasoning about expected truth values of propositional formulas. We show that Bell's Inequalities are provable in this logic. We then introduce another similar logic, quantum team logic, and show that the kind of Bell's Inequalities that can be violated is not provable in it. The situation is a manifestation in logical terms of the recognised fact that assigning probabilities to observables is not enough to explain correlations of entangled particles. Both of our logics are based on, and extend, the logic of \cite{fagin}. Our approach is inspired by \cite{abramsky}.
An essential feature of quantum phenomena is that they are probabilistic. It is therefore natural in any attempt to model quantum physics by propositional logic to allow probabilistic truth-values. We accomplish this by considering {\em multi-teams}, that is, teams in which truth-value assignments occur with certain probabilities. This is our first step, and we call the resulting logic {\em probabilistic team logic}.
Multi-teams can be seen as results of experiments (the fact that the elements of the team take values from $\{0,1\}$ is not essential). For example, someone throws a bowling ball at a rack of four pins. The result can be described by a function $f:\{0,1,2,3\}\to\{0,1\}$ ($f(i) = 1$ if pin $i$ is knocked down). When the experiment is repeated several times, the results form a multi-team. From this multi-team one can calculate, e.g., the probability of the event that either both pins $0$ and $1$ are knocked down or neither of them is knocked down. In our propositional logic this is the same as the (expected) truth value of the propositional formula $p_0 \leftrightarrow p_1$ in the multi-team.
The physical observations violating Bell's Inequalities, as well as quantum theoretic computations to the same effect, show that correlations between observations concerning entangled particles are stronger than can be explained by probabilities of individual (even hidden) variables. This leads us to define the more general concept of {\em quantum team}. Every multi-team is a quantum team but not conversely. Experiments demonstrating the violation of Bell's Inequalities give practical examples of quantum teams. Not all quantum teams correspond to quantum mechanical experiments because even so-called maximal violations of Bell's Inequalities can be manifested by quantum teams.
By giving the meaning of propositional symbols in terms of quantum teams we define {\em quantum team logic} and show that cases of the violation of Bell's Inequalities are simply examples of sentences of quantum team logic that are not valid. We give a proof system for our quantum team logic, based on \cite{fagin}, and prove a Completeness Theorem. We propose that our quantum team logic formalises probabilistic reasoning in quantum physics in perfect harmony with the non-locality phenomenon revealed by Bell's Inequalities. However, since quantum teams cover more than quantum mechanics, our quantum team logic does not formalize {\em exactly} the reasoning in quantum physics. Finding the logic for exact reasoning is left as an open problem.
\section{Notation}
We use $\omega$ to denote the set of natural numbers, $\omega^*$ to denote $\omega - \left\{ 0 \right\}$, and $\mathcal{P}_{\omega}({\omega})$ to denote the set of non-empty finite subsets of $\omega$. We use $p_0, p_1, \ldots$ to denote proposition symbols. For a proposition symbol $p_i$ and $d\in 2$ we use $p_i^d$ to denote $p_i$, if $d=1$, and $\neg p_i$, if $d=0$. We use the notation $( a_i )_{i < n}$
for a sequence of $n$ elements $a_i$.
\section{Multi-teams}\label{prob_team}
A good source of teams for our purpose is the following Alice-Bob experiment:
\begin{itemize} \item Alice has two registers $A_1$ and $A_2$ which both can contain a binary digit. \item Bob has two registers $B_1$ and $B_2$ which both can contain a binary digit. \item The experiment consists of Alice and Bob both choosing one of their registers and reading the content, resulting in a tuple $(x_1,y_1,x_2,y_2)$, where $x_1\in\{A_1,A_2\}$, $y_1\in\{0,1\}$, $x_2\in\{B_1,B_2\}$ and $y_2\in\{0,1\}$.
\end{itemize}
Each result $(x_1,y_1,x_2,y_2)$ of the Alice-Bob experiment can be thought of as an assignment of truth-values to proposition symbols $p_0,\ldots,p_3$ with the intention:
$$ \begin{array}{lcclcl} p_0& \mbox{ is true} &\mbox{ iff }& \mbox{ Alice chose $A_1$}&\mbox{ i.e.}& \mbox{ $x_1=A_1$}\\ p_1& \mbox{ is true} &\mbox{ iff }& \mbox{ Alice read $1$}&\mbox{ i.e.}& \mbox{ $y_1=1$}\\ p_2& \mbox{ is true} &\mbox{ iff }& \mbox{ Bob chose $B_1$}&\mbox{ i.e.}& \mbox{ $x_2=B_1$}\\ p_3& \mbox{ is true} &\mbox{ iff }& \mbox{ Bob read $1$}&\mbox{ i.e.}& \mbox{ $y_2=1$}\\ \end{array} $$
A possible set of assignments arising in this way is given in Figure~\ref{team1}. Note that the table has repeated rows, so we cannot identify the table with the {\em set} of the assignments constituting the table without losing some information. On the other hand, teams are {\em sets} of assignments. Thus Figure~\ref{team1} does not represent a team in the sense of \cite{MR2351449}, but rather a {\em multi-team}.
\begin{figure}
\caption{Example of a multi-team }
\label{team1}
\end{figure}
\begin{definition}[Multi-team] A {\em multi-team} is a pair $X=(\Omega,\tau)$, where $\Omega$ is a non-empty set and $\tau$ is a function such that
$\mbox{dom}(\tau)=\Omega$ and if $i\in \Omega$, then $\tau(i)$ is an assignment for one and the same non-empty set of proposition symbols, denoted by $\mbox{dom}(X)$. The {\em size} of the multi-team is the cardinality $|\Omega|$ of $\Omega$. \end{definition}
Note that an ordinary team $X$, i.e. a set of assignments, can be thought of as the multi-team $(\Omega,\tau)$, where $X=\Omega$ and $\tau(i)=i$ for all $i\in X$.
A finite multi-team of size $n$ gives rise to the concept of a probability of an individual assignment:
$$P_X(v)=\frac{|\{i\in \Omega : \tau(i)=v\}|}{n}.$$ This extends canonically to a definition of the probability (or expected value) of a propositional formula $\phi$:
\[ [\phi]_X = P(\left\{ i \in \Omega \, | \, \tau(i) (\phi) = 1 \right\}) .\] In fact, a finite multi-team is just a finite ordinary team $X$ endowed with a probability distribution on $X$. For infinite multi-teams the situation is a little different and calls for a new definition:
\begin{definition}[Probability team] A {\em {probability team}} is a tuple $(\Omega,\mathcal{F},P,\tau)$, where $\Omega$ is a set, $\mathcal{F}$ is a $\sigma$-algebra on $\Omega$, $P$ is a probability measure on $(\Omega,\mathcal{F})$ and $\tau$ is a measurable function such that
$\mbox{dom}(\tau)=\Omega$ and if $i\in \Omega$, then $\tau(i)$ is an assignment for one and the same set of proposition symbols, denoted by $\mbox{dom}(X)$. \end{definition}
In this paper the main focus is on finite teams.
Suppose now $X = (\Omega,\tau)$ is a finite multi-team of size $n$ and $U\subseteq \mbox{dom}(X)$. We can define a new multi-team $(\Omega,\tau_U)$ by letting $\tau_U(i)=\tau(i)\restriction U$. For each assignment $v$ on $U$ we define
$$P_{X,U}(v)=\frac{|\{i\in \Omega : \tau_U(i)=v\}|}{n}.$$We write $P_{U}$ when $X$ is clear from the context. We can now make a table of the values $P_U(v)$ for various $U$ and $v$.
For the multi-team of Figure~\ref{team1} and for $U=\{p_0,p_1\}$ we get Table~\ref{prob_table0}. We have denoted the four possible assignments for $\{p_0,p_1\}$ as $(1,1),(0,1),(1,0)$ and $(0,0)$ with the obvious meaning. \begin{table}[h]
$$\begin{array}{|c|c|c|c|c|} \hline \phantom{a} & (1, 1) & (0,1) & (1,0) & (0,0) \\ \hline
\{p_0, p_1\} & 1/2 & 0 & 0 & 1/2 \\ \hline \end{array} $$\caption{Example of a probability table \label{prob_table0}}\end{table}
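To make the notation concrete, here is a minimal Python sketch that computes $P_X(v)$, $[\phi]_X$ and the restricted distribution $P_{X,U}$ for a small multi-team. The four-row team, the variable names and the helper functions are illustrative assumptions only; in particular, the team is not the one of Figure~\ref{team1}.
\begin{verbatim}
from fractions import Fraction
from itertools import product

# A multi-team as a list of assignments (dictionaries); repeated rows matter.
# Hypothetical four-row team, used only for illustration.
team = [
    {"p0": 1, "p1": 1},
    {"p0": 1, "p1": 1},
    {"p0": 0, "p1": 0},
    {"p0": 0, "p1": 1},
]

def prob_of_assignment(team, v):
    """P_X(v): the fraction of rows whose assignment equals v."""
    return Fraction(sum(1 for row in team if row == v), len(team))

def expected_value(team, phi):
    """[phi]_X for phi given as a Boolean function of an assignment."""
    return Fraction(sum(1 for row in team if phi(row)), len(team))

def restricted_table(team, U):
    """P_{X,U}: the distribution of the restrictions of the rows to U."""
    table = {}
    for values in product((1, 0), repeat=len(U)):
        v = dict(zip(U, values))
        table[values] = Fraction(
            sum(1 for row in team if all(row[p] == v[p] for p in U)),
            len(team))
    return table

print(prob_of_assignment(team, {"p0": 1, "p1": 1}))        # 1/2
print(expected_value(team, lambda r: r["p0"] == r["p1"]))  # [p0 <-> p1]_X = 3/4
print(restricted_table(team, ["p0", "p1"]))
\end{verbatim}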
It is relevant from the point of view of multi-teams arising in quantum physical experiments to consider a whole collection of subsets $U$ of $\mbox{dom}(X)$ at the same time. We call a collection $\mathcal{U}=\{U_j : j\in J\}$ of subsets of $\mbox{dom}(X)$ a {\em cover of $B$} if $\bigcup_{j\in J}U_j=B$. For two collections $\mathcal{U}$ and $\mathcal{U}'$ of sets we define $$\mathcal{U}\le\mathcal{U}'\iff\forall U\in\mathcal{U}\exists U'\in\mathcal{U}'(U\subseteq U').$$
\begin{definition}[Probability Table \cite{abramsky}\footnote{In \cite{abramsky} what we call probability tables are referred to as {\em probability models}.}]\label{def_prob_table} Suppose $B$ is a finite set of proposition symbols and $\mathcal{U}$ a cover of $B$. A {\em probability table for $B$ and $\mathcal{U}$} is a function $U\mapsto d_U$ on $\mathcal{U}$, where $d_U$ is a probability distribution on the possible truth-value assignments $s$ for the proposition symbols in $U$ (i.e. $d_U(s)\in[0,1]$ and $\Sigma_{s}d_U(s)=1$).
\end{definition}
When each set $U$ in $\mathcal{U}$ has the same size $k$, the probability table is particularly easy to draw as a matrix as we can fix the truth-value assignments by reinterpreting $\{p_{i_0},\ldots,p_{i_{k-1}}\}$, where $i_0<\ldots<i_{k-1}$, as $\{p_0,\ldots,p_{k-1}\}$. With this convention all $U$ have the same truth-value assignments $v_0,\ldots,v_l$. See Figure~\ref{protab5}.
\begin{figure}
\caption{Probability table}
\label{protab5}
\end{figure}
\begin{definition} If $X=(\Omega,\tau)$ is a finite multi-team of size $n$ and $\mathcal{U}$ is a cover of $B\subseteq\mbox{dom}(X)$, then the {\em associated probability table for $B$ and $\mathcal{U}$} is the function $U\mapsto d_U$ on $\mathcal{U}$, where $d_U$ is the probability distribution $$d_U(v)= P_{X,U}(v)$$ on the possible truth-value assignments for the proposition symbols in $U$. \end{definition}
In Table~\ref{prob_table1} we have an example of a probability table for
$\{p_0,p_1,p_2,p_3\}$ and $\mathcal{U} = \left\{ \left\{ 0, 1 \right\}, \left\{ 0, 3 \right\}, \left\{ 1, 2 \right\}, \left\{ 2, 3 \right\} \right\}$, associated with the multi-team of Figure~\ref{team1}.
\begin{table}[h]
$$\begin{array}{|c|c|c|c|c|} \hline \phantom{a} & (1, 1) & (0,1) & (1,0) & (0,0) \\ \hline
(p_0, p_1) & 1/2 & 0 & 0 & 1/2 \\
(p_0, p_3) & 3/8 & 1/8 & 1/8 & 3/8 \\
(p_1, p_2) & 3/8 & 1/8 & 1/8 & 3/8 \\
(p_2, p_3) & 3/8 & 1/8 & 1/8 & 3/8 \\ \hline \end{array} $$\caption{Probability table \label{prob_table1} associated with Figure~\ref{team1}}\end{table}
\section{Logical Bell Inequalities}\label{sec_bell}
John Stewart Bell showed in 1964 that spins of a pair of entangled particles manifest correlations which cannot be explained by associating probabilities to spins of the individual particles in different directions, even if so-called ``local" hidden variables are allowed. Bell used the mathematical model of quantum mechanics for his result but the correlations in question have subsequently been verified by experiments. Bell's result is usually interpreted as a strong non-locality of the physical world. On the other hand, this non-locality has given rise to quantum cryptography and more generally to quantum information theory.
Abramsky and Hardy \cite{abramsky} present a logical formulation of Bell's result, and we follow their presentation in this overview section. We give some details for completeness and refer the reader to \cite{abramsky} for further details.
The probability table we use for deriving Bell's Theorem is given in Table~\ref{prob_table2}.
\begin{table}[h]
$$\begin{array}{|c|c|c|c|c|}
\hline
\phantom{a} & (1, 1) & (0,1) & (1,0) & (0,0) \\
\hline
(p_0, p_1) & 1/2 & 0 & 0 & 1/2 \\
(p_0, p_3) & 3/8 & 1/8 & 1/8 & 3/8 \\
(p_1, p_2) & 3/8 & 1/8 & 1/8 & 3/8 \\
(p_2, p_3) & 1/8 & 3/8 & 3/8 & 1/8 \\
\hline
\end{array}
$$\caption{Bell's table \label{prob_table2}}\end{table}
Consider the Alice-Bob experiment mentioned in the introduction. Let us enrich the framework by imagining that a pair of (entangled) particles are sent to Alice and Bob. Let us decide that what we called Alice's register $A_1$ is actually a measurement of the spin of the particle that Alice has in direction $0^{\circ}$, Alice's register $A_2$ is a measurement of the spin of the particle that Alice has in direction $60^{\circ}$, Bob's register $B_1$ is actually a measurement of the spin of the particle that Bob has in direction $180^{\circ}$, and finally Bob's register $B_2$ is a measurement of the spin of the particle that Bob has in direction $120^{\circ}$.
Let us denote\footnote{This is a different choice from the one made in Section~\ref{prob_team}.}
$$\begin{array}{lcl}
p_0 &=& \text{``Alice measurement at $0^{\circ}$ has outcome $\uparrow$."},\\
p_1 &=& \text{``Bob measurement at $180^{\circ}$ has outcome $\uparrow$."},\\
p_2 &=& \text{``Alice measurement at $60^{\circ}$ has outcome $\uparrow$."},\\
p_3 &=& \text{``Bob measurement at $120^{\circ}$ has outcome $\uparrow$."},
\end{array}$$
Both quantum physical computations and actual experiments show that Table~\ref{prob_table2} is the resulting probability table. However, Table~\ref{prob_table2} is not the probability table associated with any multi-team.
We give the proof, as presented in \cite{abramsky}, for completeness. The method of \cite{abramsky} is based on observations about expected values of propositional formulas.
To this end, suppose that $X=(\Omega,\tau)$ is a multi-team whose domain contains the proposition symbols of some given propositional formulas $(\phi_j)_{j < k}$. Then
\[ \begin{array}{rcl}
1 - [\bigwedge_{j < k} \phi_j]_X & = & [\bigvee_{j < k}\neg\phi_j]_X \\
& = & P (\mbox{\Large $\{$ } \!\! i \in \Omega \, | \, \tau(i)(\bigvee_{j < k}\neg\phi_j) = 1 \mbox{\Large $\}$}) \\
& = & P (\bigcup_{j < k} \left\{ i \in \Omega \, | \, \tau(i)(\neg\phi_j) = 1 \right\}) \\
& \leqslant & \sum_{j < k} P(\left\{ i \in \Omega \, | \, \tau(i)(\neg \phi_j) = 1 \right\}) \\
& = & \sum_{j < k} [\neg \phi_j]_X \\
& = & \sum_{j < k} (1 - [\phi_j]_X) \\
& = & k - \sum_{j < k} [\phi_j]_X. \\ \end{array} \] Hence
\begin{equation}\label{star}
\sum_{j < k} [\phi_j]_X \leqslant k-1 + [\bigwedge_{j < k} \phi_j]_X.
\end{equation}
Furthermore if the formula $\bigwedge_{j < k} \phi_j$ is contradictory (in the sense of propositional logic), then $[\bigwedge_{j < k} \phi_j]_X =0$. Thus, the inequality (\ref{star}) becomes
\begin{equation}\label{starstar} \sum_{j < k} [\phi_j]_X \leqslant k-1. \end{equation}
Inequalities of the form (\ref{starstar}) are of great importance in the foundations of quantum mechanics. In \cite{abramsky} they are called {\em logical Bell's inequalities}.
Suppose now that the probability table represented in Table~\ref{prob_table2} arises from a multi-team. That is, there is a
multi-team $X=(\Omega,\tau)$ with $\{p_0,p_1,p_2,p_3\}\subseteq \mbox{dom}(X)$ such that
Table~\ref{prob_table2}
is the associated probability table for $\{p_0,p_1,p_2,p_3\}$ and $\mathcal{U}$.
Consider now the following propositional formulas:
$$\begin{array}{lcl}
\phi_0 &=& (p_0 \wedge p_1) \vee (\neg p_0 \wedge \neg p_1)\\
\phi_1 &=& (p_0 \wedge p_3) \vee (\neg p_0 \wedge \neg p_3)\\
\phi_2 &=& (p_1 \wedge p_2) \vee (\neg p_1 \wedge \neg p_2)\\
\phi_3 &=& (\neg p_2 \wedge p_3) \vee (p_2 \wedge \neg p_3)\\ \end{array}$$
Looking at Table~\ref{prob_table2} it is easy to notice that $[\phi_0]_X = 1$ and $[\phi_j]_X = \frac{6}{8}$ for $j = 1, 2, 3$. Furthermore, the formula $\bigwedge_{j < 4} \phi_j$ is contradictory: $\phi_0$, $\phi_1$ and $\phi_2$ express the equivalences $p_0 \leftrightarrow p_1$, $p_0 \leftrightarrow p_3$ and $p_1 \leftrightarrow p_2$, which together entail $p_2 \leftrightarrow p_3$, while $\phi_3$ expresses $p_2 \leftrightarrow \neg p_3$. But then by (\ref{starstar}) we must have that
\[ \sum_{j < 4}[\phi_j]_X = 1 + 3\cdot\frac{6}{8} = 3 + \frac{1}{4} \leqslant 3, \] a contradiction.
Thus, Table~\ref{prob_table2} cannot arise from a multi-team, because it violates the inequality (\ref{starstar}) by $\frac{1}{4}$. One consequence of this, when combined with existing actual measurements, is the remarkable result that
the polarization of a photon cannot be independently measured
in two different directions simultaneously.
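For readers who wish to check the arithmetic, the following short Python sketch (an illustration only, not part of the argument) recomputes the four expected values directly from Table~\ref{prob_table2} and their sum $\frac{13}{4}$.
\begin{verbatim}
from fractions import Fraction

# Bell's table: for each pair of symbols, the probabilities of the
# assignments (1,1), (0,1), (1,0), (0,0), in this order.
assignments = [(1, 1), (0, 1), (1, 0), (0, 0)]
bell = {
    ("p0", "p1"): [Fraction(1, 2), Fraction(0), Fraction(0), Fraction(1, 2)],
    ("p0", "p3"): [Fraction(3, 8), Fraction(1, 8), Fraction(1, 8), Fraction(3, 8)],
    ("p1", "p2"): [Fraction(3, 8), Fraction(1, 8), Fraction(1, 8), Fraction(3, 8)],
    ("p2", "p3"): [Fraction(1, 8), Fraction(3, 8), Fraction(3, 8), Fraction(1, 8)],
}

def expected(pair, phi):
    """Expected value of phi(x, y) under the row of the table for `pair`."""
    return sum(p for (x, y), p in zip(assignments, bell[pair]) if phi(x, y))

total = (expected(("p0", "p1"), lambda x, y: x == y)      # phi_0
         + expected(("p0", "p3"), lambda x, y: x == y)    # phi_1
         + expected(("p1", "p2"), lambda x, y: x == y)    # phi_2
         + expected(("p2", "p3"), lambda x, y: x != y))   # phi_3
print(total)  # 13/4 > 3, violating the logical Bell inequality
\end{verbatim}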
It is possible to construct probability tables consistent with quantum mechanics that violate (\ref{starstar}) by $1$, and so achieve {\em maximal} violation of the inequality (remember that the probability of a formula cannot be greater than $1$). Tables~\ref{popescu_box} and \ref{GHZ_state} are emblematic examples of such maximal violations. In \cite{abramsky} and \cite{7961} a general theory of probability tables (and generalizations thereof) is developed. A notion of {\em global section} is introduced and a strict hierarchy of classes of tables is defined: non-local tables, contextual tables and strongly contextual tables. As shown there, the first class corresponds exactly to the family of tables which violate a logical Bell's Inequality, while the third to the family of tables which maximally violate a logical Bell's Inequality.
\begin{table}[h]
$$\begin{array}{|c|c|c|c|c|} \hline \phantom{a} & (1, 1) & (0,1) & (1,0) & (0,0) \\ \hline
(p_0, p_1) & 1/2 & 0 & 0 & 1/2 \\
(p_0, p_3) & 1/2 & 0 & 0 & 1/2 \\
(p_1, p_2) & 1/2 & 0 & 0 & 1/2 \\
(p_2, p_3) & 0 & 1/2 & 1/2 & 0 \\ \hline \end{array} $$\caption{Popescu-Rohrlich box (cfr. \cite{abramsky}) \label{popescu_box}}\end{table}
\begin{table}[h]
$$\begin{array}{|c|c|c|c|c|c|c|c|c|} \hline \phantom{a} & 111 & 110 & 101 & 100 & 011 & 010 & 001 & 000 \\ \hline
(p_0, p_1, p_4) & 0 & 1/4 & 1/4 & 0 & 1/4 & 0 & 0 & 1/4 \\
(p_0, p_1, p_5) & 1/4 & 0 & 0 & 1/4 & 0 & 1/4 & 1/4 & 0 \\
(p_0, p_1, p_5) & 1/4 & 0 & 0 & 1/4 & 0 & 1/4 & 1/4 & 0 \\
(p_0, p_1, p_4) & 1/4 & 0 & 0 & 1/4 & 0 & 1/4 & 1/4 & 0 \\ \hline \end{array} $$\caption{$\mathrm{GHZ}$ state (cfr. \cite{abramsky}) \label{GHZ_state}}\end{table}
\section{Quantum Teams}\label{sec_quantum_teams}
A quantum team is a multi-team in which some values are indeterminate, reflecting the situation in quantum phenomena that some variables cannot be measured together. In the quantum theoretic Alice-Bob experiment the truth-values of propositions $p_0$ and $p_2$ (also $p_1$ and $p_3$) cannot be both determined. We isolate this phenomenon by specifying a sequence $\mathcal{Q}=(Q_i)_{i< m}$ of finite sets of proposition symbols. Intuitively, each $Q_i$ is a set of elementary propositions that {\em can} be measured together.
\begin{definition}[Quantum team]\label{quantum_team} Suppose $\Omega$ is a finite set. Let $\mathcal{Q}=(Q_i)_{i\in\Omega}$ be a sequence of finite non-empty sets of proposition symbols. A {\em quantum team} on $\mathcal{Q}$ is a pair $X=(\Omega,\tau)$ such that $\tau(i)$ is a truth-value assignment to the proposition symbols in $Q_i$ for each $i\in \Omega$. We call $\{Q_i:i\in\Omega\}$ the {\em support} of $X$ and denote it $\mathrm{Sp}(X)$. The set $\bigcup_{i\in\Omega}Q_i$ is called the {\em domain} of $X$ and denoted $\mbox{dom}(X)$.
\end{definition}
Note that a multi-team is always a quantum team as we can let $Q_i=\mbox{dom}(X)$ for all $i\in \Omega$. On the other hand, obviously a quantum team need not be a multi-team.
If $X=(\Omega,\tau)$ is a quantum team and $j\in \mbox{dom}(X)\setminus Q_i$, then $\tau(i)(j)$ is not determined and we call it {\em indeterminate}. Indeterminate values arise naturally in quantum physics. For example, a particle has a spin in every direction, but once it is measured in one direction, spin in other directions cannot be measured independently. In graphical representations of teams we represent indeterminate values using the symbol $-$.
To make this convention clear we give an example of a quantum team. Let
\[ Q_i = \begin{cases} \left\{ 0, 1 \right\} &\mbox{if } i < 8 \\
\left\{ 0, 3 \right\} &\mbox{if } 8 \leqslant i < 16 \\
\left\{ 1, 2 \right\} &\mbox{if } 16 \leqslant i < 24 \\
\left\{ 2, 3 \right\} &\mbox{if } 24 \leqslant i < 32, \end{cases} \]
then Figure~\ref{team2} depicts a $(Q_i)_{i< 32}$-quantum team.
{\small \begin{figure}
\caption{Example of a quantum team }
\label{team2}
\end{figure} }
Given a finite set $U$ of proposition symbols and a quantum team $(\Omega,\tau)$ on $(Q_i)_{i\in\Omega}$, we let $\Omega_{U} = \left\{ i \in\Omega \, | \, U \subseteq Q_i \right\}$. We use this notation only if $\Omega_U\ne\emptyset$.
Suppose $X=(\Omega,\tau)$ is a quantum team on $(Q_i)_{i\in\Omega}$ and $\{U\}\le\mathrm{Sp}(X)$. We can define a new quantum team $X_U=(\Omega_U,\tau_U)$ by letting $\tau_U(i)=\tau(i)\restriction U$ for $i\in \Omega_U$. For each assignment $v$ on $U$ we define
$$P_{X,U}(v)=\frac{|\{i\in \Omega_U : \tau_U(i)=v\}|}{|\Omega_U|}.$$ We write $P_{U}$ when $X$ is clear from the context. This extends canonically to a definition of the probability of a propositional formula $\phi$ with its proposition symbols in $U$ such that $\Omega_{U} \neq \emptyset$:
\[ [\phi]_{X,U} = P_{X,{U}}(\left\{ i \in \Omega_{U} \, | \, \tau_U(i) (\phi) = 1 \right\}) .\]
If $U = \mathrm{Var}(\phi)$ we simply write $[\phi]_{X}$, instead of $[\phi]_{X,\mathrm{Var}(\phi)}$.
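The following Python sketch shows how $\Omega_U$ and $[\phi]_{X,U}$ can be computed when a quantum team is stored as a list of partial assignments. The four-row team, with supports $\{p_0,p_1\}$ and $\{p_2,p_3\}$, is a hypothetical example and not one of the teams discussed in this paper.
\begin{verbatim}
from fractions import Fraction

# A quantum team as a list of partial assignments: row i is defined
# exactly on the symbols in Q_i.  Hypothetical example.
team = [
    {"p0": 1, "p1": 1},
    {"p0": 0, "p1": 0},
    {"p2": 1, "p3": 0},
    {"p2": 0, "p3": 1},
]

def omega_U(team, U):
    """Omega_U: indices of the rows on which every symbol in U is defined."""
    return [i for i, row in enumerate(team) if set(U) <= set(row)]

def expected_value(team, U, phi):
    """[phi]_{X,U}: relative frequency of phi among the rows in Omega_U."""
    rows = omega_U(team, U)
    return Fraction(sum(1 for i in rows if phi(team[i])), len(rows))

# [p0 <-> p1]_{X,{p0,p1}} = 1 while [p2 <-> p3]_{X,{p2,p3}} = 0.
print(expected_value(team, ["p0", "p1"], lambda r: r["p0"] == r["p1"]))
print(expected_value(team, ["p2", "p3"], lambda r: r["p2"] == r["p3"]))
\end{verbatim}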
\begin{definition} Suppose we have a quantum team $X=(\Omega,\tau)$ on $\mathcal{Q}$, a set $B\subseteq\mbox{dom}(X)$ and a cover $\mathcal{U}$ of $B$ such that $\mathcal{U}\le\mathcal{Q}$. The {\em associated probability table for $B$ and $\mathcal{U}$} is the following
function $U\mapsto d_U$ on $\mathcal{U}$: $$d_U(v)= P_{X,U}(v).$$
\end{definition}
A moment's reflection shows that Bell's table (i.e. Table~\ref{prob_table2}) is the probability table associated with the team represented in Figure~\ref{team2}, with $B$ and $\mathcal{U}$ as in the description of Table~\ref{prob_table2}. Similarly, it is possible to see that the Popescu-Rohrlich box (i.e. Table~\ref{popescu_box}) and the $\mathrm{GHZ}$ state (i.e. Table~\ref{GHZ_state}) arise from quantum teams\footnote{In the case of the Popescu-Rohrlich box just modify the team of Figure~\ref{team2}, changing the first non-indeterminate entry in lines 11, 12, 19, 20, 24 and 31.}.
\begin{lemma}
Every probability table with rational probabilities is the associated table of some quantum team. \end{lemma}
\begin{proof} Suppose $\mathcal{U} = ( U_i )_{i < n}$ is a cover of a set $B$ of proposition symbols. Let $(d_{U_i})_{i <n}$ be a probability table for $ \mathcal{U}$ and $B$ with rational values.
For each $i < n$, let $s^i_t$, $0\le t<2^{|U_i|}$, list all the truth assignments for the proposition symbols in $U_i$.
For $i < n$, let $a^i_t\in\omega$ and $b_i\in\omega^*$ be such that $d_{{U}_i}(s^i_t) = {a^i_t}/{b_i}$ for $t<2^{|U_i|}$. Let $m = \sum_{i < n} b_i$.
For $\sum_{i < k} b_i \leqslant j < \sum_{i < k+1} b_i$, let $Q_j = U_k$. Let $X=(\Omega,\tau)$ be the quantum team on $(Q_j)_{j < m}$ such that $\Omega=m$ and for $\sum_{i < k} b_i \leqslant j < \sum_{i < k+1} b_i$ and $\sum_{p < t} a^k_p \leqslant j - \sum_{i < k} b_i < \sum_{p < t + 1} a^k_p$ we have that $\tau(j) = s^k_t$. Then $X$ is as desired. \end{proof}
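The construction in the proof is effective. The Python sketch below implements it under the assumption that the table is given with exact rational entries; applied to Bell's table (Table~\ref{prob_table2}) it produces a $26$-row quantum team, a smaller realisation than the $32$-row team of Figure~\ref{team2} but with the same associated probability table.
\begin{verbatim}
from fractions import Fraction
from math import lcm  # Python 3.9+

# Bell's table as a mapping: context U -> distribution over assignments to U.
table = {
    ("p0", "p1"): {(1, 1): Fraction(1, 2), (0, 1): Fraction(0),
                   (1, 0): Fraction(0),    (0, 0): Fraction(1, 2)},
    ("p0", "p3"): {(1, 1): Fraction(3, 8), (0, 1): Fraction(1, 8),
                   (1, 0): Fraction(1, 8), (0, 0): Fraction(3, 8)},
    ("p1", "p2"): {(1, 1): Fraction(3, 8), (0, 1): Fraction(1, 8),
                   (1, 0): Fraction(1, 8), (0, 0): Fraction(3, 8)},
    ("p2", "p3"): {(1, 1): Fraction(1, 8), (0, 1): Fraction(3, 8),
                   (1, 0): Fraction(3, 8), (0, 0): Fraction(1, 8)},
}

def quantum_team_from_table(table):
    """For each context U pick a common denominator b and add, for each
    assignment s with d_U(s) = a/b, exactly a rows with domain U."""
    team = []
    for U, dist in table.items():
        b = lcm(*(q.denominator for q in dist.values()))
        for s, q in dist.items():
            copies = int(q * b)
            team.extend([dict(zip(U, s))] * copies)
    return team

team = quantum_team_from_table(table)
print(len(team))  # 2 + 8 + 8 + 8 = 26 rows in this realisation
\end{verbatim}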
\section{Probabilistic team logic}
As observed in \cite{abramsky}, Bell's Inequalities can be expressed in terms of expected values of simple propositional formulas. We introduce now a version of propositional logic in which Bell's Inequalities can be expressed and proved. Our approach is based on \cite{fagin}. Since experiments, as well as theoretical computations, violate Bell's Inequalities, our probabilistic team logic is not appropriate for quantum physics. In the next section we present a new logic, quantum team logic, in which the ``false" Bell's Inequalities are not provable, and which therefore has a better chance to model adequately the logic of quantum phenomena.
Following \cite{fagin}, we formulate a logic that is capable of expressing rational inequalities. The syntax and deductive system of this logic are the same as those of \cite{fagin}. The semantics is different, but equiexpressive with the original one, as we shall see. We call this logic {\em probabilistic team logic} $(\mathrm{PTL})$. Paradigm examples of formulas of $\mathrm{PTL}$ are formulas that we write as
$$\phi_0+\ldots + \phi_{k-1} \leqslant k-1,$$ expressing
the logical Bell's Inequality
$$[\phi_0]_X+\ldots + [\phi_{k-1}]_X \leqslant k-1.$$
\begin{definition} Suppose $\phi_0,\ldots,\phi_{k-1}$ are propositional formulas, $(a_j)_{j < k} \in \mathbb{Z}^k$ and $c \in \mathbb{Z}$. Then $$a_0 \phi_0+\ldots +a_{k-1} \phi_{k-1} \geqslant c$$ is an atomic formula of $\mathrm{PTL}$. \end{definition}
\begin{definition}\label{def_form} The set of formulas of $\mathrm{PTL}$ is defined as follows:
\begin{itemize}
\item Atomic formulas are formulas;
\item If $\alpha$ is a formula, then $\neg\alpha$ is a formula;
\item If $\alpha$ and $\beta$ are formulas, then $\alpha \wedge \beta$ is a formula.
\end{itemize} \end{definition}
Disjunction and implication are defined in terms of negation and conjunction in the usual manner. We shall use obvious abbreviations, such as $\phi -\psi \geqslant c$ for $\phi + (-1)\psi \geqslant c$, $\phi \geqslant \psi$ for $\phi - \psi \geqslant 0$, $\phi \leqslant c$ for $-\phi \geqslant -c$, $\phi < c$ for $\neg (\phi \geqslant c)$, $\phi = c$ for $(\phi \geqslant c) \wedge (\phi \leqslant c)$ and $\phi = \psi$ for $(\phi \geqslant \psi) \wedge (\phi \leqslant \psi)$. A formula such as $\phi \geqslant \frac{1}{3}$ can be viewed as an abbreviation for $3\phi \geqslant 1$; we can always allow rational numbers in our formulas as abbreviations for the formula that would be obtained by clearing the denominators.
\begin{definition}[Semantics] Suppose $X=(\Omega,\tau)$ is a multi-team and $\alpha$ a formula of $\mathrm{PTL}$ with propositional symbols in $\mbox{dom}(X)$. We define by induction on $\alpha$ the relation $X\models \alpha$ in the following way:
\begin{itemize}
\item $X \models a_0 \phi_0+\ldots + a_{k-1} \phi_{k-1} \geqslant c$ iff $a_0 [\phi_0]_X+\ldots +a_{k-1} [\phi_{k-1}]_X \geqslant c$;
\item $X \models \neg\alpha$ iff $X \not\models \alpha$;
\item $X \models \alpha \wedge \beta$ iff $X \models \alpha$ and $X \models \beta$.
\end{itemize} \end{definition}
We say that $\alpha$ is {\em satisfiable} if there is a multi-team $X$ such that $X \models \alpha$, and that $\alpha$ is {\em valid}, in symbols $\models \alpha$, if $X \models \alpha$ for every multi-team $X$. Notice that the arguments presented in Section~\ref{sec_bell} show that for any sequence of propositional formulas $(\phi_0, ..., \phi_{k-1})$ such that $\bigwedge_{j < k} \phi_j$ is unsatisfiable we have that the formula
\[ \sum_{j < k}\phi_j \leqslant k-1\] is a validity of $\mathrm{PTL}$. In particular, for $\phi_0, \phi_1, \phi_2$ and $\phi_3$ as in Section~\ref{sec_bell} we have that the formula \begin{equation}\label{fs}
\phi_0+\phi_1+\phi_2+\phi_3 \leqslant 3
\end{equation} is a validity of $\mathrm{PTL}$.
\begin{definition}[Deductive system]\label{ded_system1} The deductive system of $\mathrm{PTL}$ breaks into the following three sets of rules.
\[ \text{Propositional reasoning}\]
\begin{enumerate}[A)]
\item All instances of propositional tautologies.
\item If $\alpha \rightarrow \beta$ and $\alpha$, then $\beta$ (modus ponens).
\end{enumerate}
\[ \text{Probabilistic reasoning}\]
\begin{enumerate}[A)]\setcounter{enumi}{2}
\item $\phi \geqslant 0$.
\item $\phi \vee \neg \phi = 1$.
\item $\phi \wedge \psi + \phi \wedge \neg \psi = \phi$ (additivity).
\item If $\phi \equiv \psi$ in propositional logic, then $\phi = \psi$.
\end{enumerate}
\[ \text{Linear inequalities}\]
\begin{enumerate}[A)]\setcounter{enumi}{6}
\item $\phi \geqslant \phi$.
\item $\sum_{j < k} a_j \phi_j \geqslant c$ $\Leftrightarrow$ $\sum_{j < k} a_j \phi_j + 0 \psi \geqslant c$.
\item $\sum_{j < k} a_j \phi_j \geqslant c$ $\Leftrightarrow$ $\sum_{j < k} a_{\sigma(j)} \phi_{\sigma(j)} \geqslant c$ (for $\sigma$ permutation on $k$).
\item $\sum_{j < k} a_j \phi_j \geqslant c \wedge \sum_{j < k} b_j \phi_j \geqslant d$ $\Rightarrow$ $\sum_{j < k} (a_j + b_j) \phi_j \geqslant c + d$.
\item $\sum_{j < k} a_j \phi_j \geqslant c$ $\Leftrightarrow$ $\sum_{j < k} da_j \phi_j \geqslant dc$ (for $d >0$).
\item $\sum_{j < k} a_j \phi_j \geqslant c \vee \sum_{j < k} a_j \phi_j \leqslant c$.
\item $\sum_{j < k} a_j \phi_j \geqslant c$ $\Rightarrow$ $\sum_{j < k} a_j \phi_j > d$ (for $c > d$).
\end{enumerate} \end{definition}
A deduction is a sequence of formulas $(\alpha_0 , ... , \alpha_{n-1})$ such that each $\alpha_i$ is either an instance of the axioms of our deductive system or follows from one or more formulas of $\left\{ \alpha_0, ... , \alpha_{i-1} \right\}$ by one of its rules. We say that $\alpha$ is provable, in symbols $\vdash \alpha$, if there is a deduction $(\alpha_0 , ... , \alpha_{n-1})$ with $\alpha = \alpha_{n-1}$. We say that $\alpha$ is consistent if $\nvdash \alpha \rightarrow \bot$ and inconsistent otherwise.
\begin{theorem}[Completeness]\label{compl_PTL} Let $\alpha$ be a formula of $\mathrm{PTL}$. Then
\[ \vdash \alpha \;\; \Leftrightarrow \;\; \models \alpha. \] \end{theorem}
\begin{proof} It follows from Theorem~2.2 and Theorem~2.4 of \cite{fagin}, by noticing that the {\em small model} of Theorem~2.4 can be taken to be uniform by adding points to the sample space.
\end{proof}
In consequence, even the ``false'' Bell's Inequalities, such as (\ref{fs}) above, which are violated both by quantum mechanical computations and by actual experiments, are provable in $\mathrm{PTL}$. Thus $\mathrm{PTL}$ is not the ``right'' logic for arguing about probabilities in quantum physics. In the next section a better candidate is introduced.
\section{Quantum team logic}
In this section we generalize $\mathrm{PTL}$ to a more expressive logic: quantum team logic ($\mathrm{QTL}$). The syntax of this logic is more complicated than that of $\mathrm{PTL}$. This modification is necessary in order to account for the fine structure of quantum teams and prove a completeness theorem. Instead of atomic formulas of the form
$$a_0 \phi_0+\ldots +a_{k-1} \phi_{k-1}\geqslant c,$$ as in $\mathrm{PTL}$, we adopt atomic formulas of the more complicated form
$$a_0 (\phi_0; V_0)+\ldots +a_{k-1} (\phi_{k-1}; V_{k-1}) \geqslant c$$ in order to capture the phenomenon, prevalent in quantum physics, that there are limitations as to what observables can be measured simultaneously.
\begin{definition} Suppose $(\phi_j)_{j < k}$ are propositional formulas, $(a_j)_{j < k} \in \mathbb{Z}^k$, $c \in \mathbb{Z}$ and $(V_j)_{j < k}$ is a sequence of finite sets of proposition symbols such that the proposition symbols of $\phi_j$ are in $V_j$ for every $j < k$. Then
$$a_0 (\phi_0; V_0)+\ldots +a_{k-1} (\phi_{k-1}; V_{k-1}) \geqslant c$$ is an atomic formula of $\mathrm{QTL}$. \end{definition}
\begin{definition}\label{def_quantum_form} The set of formulas of $\mathrm{QTL}$ is defined as follows:
\begin{itemize}
\item atomic formulas are formulas;
\item if $\alpha$ is a formula, then $\neg\alpha$ is a formula;
\item if $\alpha$ and $\beta$ are formulas, then $\alpha \wedge \beta$ is a formula.
\end{itemize} \end{definition}
Also in this case we shall use some abbreviations, such as $(\phi; V) - (\psi; V) \geqslant c$ for $(\phi; V) + (-1)(\psi;V) \geqslant c$, $(\phi \geqslant \psi; V)$ for $(\phi; V) - (\psi; V) \geqslant 0$, $(\phi; V) \leqslant c$ for $-(\phi; V) \geqslant -c$, $(\phi; V) < c$ for $\neg ((\phi; V) \geqslant c)$, $(\phi; V) = c$ for $((\phi; V) \geqslant c) \wedge ((\phi; V) \leqslant c)$ and $(\phi = \psi; V)$ for $((\phi \geqslant \psi; V)) \wedge ((\phi \leqslant \psi; V))$. Furthermore, if in a formula $\sum_{j < k} a_j (\phi_j; V_j) \geqslant c$ we have that $V_0 = \cdots = V_{k-1}$, we simply write $(\sum_{j < k} a_j \phi_j; V_0) \geqslant c$. As for $\mathrm{PTL}$, we will allow rational numbers in our formulas as abbreviations for the formula that would be obtained by clearing the denominators.
\begin{definition}[Elementary components] Let $\alpha$ be a formula of $\mathrm{QTL}$, we define the elementary components of $\alpha$, in symbols $\mathrm{EC}(\alpha)$, by induction on $\alpha$ in the following way:
\begin{enumerate}[i)]
\item $\mathrm{EC}(\sum_{j < k} a_j (\phi_j; V_j) \geqslant c) = \left\{ (\phi_j; V_j) \, | \, j < k \right\}$;
\item $\mathrm{EC}(\neg\alpha) = \mathrm{EC}(\alpha)$;
\item $\mathrm{EC}(\alpha \wedge \beta) = \mathrm{EC}(\alpha) \cup \mathrm{EC}(\beta)$.
\end{enumerate} \end{definition}
Given an elementary component $(\phi, V)$, we call $V$ the support of $(\phi, V)$. It makes sense to define this notion for any formula of $\mathrm{QTL}$.
\begin{definition}[Support] Let $\alpha$ be a formula of $\mathrm{QTL}$, we define the support of $\alpha$, in symbols $\mathrm{Sp}(\alpha)$, by induction on $\alpha$ in the following way:
\begin{enumerate}[i)]
\item $\mathrm{Sp}(\sum_{j < k} a_j (\phi_j; V_j) \geqslant c) = \left\{ V_j \, | \, j < k \right\}$;
\item $\mathrm{Sp}(\neg\alpha) = \mathrm{Sp}(\alpha)$;
\item $\mathrm{Sp}(\alpha \wedge \beta) = \mathrm{Sp}(\alpha) \cup \mathrm{Sp}(\beta)$.
\end{enumerate} \end{definition}
We isolate two important classes of formulas of $\mathrm{QTL}$.
\begin{definition} Let $\alpha$ be a formula of $\mathrm{QTL}$ and $((\phi_j; V_j))_{j < k}$ an enumeration of its elementary components.
\begin{enumerate}[i)]
\item We say that $\alpha$ is {\em classical} if $V_0 = \cdots = V_{k-1}$.
\item We say that $\alpha$ is {\em normal} if $V_j = \mathrm{Var}(\phi_j)$ for every $j < k$.
\end{enumerate} \end{definition}
Normal formulas will be denoted omitting supports. Thus, syntactically (not semantically) they look exactly like the formulas of $\mathrm{PTL}$, and will be denoted using the same conventions used there. Classical formulas convey the same semantic content as formulas of $\mathrm{PTL}$, whence their name.
Given a quantum team $X=(\Omega,\tau)$, we let
\[ \mathrm{Sp}(X) = \left\{ \mathrm{dom}_X(i) \, | \, i \in\Omega \right\},\]
where $\mathrm{dom}_X(i)$ denotes the domain of the function $\tau(i)$. We call $\mathrm{Sp}(X)$ the support of $X$. Notice that for any $U \in \mathcal{P}_{\omega}(\omega)$ such that there is $V \in \mathrm{Sp}(X)$ with $U \subseteq V$ we have that $\Omega_U = \left\{ i \in \Omega \, | \, U \subseteq \mathrm{dom}_X(i) \right\} \neq \emptyset$.
\begin{definition}[Semantics] Let $\alpha$ be a formula of $\mathrm{QTL}$ and $X$ a quantum team with $\mathrm{Sp}(\alpha) \leqslant \mathrm{Sp}(X)$. We define by induction on $\alpha$ the relation $X \models \alpha$ in the following way:
\begin{itemize}
\item $X \models \sum_{j < k} a_j (\phi_j; V_j) \geqslant c$ iff $\sum_{j < k} a_j [\phi_j]_{X,{V_j}} \geqslant c$;
\item $X \models \neg\alpha$ iff $X \not\models \alpha$;
\item $X \models \alpha \wedge \beta$ iff $X \models \alpha$ and $X \models \beta$.
\end{itemize} \end{definition}
We say that $\alpha$ is satisfiable if there is a quantum team $X$ with $\mathrm{Sp}(\alpha) \leqslant \mathrm{Sp}(X)$ such that $X \models \alpha$, and that $\alpha$ is valid, in symbols $\models \alpha$, if $X \models \alpha$ for every quantum team $X$ with $\mathrm{Sp}(\alpha) \leqslant \mathrm{Sp}(X)$.
As is evident, with respect to normal formulas the only difference between $\mathrm{PTL}$ and $\mathrm{QTL}$ is that the set of teams with respect to which we define the semantics for $\mathrm{QTL}$ is wider than that used for $\mathrm{PTL}$ (remember that multi-teams are particular cases of quantum teams). This allows for the modeling of non-classical phenomena. Notice indeed that in the case of $\mathrm{QTL}$, for $\phi_0, \phi_1, \phi_2$ and $\phi_3$ as in Section~\ref{sec_bell} we have that the formula
\begin{equation}\label{tag1} \sum_{j < 4}\phi_j \leqslant 3 \end{equation}
is {\em not} a validity of $\mathrm{QTL}$, because the formula
\[ \sum_{j < 4}\phi_j \geqslant 3 + \frac{1}{4} \] is satisfied by the team represented in Figure~\ref{team2}. As a matter of fact, an even stronger negation of (\ref{tag1}) is satisfiable, namely
\begin{equation}\label{tag2} \sum_{j < 4}\phi_j = 4, \end{equation} because the team from which the Popescu-Rohrlich box arises satisfies (\ref{tag2}).
The fact that these formulas are consistent should be no mystery, as indeed fundamental laws of probability presuppose the kinds of classical structures that one does not find in $\mathrm{QTL}$, as the remark below shows.
\begin{remark}\label{failure_add} Let $X$ be the quantum team represented in Table~\ref{team3} and $\alpha$ the following formula:
\[ p_0 \wedge p_1 + p_0 \wedge \neg p_1 = p_0.\] Then $X \not\models \alpha$ because
\[ [p_0 \wedge p_1]_X + [p_0 \wedge \neg p_1]_X = \frac{1}{2} + \frac{1}{2} \neq \frac{1}{2} = [p_0]_X. \] \begin{table}[h]
$$\begin{array}{|c|c|c|c|}
\hline
\phantom{a} & p_0 & p_1 & p_3\\
\hline
0 & 1 & 1 & - \\
1 & 1 & 0 & - \\
2 & 0 & - & 1 \\
3 & 0 & - & 0 \\
\hline \end{array} $$\caption{Counterexample to additivity \label{team3}}\end{table} \end{remark}
Remark~\ref{failure_add} also shows that the deductive system described in Definition~\ref{ded_system1} is {\em not} sound with respect to the quantum team semantics given in the present section, because the additivity axiom (rule E)) is not respected. The following remark shows that rule F) is also violated.
\begin{remark}\label{failure_ruleE} Let $X$ be the quantum team represented in Table~\ref{team4} and let
\[ \phi = (p_0 \vee \neg p_0) \wedge p_1 \text{ and } \psi = (p_2 \vee \neg p_2) \wedge p_1.\]
Then clearly $\phi \equiv \psi$ (in propositional logic) but $X \not\models \phi = \psi$ because
\[ [\phi]_X = \frac{1}{2} \neq 0 = [\psi]_X. \]
\begin{table}[h]
$$\begin{array}{|c|c|c|c|}
\hline
\phantom{a} & p_0 & p_1 & p_2 \\
\hline
0 & 1 & 1 & - \\
1 & 1 & 0 & - \\
2 & - & 0 & 0 \\
3 & - & 0 & 0 \\
\hline \end{array} $$\caption{Counterexample to rule F) \label{team4}}\end{table} \end{remark}
Notice that the teams represented in Remarks~\ref{failure_add} and~\ref{failure_ruleE} are compatible with the thought experiment described in the introduction. Indeed, if we think of $p_0$ and $p_2$ as the outcomes of Alice's measurements, and of $p_1$ and $p_3$ as the outcomes of Bob's measurements (for some choice of angles), then the presence of indeterminates\footnote{Remember the definition of indeterminate that we gave after Definition~\ref{quantum_team}. Indeterminates are just entries of the matrix representing the team that are not defined in some rows but that are defined in some others.} in the teams is compatible with the predictions of quantum mechanics (i.e. we cannot measure the spins of the same particle at two different angles).
We now come to the deductive system of $\mathrm{QTL}$. At first sight, this system may look a little technical, but it expresses exactly what happens on the semantic side of $\mathrm{QTL}$ (and in fact we will show that it is complete). The system should be thought of as a family of localizations of the deductive system of $\mathrm{PTL}$.
\begin{definition}[Deductive system]\label{ded_system2} The deductive system of $\mathrm{QTL}$ is parametrized by finite subsets of $\mathcal{P}_{\omega}(\omega)$. For any $\mathcal{V} \subseteq_{\mathrm{fin}} \mathcal{P}_{\omega}(\omega)$ it breaks into the following four sets of $\mathcal{V}$-rules, where every formula $\alpha$ involved is required to satisfy $\mathrm{Sp}(\alpha) \leqslant \mathcal{V}$.
\[ \text{Propositional reasoning}\]
\begin{enumerate}[A)]
\item $\vdash_{\mathcal{V}} \alpha$, for $\alpha$ a propositional tautology.
\item If $\vdash_{\mathcal{V}} \alpha \rightarrow \beta$ and $\vdash_{\mathcal{V}} \alpha$, then $\vdash_{\mathcal{V}} \beta$ (modus ponens). \end{enumerate}
\[ \text{Probabilistic reasoning}\]
\begin{enumerate}[A)]\setcounter{enumi}{2}
\item $\vdash_{\mathcal{V}}(\phi; V) \geqslant 0$.
\item $\vdash_{\mathcal{V}}(\phi \vee \neg \phi; V) = 1$.
\item $\vdash_{\mathcal{V}}(\phi \wedge \psi + \phi \wedge \neg \psi = \phi; V)$ (additivity).
\item If $\phi \equiv \psi$ in propositional logic, then $\vdash_{\mathcal{V}} (\phi = \psi; V)$.
\end{enumerate}
\[ \text{Linear inequalities}\]
\begin{enumerate}[A)]\setcounter{enumi}{6}
\item $\vdash_{\mathcal{V}} (\phi \geqslant \phi; V)$.
\item $\vdash_{\mathcal{V}} \sum_{j < k} a_j (\phi_j; V_j) \geqslant c$ $\Leftrightarrow$ $\vdash_{\mathcal{V}} \sum_{j < k} a_j (\phi_j; V_j) + 0 (\psi; V) \geqslant c$.
\item $\vdash_{\mathcal{V}} \sum_{j < k} a_j (\phi_j; V_j) \geqslant c$ $\Leftrightarrow$ $\vdash_{\mathcal{V}} \sum_{j < k} a_{\sigma(j)} (\phi_{\sigma(j)}; V_{\sigma(j)}) \geqslant c$ (for $\sigma$ permutation on $k$).
\item $\vdash_{\mathcal{V}} \sum_{j < k} a_j (\phi_j; V_j) \geqslant c \wedge \sum_{j < k} b_j (\phi_j; V_j) \geqslant d$ $\Rightarrow$ $\vdash_{\mathcal{V}} \sum_{j < k} (a_j + b_j) (\phi_j; V_j) \geqslant c + d$.
\item $\vdash_{\mathcal{V}} \sum_{j < k} a_j (\phi_j; V_j) \geqslant c$ $\Leftrightarrow$ $\vdash_{\mathcal{V}} \sum_{j < k} da_j (\phi_j; V_j) \geqslant dc$ (for $d >0$).
\item $\vdash_{\mathcal{V}} \sum_{j < k} a_j (\phi_j; V_j) \geqslant c \vee \sum_{j < k} a_j (\phi_j; V_j) \leqslant c$.
\item $\vdash_{\mathcal{V}} \sum_{j < k} a_j (\phi_j; V_j) \geqslant c$ $\Rightarrow$ $\vdash_{\mathcal{V}} \sum_{j < k} a_j (\phi_j; V_j) > d$ (for $c > d$).
\end{enumerate}
\[ \text{Change of support}\]
\begin{enumerate}[A)]\setcounter{enumi}{13}
\item If $\vdash_{\mathcal{V}} (\phi; V) = 0$ and $V \subseteq V' \in \mathcal{V}$, then $\vdash_{\mathcal{V}} (\phi; V') = 0$.
\item If $\vdash_{\mathcal{V}} (\phi; V) = 1$ and $V \subseteq V' \in \mathcal{V}$, then $\vdash_{\mathcal{V}} (\phi; V') = 1$.
\end{enumerate} \end{definition}
Let $\mathcal{V} \subseteq_{\mathrm{fin}} \mathcal{P}_{\omega}(\omega)$. A $\mathcal{V}$-deduction is a sequence of formulas $(\alpha_0 , ... , \alpha_{n-1})$ such that $\mathrm{Sp}(\alpha_i) \leqslant \mathcal{V}$ for every $i < n$ and $\alpha_i$ is either an instance of $\mathcal{V}$-axioms of our deductive system or follows from one or more formulas of $\left\{ \alpha_0, ... , \alpha_{i-1} \right\}$ by one of its $\mathcal{V}$-rules. We say that $\alpha$ is provable, in symbols $\vdash \alpha$, if there is an $\mathrm{Sp}(\alpha)$-deduction $(\alpha_0 , ... , \alpha_{n-1})$ with $\alpha = \alpha_{n-1}$. We say that $\alpha$ is consistent if $\nvdash \alpha \rightarrow \bot$ and inconsistent otherwise.
Before analyzing the problem of completeness of $\mathrm{QTL}$ we notice that axioms G) - M) axiomatize the set of valid inequality formulas. We make this point clear. Based on \cite{fagin}, we define a logical system for linear inequalities, which we call $\mathrm{LinIneq}$. Let $\mathrm{IndVar} = \left\{ v_i \, | \, i \in \omega \right\}$ be a countable set, called the set of individual variables.
\begin{definition} Let $k \in \omega^*$, $(a_j)_{j < k} \in \mathbb{Z}^k$, $c \in \mathbb{Z}$ and $(x_j)_{j<k} \in \mathrm{IndVar}^k$. Then $\sum_{j < k} a_j x_j \geqslant c$ is an atomic formula of $\mathrm{LinIneq}$. The formulas of $\mathrm{LinIneq}$ are boolean combinations of atomic formulas of $\mathrm{LinIneq}$.
\end{definition}
\begin{definition} Let $f(\vec{x})$ be a formula of $\mathrm{LinIneq}$ with variables from $\vec{x} = (x_0, ..., x_{n-1})$ and $A: \vec{x} \rightarrow \mathbb{R}$. We define by induction on $f$ the relation $A \models f$ in the following way:
\begin{enumerate}[i)]
\item $A \models \sum_{j < k} a_j x_j \geqslant c$ iff $\sum_{j < k} a_j A(x_j) \geqslant c$;
\item $A \models \neg f$ iff $A \not\models f$;
\item $A \models f \wedge g$ iff $A \models f$ and $A \models g$.
\end{enumerate} \end{definition}
\begin{definition}\label{ded_system3} The deductive system of $\mathrm{LinIneq}$ breaks into the two following sets of rules.
\[ \text{Propositional reasoning}\]
\begin{enumerate}[a)]
\item All instances of propositional tautologies.
\item If $f \rightarrow g$ and $f$, then $g$ (modus ponens).
\end{enumerate}
\[ \text{Linear inequalities}\]
\begin{enumerate}[a)]\setcounter{enumi}{2}
\item $x \geqslant x$.
\item $\sum_{j < k} a_j x_j \geqslant c$ $\Leftrightarrow$ $\sum_{j < k} a_j x_j + 0y \geqslant c$.
\item $\sum_{j < k} a_j x_j \geqslant c$ $\Leftrightarrow$ $\sum_{j < k} a_{\sigma(j)} x_{\sigma(j)} \geqslant c$ (for $\sigma$ permutation on $k$).
\item $\sum_{j < k} a_j x_j \geqslant c \wedge \sum_{j < k} b_j x_j \geqslant d$ $\Rightarrow$ $\sum_{j < k} (a_j + b_j) x_j \geqslant c + d$.
\item $\sum_{j < k} a_j x_j \geqslant c$ $\Leftrightarrow$ $\sum_{j < k} da_j x_j \geqslant dc$ (for $d >0$).
\item $\sum_{j < k} a_j x_j \geqslant c \vee \sum_{j < k} a_j x_j \leqslant c$.
\item $\sum_{j < k} a_j x_j \geqslant c$ $\Rightarrow$ $\sum_{j < k} a_j x_j > d$ (for $c > d$).
\end{enumerate} \end{definition}
\begin{lemma} Let $f$ be a formula of $\mathrm{LinIneq}$, then
\[ \vdash f \;\; \Leftrightarrow \;\; \models f. \] \end{lemma}
\begin{proof} See~\cite[Theorem~4.3]{fagin}.
\end{proof}
Given a formula $f(\vec{x})$ of $\mathrm{LinIneq}$, we say that $f$ has a {\em rational solution} if there is $A: \vec{x} \rightarrow \mathbb{R}$ such that $A \models f$ and $\mathrm{ran}(A) \subseteq \mathbb{Q}$ (the set of rational numbers).
\begin{lemma}\label{axiomatization_lin_ineq} Let $f$ be a formula of $\mathrm{LinIneq}$. If $f$ is consistent (in $\mathrm{LinIneq}$), then $f$ has a rational solution.
\end{lemma}
\begin{proof} See~\cite[Theorem~4.9]{fagin}. \end{proof}
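As an aside, satisfiability of a conjunction of non-strict atomic formulas of $\mathrm{LinIneq}$ can be checked numerically by linear programming, as in the Python sketch below. This is only a floating-point sanity check, under the assumption that \texttt{scipy} is available; it does not replace the exact rational-solution argument of the lemma, and strict inequalities, negations and disjunctions would need separate handling.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

# A conjunction of constraints sum_j a_j x_j >= c, given as pairs (a, c).
# Purely illustrative system in two variables.
constraints = [
    ([1.0, 1.0], 1.0),    # x0 + x1 >= 1
    ([-1.0, 0.0], -0.5),  # x0 <= 1/2
    ([0.0, -1.0], -0.5),  # x1 <= 1/2
]

A_ub = np.array([[-a for a in row] for row, _ in constraints])  # flip to <=
b_ub = np.array([-c for _, c in constraints])

res = linprog(c=np.zeros(A_ub.shape[1]), A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * A_ub.shape[1])
print(res.success)  # True: x0 = x1 = 1/2 is a solution
\end{verbatim}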
We now come back to the problem of completeness of $\mathrm{QTL}$.
\begin{theorem}[Completeness]\label{compl_QTL} Let $\alpha$ be a formula of $\mathrm{QTL}$. Then
\[ \vdash \alpha \;\; \Leftrightarrow \;\; \models \alpha. \] \end{theorem}
\begin{proof} Soundness is easy. Regarding completeness, we show that every consistent formula is satisfiable. Let then $\alpha$ be a consistent formula and $\mathrm{Sp}(\alpha) = \mathcal{V}$. Given $V \in \mathcal{P}_{\omega}({\omega})$ and $s \in 2^V$, we let $\phi_s = \bigwedge_{v \in V} p_{v}^{s(v)}$. Let
\[ \beta_{\mathcal{V}}^0 = \bigwedge_{\substack{ V, V' \in \, \mathcal{V} \\ V \subseteq V'}} (\bigwedge_{s \in 2^V}( (\phi_s = 0; V) \rightarrow (\phi_s = 0; V'))),\] and
\[ \beta_{\mathcal{V}}^1 = \bigwedge_{\substack{ V, V' \in \, \mathcal{V} \\ V \subseteq V'}} (\bigwedge_{s \in 2^V}( (\phi_s = 1; V) \rightarrow (\phi_s = 1; V'))).\]
Define $\beta_{\mathcal{V}} = \beta_{\mathcal{V}}^0 \wedge \beta_{\mathcal{V}}^1$. Notice that because of rules N) and O) we have that $\beta_{\mathcal{V}}$ is provable. Let also
\[ \gamma_{\alpha}^0 = (\bigwedge_{V \in \mathcal{V}} (\sum_{s \in 2^{V}} (\phi_{s}; V) = 1)) \wedge (\bigwedge_{V \in \mathcal{V}} (\bigwedge_{s \in 2^V} ((\phi_{s}; V) \geqslant 0))), \]
and
\[ \gamma_{\alpha}^1 = \bigwedge_{(\phi; V) \in \mathrm{EC}(\alpha)} ((\phi; V) = \sum_{\substack{s \in 2^{\mathrm{Var(\phi)}} \\ \!\!s \models \phi}}(\phi_s; V)).\]
Define $\gamma_{\alpha} = \gamma_{\alpha}^0 \wedge \gamma_{\alpha}^1$. Notice that also $\gamma_{\alpha}$ is provable, this is because of rule C) and the following lemma.
\begin{lemma} Let $\phi$ be a propositional formula and $V \in \mathcal{P}_{\omega}(\omega)$ with $\mathrm{Var}(\phi) \subseteq V$. Then the formula
\begin{equation}\label{starstarstar}
(\phi; V) = \sum_{\substack{s \in 2^{\mathrm{Var}(\phi)} \\ \!\!s \models \phi}}(\phi_s; V) \end{equation} is provable in $\mathrm{QTL}$\footnote{Notice that the provability of the first conjunct in $\gamma_{\alpha}^0$ follows from this by taking $\phi = (\bigwedge_{v \in V} p_v) \vee \neg (\bigwedge_{v \in V} p_v)$. In fact from axiom D) and (\ref{starstarstar}) we have that \[ 1 = ((\bigwedge_{v \in V} p_v) \vee \neg (\bigwedge_{v \in V} p_v); V) = \sum_{\substack{s \in 2^{\mathrm{Var}(\phi)} \\ \!\!s \models \phi}}(\phi_s; V) = \sum_{s \in 2^{V}} (\phi_{s}; V). \] }. \end{lemma}
\begin{claimproof} It follows from the fact that without the support the formula is provable in $\mathrm{PTL}$; for details see \cite[Lemma~2.3]{fagin}. \end{claimproof}
Let now $\delta = \alpha \wedge \beta_{\mathcal{V}} \wedge \gamma_{\alpha}$. This formula is consistent, because $\alpha$ is consistent by hypothesis, $\beta_{\mathcal{V}}$ and $\gamma_{\alpha}$ are provable, and $\mathrm{Sp}(\beta_{\mathcal{V}}), \mathrm{Sp}(\gamma_{\alpha}) \subseteq \mathcal{V}$. Let $\left\{ \delta_i \, | \, i < l\right\}$ be the set of atoms occurring in $\delta$. Thinking of $\delta$ as a propositional formula in the propositional variables $(\delta_i)_{i < l}$, it is clear that the formula
\[ \delta \leftrightarrow \bigvee_{\substack{S \in 2^l \\ \!\!S \models \delta}} (\bigwedge_{i < l} \delta_i^{S(i)})\]
is provable in $\mathrm{QTL}$, because $\mathrm{QTL}$ has all the validities of propositional logic in its deductive system. Thus, from the consistency of $\delta$ we can infer the existence of an assignment $S: \left\{ \delta_i \, | \, i < l\right\} \rightarrow 2$ such that $\bigwedge_{i < l}\delta_i^{S(i)}$ is consistent. Let $\delta^{*} = \bigwedge_{i < l}\delta_i^{S(i)}$, for $S$ such an assignment. We show that there is a quantum team $X$ such that $X \models \delta^{*}$. This suffices to establish the satisfiability of $\delta$ in $\mathrm{QTL}$ and thus of $\alpha$.
Let $\left\{ x_i \, | \, i < m \right\}$ be the set of elementary components of $\delta^{*}$. Thinking of $\delta^{*}$ as a system of linear inequalities in the individual variables $(x_i)_{i < m}$, we have that $\delta^{*}$ is a formula of $\mathrm{LinIneq}$. Because of axioms G) - M), given that $\delta^{*}$ is consistent in $\mathrm{QTL}$ we must have that $\delta^{*}$ is consistent in $\mathrm{LinIneq}$. Thus, by Lemma~\ref{axiomatization_lin_ineq}, we can infer that $\delta^{*}$ has a rational solution.
Let $(q_i)_{i < e}$ be a rational solution of $\delta^{*}$ (thought of as a system of linear inequalities). For any $V \in \mathcal{V}$ we build a multi-team $X(V)$ with domain $V$ following the information encoded in $(q_i)_{i < e}$. Let $V \in \mathcal{V}$, let $(s_i)_{i < h}$ be an enumeration of those truth assignments $s$ to the proposition symbols in $V$ for which the rational number corresponding to the component $(\phi_s; V)$ is different from $0$, and let $(q_{k_i})_{i < h}$ be the corresponding enumeration of these rational numbers. Notice that because of $\gamma_{\alpha}^0$ the sequence $(q_{k_i})_{i < h}$ cannot be empty, all the elements of the sequence are positive and $\sum_{i < h} q_{k_i} = 1$. Let $t \in \omega^*$ and $(a_i)_{i < h} \in \omega^h$ be such that $\frac{a_i}{t} = q_{k_i}$ for every $i < h$. We define $X(V)=(\Omega,\tau)$, where $\Omega=t$ and $\tau: t \rightarrow 2^V$ is defined by
\[ \tau(z) = \begin{cases} s_0 &\mbox{if } z < a_0 \\
s_1 &\mbox{if } a_0 \leqslant z < a_0 + a_1 \\
... &\mbox{if } ... \\
s_{h-1} &\mbox{if } \sum_{i < h-1} a_i \leqslant z < t. \end{cases} \] Notice that for every $(\phi, V) \in \mathrm{EC}(\alpha)$ we have that
\[ [\phi]_{X(V),V} = [\phi]_{X(V)} = q, \] where $q$ is the rational number corresponding to the elementary component $(\phi; V)$. This is because of $\gamma_{\alpha}^1$ and the fact that every $X(V)$ is a multi-team.
We now linearly order $\mathcal{V}$ satisfying the requirement that if $V \subsetneq V'$ then $V' < V$. Let $(V_0, ..., V_{d-1})$ be the enumeration of $\mathcal{V}$ that follows this order. By induction on $d$, we define quantum teams $(X^i)_{i < d}$, $X^i=(\Omega^i,\tau^i)$, such that for every $j \leqslant i < d$ and $(\phi, V_{j}) \in \mathrm{EC}(\delta^*)$ we have that
\begin{equation}\label{tag1s}
(\Omega^i)_{V_{j}} = \left\{ z \in \Omega^i \, | \, V_{j} \subseteq \mathrm{dom}_{X^{i}}(z) \right\} \neq \emptyset, \end{equation}
\begin{equation}\label{tag2s} [\phi]_{X^i,{V_{j}}} = [\phi]_{X(V_{j})}.\end{equation}
Clearly $X^{d-1}$ will be such that $X^{d-1} \models \delta^{*}$. \newline {\bf Base case).} $X^0 = X(V_0)$. Notice that requirements (\ref{tag1s}) and (\ref{tag2s}) are trivially satisfied. \newline {\bf Inductive case).} Suppose we have defined $X^{i}$. We are going to define $X^{i+1}$ by gluing $X(V_{i+1})$ to $X^{i}$ without altering probabilities. Let $p$ and $m$ be the number of lines in $X(V_{i+1})$ and $X^{i}$, respectively. There are two cases.
\noindent{\textit{Case 1).}} There is no $V \in \mathcal{V}$ such that $V_{i+1} \subsetneq V$. Let $X^{i+1}$ be the team obtained extending $X^i$ with $p$ many lines with domain $V_{i+1}$ and assigning functions in $2^{V_{i+1}}$ according to the values appearing in the rows of $X(V_{i+1})$. By the fact that we extend $X^i$ (and in particular we remove none of the rows of $X^i$), and the fact that we add a strictly positive number of rows with domain $V_{i+1}$ we have that (\ref{tag1s}) is satisfied. Furthermore, for every $s \in 2^V$ we have that
\[ [\phi_s]_{X^{i+1},{V_{i+1}}} = [\phi_s]_{X(V_{i+1})},\] and the probabilities of the other supports remain unaltered, and so (\ref{tag2s}) is also satisfied.
\noindent{\textit{Case 2).}} There is at least one $V \in \mathcal{V}$ such that $V_{i+1} \subsetneq V$. Let now $m_{V_{i+1}} = \left\{ j < m \, | \, V_{i+1} \subseteq \mathrm{dom}_{X^{i}}(j) \right\}$ and $|m_{V_{i+1}}| = k$, i.e. the number of lines in $X^{i}$ where the support ${V_{i+1}}$ is defined. Notice that $k > 0$, because for any $V \in \mathcal{V}$ such that $V_{i+1} \subseteq V$ we have that $k \geqslant |m_V|$, and $m_V \neq \emptyset$ by inductive hypothesis. We extend the quantum team $X^{i}$ with $k(p-1)$ lines with domain $V_{i+1}$\footnote{Of course it is possible that $p = 1$, and so $k(p-1) = 0$. A moment's reflection shows that this is not a problem.}. For $s \in 2^{V_{i+1}}$, let
\[ [\phi_s]_{X^{i},{V_{i+1}}} = \frac{b_s}{k} \;\; \text{ and } \;\; [\phi_s]_{X(V_{i+1})} = \frac{a_s}{p}. \] For every $s \in 2^V$, we let the assignment $s$ appear in $a_sk-b_s$ many of the new lines.
\begin{claim} We claim that this works, namely:
\begin{enumerate}[i)]
\item $0 \leqslant a_sk-b_s \leqslant k(p-1)$;
\item $\sum_{s \in 2^V} (a_sk-b_s) = k(p-1)$.
\end{enumerate} \end{claim}
\begin{claimproof} Item ii) is easy, because
\[ \begin{array}{rcl}
\sum_{s \in 2^V} (a_sk-b_s) & = & \sum_{s \in 2^V} a_sk - \sum_{s \in 2^V} b_s\\
& = & k\sum_{s \in 2^V} a_s - \sum_{s \in 2^V} b_s\\
& = & kp - k\\
& = & k(p-1). \end{array} \] We verify i). Let $s \in 2^V$, we distinguish three cases.
\noindent{\em Case A)} $a_s = 0$. If $a_s = 0$, then $[\phi_s]_{X(V_{i+1})} = 0$, and so the two conjuncts expressing the formula $(\phi_s; V_{i+1}) = 0$ occur in $\delta^{*}$. But then, because of $\beta_{\mathcal{V}}^0$, for every $V \in \mathcal{V}$ such that $V_{i+1} \subsetneq V$ the two conjuncts expressing the formula $(\phi_s; V) = 0$ also occur in $\delta^{*}$, and so $[\phi_s]_{X(V)} = 0$. Thus, by induction hypothesis, for any such $V$ we have that $[\phi_s]_{X^{i},{V}} = [\phi_s]_{X(V)} = 0$. Hence, $[\phi_s]_{X^{i},{V_{i+1}}} = 0$ because
\[m_{V_{i+1}} = \left\{ j < m \, | \, V_{i+1} \subseteq \mathrm{dom}_{X^{i}}(j) \right\} = \bigcup_{\substack{V \in \mathcal{V} \\ V_{i+1} \subsetneq V}} (\left\{ j < m \, | \, V \subseteq \mathrm{dom}_{X^{i}}(j) \right\}),\] and so
\[ \left\{ j \in m_{V_{i+1}} \, | \, X^{i}(j)(\phi_s) = 1 \right\} = \bigcup_{\substack{V \in \mathcal{V} \\ V_{i+1} \subsetneq V}} (\left\{ j \in m_{V} \, | \, X^{i}(j)(\phi_s) = 1 \right\}) = \emptyset.\] From which it follows that $a_sk-b_s = 0$.
\noindent {\em Case B)} $a_s = p$. If $a_s = p$, then $[\phi_s]_{X(V_{i+1})} = 1$, and so the two conjuncts expressing the formula $(\phi_s; V_{i+1}) = 1$ occur in $\delta^{*}$. Thus, reasoning as in the case above we see that because of $\beta_{\mathcal{V}}^1$ we must have that $b_s = k$. From which it follows that $a_sk-b_s = k(p-1)$.
\noindent {\em Case C)} $ 0< a_s < p$. Simply notice that
\[ \begin{array}{rcl}
a_s < p & \Rightarrow & a_s + 1 \leqslant p \\
& \Rightarrow & ka_s + k \leqslant kp \\
& \Rightarrow & ka_s \leqslant k(p-1) \\
& \Rightarrow & ka_s - b_s \leqslant k(p-1), \end{array} \] and
\[ \begin{array}{rcl}
0 < a_s & \Rightarrow & 1 \leqslant a_s \\
& \Rightarrow & k \leqslant ka_s \\
& \Rightarrow & 0 \leqslant k -b_s \leqslant ka_s - b_s, \end{array} \] where in the last step we also use that $b_s \leqslant k$ (since $b_s$ counts lines among the $k$ lines of $m_{V_{i+1}}$). \end{claimproof}
Let $X^{i+1}$ be the quantum team resulting from the process described above. Requirement (\ref{tag1s}) is satisfied because also in this case we extend $X^i$, and already in $X^{i}$ there are $k > 0$ lines where the support $V_{i+1}$ is defined. Furthermore, for any $s \in 2^V$ we have that
\[ [\phi_s]_{X^{i+1},{V_{i+1}}} = \frac{b_s + a_sk - b_s}{k + k(p-1)} = \frac{a_s}{p} = [\phi_s]_{X(V_{i+1})},\] and the probabilities of the other supports remain unaltered, and so (\ref{tag2s}) is also satisfied. This concludes the proof of the theorem.
\end{proof}
Because the size of the team used in the above theorem can be computed from the formula $\alpha$, we get the following corollary:
\begin{corollary} The logic $\mathrm{QTL}$ is decidable. \end{corollary}
\begin{remark}
In his famous {\em Lectures on Physics} \cite{MR0213079} Richard Feynman explains to the students the ``Double-Slit Experiment'': Electrons are accelerated toward a thin metal plate with two holes (hole 1 and hole 2) in it. Beyond the wall is another plate with a movable detector in front of it so that we can record an empirical probability distribution for the spot where the particle hits the second plate. It turns out that even when particles are sent one by one, the distribution has interference as if the particles went through both holes in a wave-like fashion. Feynman poses the question whether the following proposition is true or false:
\begin{quote}
Proposition A: Each electron either goes through hole 1 or it goes through hole 2. \end{quote}
\noindent To test this proposition, Feynman places, in this hypothetical experiment, a light source near each hole so that we can observe an electron passing through that hole from the flash it gives as it scatters the light. It turns out that the probability distribution of the spot where the particle hits the second plate now has no interference, as if each particle indeed went through hole 1 or through hole 2. Feynman comments on this as follows:
\begin{quote} ``Well,'' you say, ``what about Proposition A? Is it true, or is it not true, that the electron either goes through hole 1 or it goes through hole 2?'' \end{quote}
\noindent We use quantum team logic to model this riddle. Let us adopt the following notation for the basic propositions of this situation:
\begin{itemize} \item $p_0$ = ``we see the flash at hole 1''. \item $p_1$ = ``we see the flash at hole 2''. \item $q_i $ = ``the detector got the electron at distance $i$ from the center'', for $i \in \mathbb{Z}$. \end{itemize}
Let $\phi$ be the sentence $(p_0 \vee p_1) \wedge \neg (p_0 \wedge p_1)$. This sentence is Feynman's Proposition A. It is easy to construct a quantum team $X$ so that:
\begin{enumerate} \item $X \models \phi = 1$. \item $X \models (\phi\wedge q_i) \neq q_i.$ \end{enumerate}
\noindent Thus in this quantum team, even though the probability of $\phi$ is $1$, the probability of $\phi\wedge q_i$ is different from the probability of $q_i$. Although this defies intuition about probabilities, in quantum team logic it is just a feature, not a paradox.
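For instance (a sketch, under the assumption that, as in the proof above, the probability of a formula is computed over the lines whose domain contains all of its propositional atoms): consider a team with four lines, where lines $1$ and $2$ have domain $\{p_0, p_1, q_i\}$ with values $(1,0,1)$ and $(0,1,0)$ respectively, and lines $3$ and $4$ have domain $\{q_i\}$ with value $1$. Every line on which $\phi$ is evaluated satisfies it, so the probability of $\phi$ is $1$, while the probability of $\phi \wedge q_i$ is $1/2$ and that of $q_i$ is $3/4$.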
\end{remark}
\begin{remark} We have defined the semantics of our quantum team logic with reference to arbitrary quantum teams, whether they arise from actual quantum mechanical considerations or not. As the maximal violations of Bell's Inequality show, there are quantum teams that do not correspond to any actual experiments in quantum mechanics. This raises the following question:
\begin{quote}{\bf Open Question:} Can we axiomatize completely the formulas of quantum team logic that are valid in quantum teams that correspond to quantum mechanical experiments? In particular, is that set of formulas recursive? \end{quote} \end{remark}
\section{Conclusion}
We introduced two new families of teams: multi-teams and quantum teams. The first family of teams models a notion of experiment which is compatible with classical mechanics but does not account for the predictions of quantum mechanics and experimental verifications thereof. The second family of teams is wider than the first and accounts for the non-locality phenomena which are typical of quantum mechanics. Based on these families of teams, we formulated two new logics: probabilistic team logic ($\mathrm{PTL}$) and quantum team logic $(\mathrm{QTL})$. $\mathrm{PTL}$ is only an adaptation of the system presented in \cite{fagin} to the framework of team semantics, while $\mathrm{QTL}$ is an original system, which we think is appropriate for a logical analysis of the thought experiments considered in the foundations of quantum physics and the related probability tables. The language of $\mathrm{QTL}$ is built up from rational inequalities, and the non-classical nature of quantum teams allows for the satisfiability of rational inequalities expressing violations of Bell's Inequalities. Finally, we devised a deductive system for $\mathrm{QTL}$ and showed that this system is complete with respect to the intended semantics, thus completing the logical treatment of the subject.
\end{document} | arXiv |
algebraic geometry for specific fields means no axiom of choice?
I'm beginning to study commutative algebra and algebraic geometry, and something confuses me.
Many proofs, e.g. of the Nullstellensatz, require the axiom of choice (to get the right results about ideals, I suppose). Is there any way to avoid that axiom, if you start with an algebraically closed field that is countable? What about the complex field?
I've been looking for something on this topic on the Internet, with no luck. Thanks for any help!
algebraic-geometry commutative-algebra axiom-of-choice
Yon Teh
If your field is countable, you can probably get away with most things. Do note, however, that it is consistent that $\Bbb Q$ has two non-isomorphic algebraic closures (but only one of them is countable). So even that's to be taken with a pinch of salt. – Asaf Karagila♦ Aug 3 '18 at 17:16
The axiom of choice is pretty much never necessary for doing any sort of commutative algebra or algebraic geometry which involves only finitely generated algebras over a field. In particular, any arguments that involve the existence of maximal ideals (or ideals which are maximal with a given property) can be carried out in these rings without invoking the axiom of choice, since they are Noetherian.
To be more precise, without assuming the axiom of choice, you can prove that any polynomial ring in finitely many variables over a field (and therefore also any quotient of such a ring) is Noetherian in the strong sense that any nonempty set of ideals contains a maximal element. The proof is a minor modification of the usual proof of the Hilbert basis theorem; you can find it written out in full detail at Existence of a prime ideal in an integral domain of finite type over a field without Axiom of Choice.
Eric Wofsey
Hrm... define Noetherian without choice. :-) – Asaf Karagila♦ Aug 3 '18 at 22:20
Right, there are multiple inequivalent definitions. That's why I clarified which definition I was using (namely, the strongest one). – Eric Wofsey Aug 3 '18 at 22:24
Right, yeah, I should have finished reading the answer before I posted the comment. :-P – Asaf Karagila♦ Aug 3 '18 at 22:24
Many applications of choice can be removed when we restrict attention to well-orderable - or even better, countable - fields and other objects. As Asaf says, however, there are nonetheless some facts which really do require the axiom of choice. At the same time, these often have slight weakenings that (a) don't require choice and (b) do everything we really need (e.g. that there is only one countable algebraic closure of $\mathbb{Q}$ up to isomorphism).
You might be concerned at this point about results in "concrete" mathematics which go through principles with no apparent choice-free proof. E.g. is it possible that the Riemann hypothesis could be proved, but only using choice due to its reliance on some "choicey" piece of algebraic geometry? It turns out that there is a very powerful metatheorem which says that the answer is "no:" Shoenfield absoluteness. In particular, this implies that if ZFC proves the Riemann hypothesis, or Goldbach, or Fermat's last theorem, or basically any fact from number theory or "low-level" algebraic geometry, then so does ZF. In fact it says much more: it says e.g. that we can also throw axioms like GCH onto the pile while still not changing the provability of the Riemann hypothesis. The motto I would give here is that Shoenfield absoluteness shows that number theory doesn't rely on choice.
Noah Schweber
How essential is planarity for rigorous treatments of pasting diagrams and pasting schemes?
Comment time: Jul 18th 2017 (edited Jul 18th 2017)
Author: Peter Heinig
[Reasons for starting a new thread:
(0) This topic seems fundamental and complex enough to merit a thread of its own.
(1) This topic seems likely to be of lasting interest to others in the nLab.
(2) The relevant threads that exist tend to be LatestChanges threads and so far, no change was meant on account of this topic.
Briefly: is planarity only-sufficient for a rigorous formalization of pasting schemes in 2-, 3- and 4-categories, or is there something more essential that I am missing, causing mathematicians to use plane graphs when doing so?
In more detail: my understanding is that A. J. Power in "A 2-Categorical Pasting Theorem Journal of Algebra 129, 439-445 (1990), henceforth JAlg129, gave the first rigorous proof that any order in which one tries to evaluate a given finite acyclic plane pasting diagram evaluates to the same 2-cell.
It indeed seems to be the case that (telling from what I studied of work of N. Gurski and others) for 2- and 3-categories, and even (telling from what I studied of work of T. Trimble and A. E. Hoffnung, and from in particular Trimble's diagrams hosted by J. Baez) for 4-categories, all axioms necessary to construct these structures can be expressed by "schemes" whose underlying graphs happen to be planar.
But is there a precise sense in which one can discount the possibility that one might need/want pasting-scheme-equation-expressed axioms whose underlying graphs are nonplanar?
It seems to depend on the answer to this question whether one considers the formal definitions of "pasting diagram" and "pasting scheme", which are plane graphs with some additional structure added, as fundamental or merely manageable expedients sufficient to rigorously formalize those pasting-diagram challenges that had been thrown down so far, so to speak.
Another aspect is that some graph-theorists might disagree that Power's proof makes "heavy use of the techniques of Graph Theory" (JAlg129, abstract); rather, the proof makes essential use of plane graphs, i.e., it is an application of planarity rather than of what is typically seen as graph theory.
While "heavy use" is an overstatement in my opinion, this seems a nice example of common ground between category theory and graph theory. It apparently has not been made clear enough what is necessary for what.
I have not yet looked closely into the question of how much of the planarity is indispensable for Power's proof to work out, and decided to ask first since this seems an obvious question and likely to have been asked and answered before, but I cannot find it.
The obvious question is of course: is there a non-planar relevant counterexample in the literature? I have been searching around for quite some time now.
It seems to me that, roughly speaking, one can decide to impose additional non-planar axioms, although one just happens not to need to do so in order to ensure coherence.
So, do you think Power and Yetter just happened to tame higher-composition restricted to the plane, using the plane as a convenient frame in which to carry out the induction-proof, or am I missing something essential because of which one can rest assured that no non-planar "pasting diagrams" (the latter in an informal sense) will be needed?
If not, the right formalization of pasting diagrams and nonambiguity of composition might perhaps not yet have been found.
(telling from what I studied of work of T. Trimble and A. E. Hoffnung, and from in particular Trimble's diagrams hosted by J. Baez) for 4-categories, all axioms necessary to construct these structures can be expressed by "schemes" whose underlying graphs happen to be planar.
Er? Those notes by Todd (here) display higher dimensional diagrams, notice the triple arrows. The 3d diagram takes up several 2d pages.
Re #2. Thanks for pointing out, I was referring to the planar pasting schemes which are connected by the triple-arrows.
The "role" of planarity remains not clear (to me). I am aware that the skeleton associahedron (apparently) ceases to be a planar from from n=6n=6 onwards.
Naively, isn't it natural to
expect Power in his article to proactively comment on the role of planarity; his exposition is, roughly, "Bénabou introduced (something formalizable by) planar pasting schemes, and here is a proof that his concept is consistent/coherent." (Perhaps Power was assuming readers to be well-versed in the combinatorial attempts that he is hinting at in his introduction, Johnson etc.—I recognized this is a research paper, not a textbook)
wonder whether the use of embeddings into the plane can be dispensed with (very roughly, this is like the many examples of proving number theoretic results by using complex analysis); a purely combinatorial proof seems desirable
wonder whether expositions of Power's proof have been given (in particular in textbooks)
Currently it seems to me that there is no alternative to simply thoroughly analyzing Power's proof, which I hope I will get round to doing soon.
Author: Todd_Trimble
Not sure I'm following #1 either. In particular, what is meant by the underlying graph of a "scheme".
There are various modes of presentation of higher categorical structure: parity complexes, pasting diagrams, pasting schemes, directed complexes, … and I suppose Peter is keeping his options open by referring to any one of these as some notion of "scheme". (There is also a notion of computad, which for now I'll skip over. Oh, and of course string diagrams and higher-dimensional analogues of those.)
It's been a very long time since I've looked over these four flavors of scheme – I couldn't tell you the precise axioms for any one of them. But if "parity complex" is somewhat representative, then what is going on in terms of the data is this:
We have sets $C_n$, $n \geq 0$, whose elements are called $n$-cells.

We have relations $\partial_n^-, \partial_n^+: C_{n+1} \rightrightarrows P C_n$ where $\sigma \in \partial_n^\epsilon(\tau)$ is thought of as saying that the $n$-cell $\sigma$ sits on the boundary of the $(n+1)$-cell $\tau$, either on the "negative" or "positive" side of the boundary depending on the sign $\epsilon$.

Intuitively I picture each $n$-cell $c$ as a polytope, i.e., an $n$-dimensional compact convex body obtained as an intersection of finitely many half-spaces of an affine space $\mathbb{R}^n$. (I think that of the four flavors, Richard Steiner's directed complexes come closest to capturing this Euclidean-geometric intuition, but I can't promise my memory is accurate.) But evidently these are "oriented" polytopes, with the intuitive sense that a cell $c$ is "moving" (or pushing, or deforming) its negative boundary to its positive one. Again, in this picture, if $c$ is a topological $n$-disk, then the cells of $\partial^-(c)$ fill out say the southern hemisphere of its boundary, and those of $\partial^+(c)$ the northern hemisphere. But then the orientations of the negative cells also move in a consistent direction, collectively pushing one half of the equator (a negative half) toward the other positive half, and so do the oriented positive cells. The "globularity axioms" of parity complexes spell out precisely what is meant by this.

In my tetracategory diagrams, each triple arrow that transitions from one page to the next is supposed to be conceived as a local $3$-cell in a hom-tricategory, and is pictured as a 3-dimensional polytope. Or somewhat more accurately: for each transition here's a smallish part that is actually being moved or deformed, a bubble if you like, surrounded by a larger mass which doesn't move during the transition. What is going on here is that in each case we have a higher-dimensional whiskering. If you print those pages out and arrange them in proper order, you should be able to visualize those 3d bubbles more clearly.
Author: Mike Shulman
I'm not sure this is relevant, but any pasting composite can be written as a lower dimensional pasting composite by "slicing". For instance, a 3-cell pasting in a 3-category $K$ can also be written as:

a 1-dimensional (linear) composite in a double-hom 1-category $K(A,B)(f,g)$, whose objects are 2-cells in $K$ and whose morphisms are 3-cells in $K$, and also

a 2-dimensional pasting composite in a hom-2-category $K(A,B)$, whose objects are 1-cells in $K$, whose morphisms are 2-cells in $K$, and whose 2-cells are 3-cells in $K$.
This representation displays less of the geometric structure of a pasting, but on the other hand it makes it simpler and easier to draw. In particular, of course, a 2-dimensional pasting diagram can be drawn on a 2-dimensional piece of paper, whereas a 3-dimensional one can't (other than in projection). This may explain why you see so many 2-dimensional pasting diagrams even in papers about 3-categories: they are the highest dimension (retaining the most geometric structure) that fit on a 2-dimensional piece of paper (or screen).
This reminds me too of something I learned from Street, that there is a way of defining a strict $\omega$-category as a globular set $C$ equipped with appropriate operations

$$\circ^n_j: C_n \times_{C_j} C_n \to C_n, \qquad \iota^n_j: C_j \to C_n$$

whenever $j \lt n$, such that whenever $j \lt k \lt n$, the $2$-globular set $(C_j, C_k, C_n)$ together with the $\circ$'s and $\iota$'s with these indices $j, k, n$ form a $2$-category.
(edited Jul 22nd 2017)
Many thanks for your comments.
So far, my understanding of "how essential" planarity is to all of this has not improved, only my understanding of some technicalities.
But I am working on it, and have decided to write an exposition, and possibly extensions, of JAlg129, this being such a nice little island of genuine common ground between category theory and graph theory. I should hopefully soon be able to say more.
For the time being, allow me to reiterate the implicit reference request from # 3:
do you know of expositions of JAlg129? (I would expect such a fundamental-yet-relatively-easy result to have found its way at least into several, possibly unpublished lecture notes…)
do you know what Power means precisely by (JAlg129, p. 440)
However, the restriction of that [he refers to the thesis of Michael Johnson] work to 2-categories does not yield a theorem and proof with the flavour of that adumbrated in [5] [here Power refers to Kelly–Street LNM 420].
do you know whether the thesis of Johnson is available? I have not yet looked into what University of Sydney offers in the way of making pre-electronic-times-theses available, but probably will, yet there may be easier ways. EDIT: another go at making a literature-search resulted in: M. Johnson thankfully makes it available himself, here. This seems to be one of the rather rare cases of someone (me) underestimating the onset of the electronic age (of mathematical publishing, I mean). My question for whether there are expositions of Power's theorem remains.
I have Johnson's JPAA 62 article before me (which is likely to be partly a publication of his thesis—while he does not make one of those formal this-was-a-thesis-statements), but actually studying JPAA62 and comparing it with Johnson's solution will have to wait.
Superficially put, at the risk of missing the point, Johnson is effectively saying in his introduction that there was no proof of the pasting theorem simply because there was no definition of a pasting, while Power is effectively saying that many definitions had been tried without satisfactory results, and his is the first successful resolution of the problem.
Like I said, I am surely missing something essential still, and hope to soon understand this more deeply. | CommonCrawl |
Convergence proof techniques
Convergence proof techniques are canonical components of mathematical proofs that sequences or functions converge to a finite limit when the argument tends to infinity.
There are many types of series and modes of convergence requiring different techniques. Below are some of the more common examples. This article is intended as an introduction aimed to help practitioners explore appropriate techniques. The links below give details of necessary conditions and generalizations to more abstract settings. The convergence of series is already covered in the article on convergence tests.
Convergence in Rn
It is common to want to prove convergence of a sequence $f:\mathbb {N} \rightarrow \mathbb {R} ^{n}$ or function $f:\mathbb {R} \rightarrow \mathbb {R} ^{n}$, where $\mathbb {N} $ and $\mathbb {R} $ refer to the natural numbers and the real numbers, and convergence is with respect to the Euclidean norm, $||\cdot ||_{2}$.
Useful approaches for this are as follows.
First principles
The analytic definition of convergence of $f$ to a limit $f_{\infty }$ is that[1] for all $\epsilon >0$ there exists a $k_{0}$ such that for all $k>k_{0}$, $\|f(k)-f_{\infty }\|<\epsilon $. The most basic proof technique is to find such a $k_{0}$ and prove the required inequality. If the value of $f_{\infty }$ is not known in advance, the techniques below may be useful.
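As a simple illustration (not tied to any particular application): for $f(k)=1/k$ and the candidate limit $f_{\infty }=0$, given $\epsilon >0$ one may take $k_{0}=\lceil 1/\epsilon \rceil $, since $k>k_{0}$ implies $|f(k)-0|=1/k<\epsilon $.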
Contraction mappings
In many cases, the function whose convergence is of interest has the form $f(k+1)=T(f(k))$ for some transformation $T$. For example, $T$ could map $f(k)$ to $f(k+1)=Af(k)$ for some conformable matrix $A$. Alternatively, $T$ may be an element-wise operation, such as replacing each element of $f(k)$ by the square root of its magnitude.
In such cases, if the problem satisfies the conditions of the Banach fixed-point theorem (the domain is a non-empty complete metric space) then it is sufficient to prove that $\|T(x)-T(y)\|\leq k\|x-y\|$ for some constant $0\leq k<1$ which is fixed for all $x$ and $y$. Such a $T$ is called a contraction mapping.
Example
Famous examples of the use of this approach include
• If $T$ has the form $T(x)=Ax+B$ for some matrix $A$ and vector $B$, then convergence to $(I-A)^{-1}B$ occurs if the magnitudes of all eigenvalues of $A$ are less than 1.
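A minimal numerical sketch of this affine case, in Python (illustrative only; the matrix, vector and iteration count below are arbitrary choices, not from any source):

import numpy as np

# Affine map T(x) = A x + B; the eigenvalues of A are 0.5 and 0.3,
# so the iteration x_{k+1} = T(x_k) converges to (I - A)^{-1} B.
A = np.array([[0.5, 0.1],
              [0.0, 0.3]])
B = np.array([1.0, 2.0])

x = np.zeros(2)                      # arbitrary starting point
for _ in range(100):                 # iterate x_{k+1} = A x_k + B
    x = A @ x + B

fixed_point = np.linalg.solve(np.eye(2) - A, B)   # (I - A)^{-1} B
print(np.allclose(x, fixed_point))                # True: the iterates converge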
Convergent subsequences
Every bounded sequence in $\mathbb {R} ^{n}$ has a convergent subsequence, by the Bolzano–Weierstrass theorem. If these all have the same limit, then the original sequence converges to that limit. If it can be shown that all of the subsequences of $f$ have the same limit, such as by showing that there is a unique fixed point of the transformation $T$, then the initial sequence must also converge to that limit.
Monotonicity (Lyapunov functions)
Every bounded monotonic sequence in $\mathbb {R} ^{n}$ converges to a limit.
This approach can also be applied to sequences that are not monotonic. Instead, it is possible to define a function $V:\mathbb {R} ^{n}\rightarrow \mathbb {R} $ such that $V(f(n))$ is monotonic in $n$. If $V$ satisfies the conditions to be a Lyapunov function then $f$ is convergent. Lyapunov's theorem is normally stated for ordinary differential equations, but can also be applied to sequences of iterates by replacing derivatives with discrete differences.
The basic requirements on $V$ are that
1. $V(f(n+1))-V(f(n))<0$ for $f(n)\neq 0$ and $V(0)=0$ (or ${\dot {V}}(x)<0$ for $x\neq 0$)
2. $V(x)>0$ for all $x\neq 0$ and $V(0)=0$
3. $V$ be "radially unbounded", so that $V(x)$ goes to infinity for any sequence with $\|x\|$ that tends to infinity.
In many cases, a Lyapunov function of the form $V(x)=x^{T}Ax$ can be found, although more complex forms are also used.
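An illustrative Python sketch (the specific map and Lyapunov function below are arbitrary choices for demonstration, not drawn from any source): the iterates of a damped rotation are not monotone coordinate-wise, yet the quadratic function $V(x)=x^{T}x$ decreases along the sequence, certifying convergence to $0$.

import numpy as np

theta = 0.5
R = 0.9 * np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])  # damped rotation; eigenvalue magnitudes 0.9 < 1

x = np.array([1.0, 0.0])
values = []
for _ in range(50):
    x = R @ x
    values.append(x @ x)             # V(x) = ||x||^2 evaluated along the iterates

print(all(b < a for a, b in zip(values, values[1:])))  # True: V is strictly decreasing
print(np.linalg.norm(x))                               # ~0.005, the iterates tend to 0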
For delay differential equations, a similar approach applies with Lyapunov functions replaced by Lyapunov functionals also called Lyapunov-Krasovskii functionals.
If the inequality in the condition 1 is weak, LaSalle's invariance principle may be used.
Convergence of sequences of functions
To consider the convergence of sequences of functions,[2] it is necessary to define a distance between functions to replace the Euclidean norm. These often include
• Convergence in the norm (strong convergence) -- a function norm, such as $ \|g\|_{f}=\int _{x\in A}\|g(x)\|dx$ is defined, and convergence occurs if $||f(n)-f_{\infty }||_{f}\rightarrow 0$. For this case, all of the above techniques can be applied with this function norm.
• Pointwise convergence -- convergence occurs if for each $x$, $f_{n}(x)\rightarrow f_{\infty }(x)$. For this case, the above techniques can be applied for each point $x$ with the norm appropriate for $f(x)$.
• uniform convergence -- In pointwise convergence, some (open) regions can converge arbitrarily slowly. With uniform convergence, there is a fixed convergence rate such that all points converge at least that fast. Formally, $\lim _{n\to \infty }\,\sup\{\,\left|f_{n}(x)-f_{\infty }(x)\right|:x\in A\,\}=0,$ where $A$ is the domain of each $f_{n}$.
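A standard illustrative contrast: on $A=[0,1]$ the functions $f_{n}(x)=x^{n}$ converge pointwise to the function $f_{\infty }$ that is $0$ for $x<1$ and $1$ at $x=1$, but the convergence is not uniform, since $\sup _{x\in [0,1]}|f_{n}(x)-f_{\infty }(x)|=1$ for every $n$.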
See also
• Convergence of Fourier series
Convergence of random variables
Random variables[3] are more complicated than simple elements of $\mathbb {R} ^{n}$. (Formally, a random variable is a mapping $x:\Omega \rightarrow V$ from an event space $\Omega $ to a value space $V$. The value space may be $\mathbb {R} ^{n}$, such as the roll of a die, and such a random variable is often spoken of informally as being in $\mathbb {R} ^{n}$, but convergence of a sequence of random variables corresponds to convergence of the sequence of functions, or of the distributions, rather than of the sequence of values.)
There are multiple types of convergence, depending on how the distance between functions is measured.
• Convergence in distribution -- pointwise convergence of the distribution functions of the random variables to the limit
• Convergence in probability
• Almost sure convergence -- pointwise convergence of the mappings $x_{n}:\Omega \rightarrow V$ to the limit, except at a set in $\Omega $ with measure 0 in the limit.
• Convergence in the mean
Each has its own proof techniques, which are beyond the current scope of this article.
See also
• Dominated convergence
• Carleson's theorem establishing the pointwise (Lebesgue) almost everywhere convergence of Fourier series of L2 functions
• Doob's martingale convergence theorems a random variable analogue of the monotone convergence theorem
Topological convergence
For all of the above techniques, some form of the basic analytic definition of convergence above applies. However, topology has its own definition of convergence. For example, in a non-Hausdorff space, it is possible for a sequence to converge to multiple different limits.
References
1. Ross, Kenneth. Elementary Analysis: The Theory of Calculus. Springer.
2. Haase, Markus. Functional Analysis: An Elementary Introduction. American Mathematical Society.
3. Billingsley, Patrick (1995). Probability and Measure. John Wiley & Sons.
| Wikipedia |
\begin{document}
\title{$L^1-$Theory for Incompressible Limit of Reaction-Diffusion \\ Porous Medium Flow with Linear Drift}
\author{Noureddine Igbida \thanks{Institut de recherche XLIM-DMI, UMR-CNRS 7252, Faculté des Sciences et Techniques, Université de Limoges, France. E-mail : {\tt [email protected]} . }}
\date{\today}
\maketitle
\begin{abstract} Our aim is to study existence, uniqueness and the limit, as $m\to\infty,$ of the solution of the reaction-diffusion porous medium equation with linear drift $\displaystyle\partial_t u -\Delta u^m +\nabla \cdot (u \: V)=g(t,x,u) $ in a bounded domain with Dirichlet boundary condition. We treat the problem without any sign restriction on the solution, with an outward pointing vector field $V$ on the boundary and a general source term $g$ (including the continuous Lipschitz case). By means of new $BV_{loc}$ estimates in a bounded domain, under reasonably sharp Sobolev assumptions on $V$, we show uniform $L^1-$convergence towards the solution of the reaction-diffusion Hele-Shaw flow with linear drift. \\
\textbf{MSC2020 database} : 35A01, 35A02, 35B20, 35B35
\end{abstract}
\section{Introduction and main results}
\subsection{Introduction}
Let $\Omega\subset\RR^N$ be a bounded open set with regular boundary $\displaystyle \partial\Omega =:\Gamma .$ Our aim here is to study the limit, as $m\to\infty,$ of the equation
\begin{equation}\label{eq0}
\displaystyle \frac{\partial u }{\partial t} -\Delta u^m +\nabla \cdot (u \: V)=g(t,x,u) \quad \hbox{ in } Q:= (0,T)\times \Omega ,
\end{equation}
where the expression $r^m$ denotes $\vert r\vert^{m-1}r,$ for any $r\in \RR,$ $1<m<\infty ,$ $V\: :\: \Omega\to \RR^N$ is a given vector field and $g\: :\: Q\times \RR\to \RR$ is a Carathéodory application.
There is a huge literature on qualitative and quantitative studies of \eqref{eq0} in the case where $V\equiv 0$. We refer the reader to the book \cite{Vbook} for a thoroughgoing survey of results as well as the corresponding literature. In the case $V\not\equiv 0 , $ the PDE is a nonlinear version of the Fokker-Planck equation of porous-media type. This kind of evolutionary problem has gained significant attention in recent years. It arises mainly in biological applications and in the theory of population dynamics. Here $u$ represents the density of agents traveling along a vector field $V$ and subject to some local random motion; i.e. random exchanges at a microscopic level between the agent at a given position and neighboring positions (one can see for instance \cite{BeHi,MRS1,MRS2,MRSV,PQV1,Ca,Di} and the references therein for more details on the applications and motivations).
Despite the broad results on nonlinear diffusion-transport PDE (cf. \cite{AltLu,AnIg1,AnIg2,Ca,IgUr,Ot}, see also the expository paper \cite{AnIgSurvey} for a complete list), the structure of \eqref{eq0}, where the drift depends linearly on the density, places this class definitely outside the scope of the current literature. As far as we know, existence and uniqueness of weak solutions of \eqref{eq0} have been investigated only for the case of one population in the conservative case, leading to a one-phase problem in $\RR^N$ or else in a bounded domain with Neumann boundary condition (cf. \cite{BeHi,Di} in the case $V=\nabla \varphi$ with reasonable assumptions on the potential $\varphi$ and $g\equiv 0$). One can see also \cite{KiLei} for the study in the framework of viscosity solutions. Asymptotic convergence to equilibrium is shown in \cite{BeHi} and \cite{CaJuMaTo} when $\varphi$ is convex. For the regularity of the solution one can see \cite{KZ} and the references therein.
In this paper, we focus chiefly on the case of a bounded domain with Dirichlet boundary condition to study existence and uniqueness of a weak solution of the general formulation \eqref{eq0} as well as its limit, as $m\to\infty,$ under general reasonable assumptions on $g$ and $V.$ Observe that the diffusion term $-\Delta u^m $ may be written as $-\nabla \cdot \left (u\: \nabla \frac{m}{ m-1 } \vert u\vert^{m-1}\right)$ (indeed, at least formally, $u\,\nabla\big(\frac{m}{m-1}\vert u\vert^{m-1}\big)= m\vert u\vert^{m-1}\nabla u=\nabla u^{m}$), so that the exponent $m >1$ manages in some sense the mobility of the agent through a ``mobility potential'' given by $\frac{m}{m-1}\vert u\vert ^{m-1}.$ For large $m,$ this term becomes
$$ \frac{m}{m-1}\vert u\vert ^{m-1} \approx \left\{ \begin{array}{ll}
0\quad & \hbox{ if } \vert u\vert <1\\
+\infty & \hbox{ if } \vert u\vert >1 .
\end{array} \right. $$ This formal analysis suggests that the limiting density $u $ is constrained to satisfy $\vert u\vert \leq 1,$ within two main phases: the so-called congestion phase, which corresponds to $[\vert u\vert =1],$ and a free one corresponding to $\vert u\vert <1.$
More precisely, the limiting PDE system coincides at least formally with a density constrained diffusion equation with a linear drift
\begin{equation} \label{pdetypehs}
\left. \begin{array}{l}
\displaystyle \frac{\partial u }{\partial t} -\nabla\cdot( u\: \nabla \: \vert p\vert ) +\nabla \cdot (u \: V)=g(t,x,u) \\ u\in \sign(p)
\end{array}\right\} \quad \hbox{ in } Q,
\end{equation}
where we denote by $\sign$ the maximal monotone graph given by
$$\sign(r)= \left\{ \begin{array}{ll}
\displaystyle 1 &\hbox{ for }r>0\\
\displaystyle [-1,1]\quad & \hbox{ for } r=0\\
\displaystyle -1 &\hbox{ for }r<0. \end{array}\right. \quad \hbox{ for }r\in \RR.$$
See that $\nabla\cdot( u\: \nabla \: \vert p\vert ) =\Delta p,$ so that \eqref{pdetypehs} is a reaction-diffusion system of Hele-Shaw type with a linear drift. The emergence of the density constraint $\vert u\vert \leq 1$ is closely connected to the microscopic non-overlapping constraint between the agents in the limiting case. The complementary condition between the density $u$ and the limiting mobility potential $p$ typically allows one to describe the motion of the congested zones, which are characterized by $[p\neq 0].$ This equation appears in pedestrian flow (cf. \cite{MRS1}) and in biological applications (cf. \cite{CaCrYa} and the references therein). The study of the problem without any sign restriction on the solution enables us in particular to cover mathematical models of two species in interaction occupying the same habitat, like diffusion-aggregation models. In these cases, $u$ represents, through its positive and negative parts, the densities of the two species respectively. The source term $g$ models reaction phenomena connected to agent supply in biological models. This happens in particular when one deals with a reaction-diffusion system coupling the equations \eqref{eq0} or \eqref{pdetypehs} with other PDEs. As to the boundary condition, the homogeneous Dirichlet one corresponds to the possibility of leaving through the boundary (exits) without any charge. One can see \cite{EIG} for other possibilities of boundary conditions and their interpretation.
In this paper, we give the proofs of existence, uniqueness and of the convergence process to the Hele-Shaw flow with linear drift in the general context of the $L^1-$theory for nonlinear PDE. The approach differs quite significantly from other recent papers which treat the problem in $\RR^N$ (cf. \cite{ Noemi,NoDePe,KiPoWo}) in the one-phase case by using mainly the classical Aronson-Bénilan estimate (cf. \cite{ArBe}) for nonnegative solutions of the porous medium equation.
In particular, our approach enables us to give answers and evidence to many questions left open in papers dedicated to this subject. Actually, we treat the problem without any sign restriction on the solution in a bounded domain with Dirichlet boundary condition, low regularity on $V,$ and a general source term $g.$ Moreover, the approach provides substantial groundwork for the treatment of the challenging case of non-compatible initial data; i.e. the case where $\Vert u_0\Vert_{\infty} >1$. This will be treated separately in the forthcoming work \cite{Igpmsing}.
\subsection{Historical notes}
The study of the incompressible limit of \eqref{pm} has received a lot of attention due to its interest for applications and for the description of constrained nonlinear flows. The problem is well understood by now in the case where $V$ and $g$ vanish (see for instance \cite{BeCr3} and \cite{BeBoHe}). One can see also \cite{Igshaw} for the non-homogeneous Neumann boundary condition and \cite{GQ} for the non-homogeneous Dirichlet one. In the case where $V\equiv 0$ and $g\not\equiv 0,$ it is known (see \cite{BeIgsing} for Dirichlet boundary condition and \cite{BeIgNeumann} for Neumann boundary condition) that the solution of the problem
\begin{equation}\label{eq1}
\displaystyle \frac{\partial u }{\partial t} -\Delta u^m =g(.,u) \quad \hbox{ in } Q ,
\end{equation}
converges, as $m\to\infty,$ to the solution of the so called Hele-Shaw problem
\begin{equation} \label{hs1}
\left. \begin{array}{l}
\displaystyle \frac{\partial u }{\partial t} -\Delta p=g(.,u) \\ u\in \sign(p)
\end{array}\right\} \quad \hbox{ in } Q.
\end{equation}
The convergence holds in $\mathcal{C}([0,T),L^1(\Omega))$ in the case where $\vert u_0\vert \leq 1$ a.e. in $\Omega;$ otherwise it holds in $\mathcal{C}((0,T),L^1(\Omega))$ and a boundary layer appears at $t=0.$ This boundary layer is given by the plateau-like function referred to as `mesa', and it is given by the limit, as $m\to\infty,$ of the solution of the homogeneous porous medium equation
\begin{equation} \label{pm}
\displaystyle \frac{\partial u }{\partial t} =\Delta u^m \quad \hbox{ in } Q.
\end{equation} Yet, one needs to be careful with the special case of Neumann boundary condition since, in this case, the limiting problem \eqref{hs1} could be ill-posed. Depending on the assumptions on $g,$ the limiting problem exhibits an extra phase to be mixed with the Hele-Shaw phase (see \cite{BeIgNeumann} for more details).
Other variations of reaction term have been proposed in recent years together
with the analysis of their incompressible limit (see for instance \cite{PQV1,NoPe,DiSc,KiPo} and the references therein).
The recent work \cite{GuKiMe} treats the particular case of a linear reaction term with a special focus on the limit of the so-called associated pressure $p:= \frac{m}{m-1} u^{m-1}$; furthermore, the authors seem to be unaware of the general works \cite{BeIgsing,BeIgNeumann}.
The treatment of the case where $V\not\equiv 0,$ leads to the formal reaction-diffusion dynamic of Hele-Shaw type with a linear drift ; i.e. \begin{equation} \label{hsg}
\left. \begin{array}{l}
\displaystyle \frac{\partial u }{\partial t} -\Delta p +\nabla \cdot (u \: V)=g(t,x,u) \\ u\in \sign(p)
\end{array}\right\} \quad \hbox{ in } (0,T)\times \Omega. \end{equation} The problem was studied first in \cite{BeIgconv} when $g\equiv 0$ and the drift term is of the type $\nabla \cdot F(u),$ with $F\: :\: \RR\to \RR^N$ a Lipschitz continuous function (this corresponds particularly to space-independent drift). In \cite{BeIgconv}, it is proven that $L^1(\Omega)$-compactness result remains to be true uniformly in $t.$ Moreover, the limiting problem here is simply the transport equation \begin{equation}\label{transport0} \partial_t u+\nabla \cdot F(u)=0. \end{equation} The Hele-Shaw flow desappears since the nature of the transport term (incompressible) in \eqref{transport0} compel the solution to be less than $1,$ and then $p\equiv 0.$ Then, in \cite{KiPoWo} the authors studied the case of space dependent drift and reaction terms both linear and regular in $\Omega=\RR^N.$ Assuming a monotonicity on $ V,$ and using the notion of viscosity solutions, the authors study the limit of nonnegative solution, as $m\to\infty$. The benefit of this approach is its ability to cover accurately the free boundary view of the limiting problem (particularly the dynamic of the so called congestion region $[p>0]$), as well as the rate of convergence. Using a weak (distributional) interpretation of the solution the same problem was studied recently in \cite{Noemi} with a variant of reaction term $g$ in $\Omega=\RR^N.$ Using a blend of recently developed tools on Aronson-Bénilan regularizing effect as well as sophisticated $L^p-$regularity of the pressure gradient the authors studied the incompressible limit in the case of nonnegative compatible initial data and regular drift (one can see also \cite{NoDePe} for some convergence rate in a negative Sobolev norm).
Here, we study the incompressible limit of \eqref{pm} subject to Dirichlet boundary condition and compatible initial data (even sign-changing data). The reaction term satisfies general conditions, including Lipschitz continuity assumptions, and the given velocity field enjoys Sobolev regularity and an outward pointing condition on the boundary that we will make precise below. To this aim we use $L^1-$nonlinear semi-group theory, more or less in the same spirit as the approach of Bénilan and Crandall \cite{BeCr3}. This consists in first establishing the $L^1-$strong compactness for the stationary problem and then working with the general theory of nonlinear semi-groups to pass to the limit in the evolution problem. The $L^1-$compactness relies on a new $BV_{loc}-$estimate we establish for the stationary problem in a bounded domain with reasonable assumptions on $V$ in the neighborhood of the boundary.
\subsection{Existence and uniqueness results}
We assume that $\Omega\subset\RR^N$ is a bounded open set, with regular boundary $\partial \Omega$ (say, piecewise $\mathcal{C}^2$). Throughout the paper, we assume that $V\in W^{1,2}(\Omega)$, $\nabla \cdot V\in L^\infty(\Omega),$ and that $V$ satisfies the following outward pointing condition on the boundary:
\begin{equation}\label{HypV0}
V\cdot \nu \geq 0\quad \hbox{ on }\partial \Omega,
\end{equation} where $\nu$ represents the outward unit normal to the boundary $\partial \Omega.$ Notice here that this condition is fundamental in the case of Dirichlet boundary condition. This assumption is natural in many applications. Even if it may look restrictive, it is fundamental for the uniqueness of weak solutions of the limiting problem. A counterexample to uniqueness of weak solutions for a Hele-Shaw problem is given in \cite{Igshaw} when this condition is not fulfilled.
See here that $V\cdot \nu \in H^{-\frac{1}{2}}(\partial\Omega),$ so that \eqref{HypV0} needs to be understood a priori in a weak sense ; i.e.
\begin{equation}\label{Vhun}
\int_\Omega V\cdot \nabla \xi \: dx +\int_\Omega \nabla \cdot V\: \xi\: dx \geq 0,\quad \hbox{ for any }0\leq \xi\in H^1( \Omega).
\end{equation} To deal with this assumption, we operate technically with the Euclidean distance-to-the-boundary function $d(.,\partial \Omega)$. For any $h>0,$ we set
\begin{equation}\label{xih}
\xi_h(x)=\frac{1}{h}\min\Big\{h,d (x,\partial \Omega)\Big\} \quad \hbox{ and } \quad \nu_h(x)=-\nabla \xi_h(x) , \quad \hbox{ for any }x\in \Omega.
\end{equation}
We see that $ \xi_h \in H^1_0(\Omega) $, $0\leq \xi_h\leq 1$ in $\Omega$ and
$$ \nu_h(x) = - \frac{1}{h}\nabla\: d(.,\partial \Omega) ,\quad \hbox{ for any }x\in \Omega\setminus \Omega_h=:D_h \hbox{ and } 0<h\leq h_0 \hbox{ (small enough)},$$
where \begin{equation}
\Omega_h=\Big\{ x\in \Omega\: :\: d(x,\partial \Omega)>h \Big\},\quad \hbox{ for small }h>0.
\end{equation}
In particular, for a.e. $x\in D_h,$ we have
\begin{equation}\label{defnuh}
\nu_h(x) = \frac{1}{h}\nu(\pi(x)),
\end{equation} where $\pi(x)$ denotes the projection of $x$ onto the boundary $\partial \Omega.$
Thanks to \eqref{Vhun}, we see that
\begin{equation}\label{HypV1}
\liminf_{h\to 0} \int_{\Omega\setminus \Omega_h} \xi\: V(x) \cdot \nu_h(x) \: dx \geq 0 , \quad \hbox{ for any }0\leq \xi \in H^1(\Omega).
\end{equation}
Nevertheless, to avoid much more technicality in the proof of uniqueness, we assume that $V$ satisfies \eqref{HypV1} for any $0\leq \xi\in L^2(\Omega)$ (cf. Remark \ref{RemboundaryCond2}). We do not know whether this is a consequence of the assumption \eqref{HypV0}. In any case, this condition remains true for a large class of practical situations and necessarily implies \eqref{HypV0}.
\begin{remark}
Remember that, thanks to the local $\mathcal{C}^2-$boundary regularity assumption on $\Omega,$ for any $0\leq \Phi\in H^1_0(\Omega),$ we have
\begin{equation}\label{h10prop}
\liminf_{h\to 0} \int_\Omega \nabla \Phi\cdot \nabla \xi_h\: dx \leq 0 .
\end{equation} For the case of a domain with Lipschitz boundary, one needs to work with more sophisticated test functions in the spirit of $\xi_h$ to fulfill a property like \eqref{h10prop} (one can see Lemma 4.4 and Remark 6.5 of \cite{AnIg2}). So, typical examples of vector fields $V$ which satisfy \eqref{HypV1} are given by
\begin{equation}
V=-\nabla \Phi \quad \hbox{ and }\quad 0\leq \Phi\in H^1_0(\Omega)\cap W^{2,2}(\Omega).
\end{equation}
Here $H^1_0(\Omega)$ denotes the usual Sobolev space
$$H^1_0(\Omega) =\Big\{ u\in H^1(\Omega)\: :\ u=0, \: \: \mathcal L^{N-1}\hbox{-a.e. in }\partial \Omega \Big\} . $$
\end{remark}
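To fix ideas, here is a simple illustrative example (given only for the reader's convenience and not used elsewhere) of a vector field of the form above. Take $\Omega=B_1(0)$ the unit ball of $\RR^N$ and
$$ \Phi(x)=\frac{1}{2}\left(1-\vert x\vert^2\right),\qquad V(x)=-\nabla \Phi(x)=x,\qquad \hbox{ for any }x\in \Omega. $$
Then $0\leq \Phi\in H^1_0(\Omega)\cap W^{2,2}(\Omega)$, $\nabla\cdot V=N\in L^\infty(\Omega)$ and $V\cdot \nu =\vert x\vert^2=1$ on $\partial \Omega$, so that both \eqref{HypV0} and \eqref{HypV1} are satisfied.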
We consider the evolution problem
\begin{equation} \label{pmef}
\left\{ \begin{array}{ll}
\displaystyle \frac{\partial u }{\partial t} -\Delta u^m +\nabla \cdot (u \: V)=f \quad & \hbox{ in } Q:= (0,T)\times \Omega\\ \\
\displaystyle u= 0 & \hbox{ on }\Sigma := (0,T)\times \partial \Omega\\ \\
\displaystyle u (0)=u _0 &\hbox{ in }\Omega.
\end{array} \right.
\end{equation}
\begin{definition}[Notion of solution] \label{defws} A function $u $ is said to be a weak solution of \eqref{pmef}
if $u \in L^2(Q)$, $p:=u^m\in L^2
\left(0,T;H^1_0(\Omega)\right)$ and
\begin{equation}
\label{evolwf}
\displaystyle \frac{d}{dt}\int_\Omega u \:\xi+\int_\Omega ( \nabla p - u \:V) \cdot \nabla\xi = \int_\Omega f\: \xi , \quad \hbox{ in }{\mathcal{D}}'(0,T),\quad \forall \: \xi\in H^1_0(\Omega).
\end{equation}
We simply say that $u$ is a solution of \eqref{pmef} if $u\in \mathcal{C}([0,T),L^1(\Omega))$, $u(0)=u_0$ and $u$ is a weak solution of \eqref{pmef}.
\end{definition}
We denote by $\signp$ (resp. $\sign^-$) the maximal monotone graph given by $$\signp(r)= \left\{ \begin{array}{ll}
\displaystyle 1 &\hbox{ for }r>0\\
\displaystyle [0,1]\quad & \hbox{ for } r=0\\
\displaystyle 0 &\hbox{ for }r<0. \end{array}\right. \quad \hbox{ (resp. } \sign^-(r)=\sign^+(-r),\hbox{ for }r\in \RR).$$ Moreover, we denote by $\sign^{\pm}_0 $ the discontinuous functions defined from $\RR$ to $\RR$ by $$ \spo(r)= \left\{ \begin{array}{ll}
\displaystyle 1 &\hbox{ for }r>0\\
\displaystyle 0 &\hbox{ for }r\leq 0 \end{array}\right. \quad \hbox{ and } \sign^-_0(r)= \sign_0^+(-r),\hbox{ for }r\in \RR. $$
As we said above, to avoid additional technicalities in the proofs of existence and uniqueness of a weak solution, we assume throughout this section that $V$ satisfies the following outpointing condition on the boundary :
\begin{equation}\label{HypV}
\liminf_{h\to 0} \frac{1}{h} \int_{\Omega\setminus \Omega_h} \xi\: V(x) \cdot \nu(\pi(x)) \: dx \geq 0 , \quad \hbox{ for any }0\leq \xi \in L^2(\Omega).
\end{equation}
\begin{theorem} \label{tcompcmef}
If $u_1$ and $u_2$ are two weak solutions of \eqref{pmef} associated with $f_1,\ f_2\: \in L^1(Q)$ respectively, then there exists $\kappa\in L^\infty(Q)$ such that $\kappa\in \signp(u_1-u_2)$ a.e. in $Q$ and
\begin{equation}
\label{evolineqcomp}
\frac{d}{dt} \int_\Omega ( u _1-u _2)^+ \: dx \leq \int_\Omega \kappa \: ( f_1-f_2)\: dx ,\quad \hbox{ in }\mathcal{D}'(0,T).
\end{equation}
In particular, we have
\begin{enumerate}
\item $ \frac{d}{dt} \Vert u_1-u_2\Vert_{1} \leq \Vert f_1-f_2\Vert _{1},$ in $\mathcal{D}'(0,T).$
\item If $f_1\leq f_2,$ a.e. in $Q,$ and $u_1(0)\leq u_2(0) $ a.e. in $\Omega,$ then
$$u_1\leq u_2,\quad \hbox{ a.e. in }Q.$$
\end{enumerate}
\end{theorem}
\begin{theorem}\label{texistevolm} For any $u_0\in L^2(\Omega) $ and $f\in L^2(Q),$ the problem \eqref{pmef} has a solution $u.$
Moreover, $u$ satisfies the following :
\begin{enumerate}
\item For any $q\in [1,\infty],$ we have
\begin{equation} \label{lquevol}
\Vert u(t)\Vert_q \leq M_q:= \left\{
\begin{array}{lll}
e^{ (q-1)\: T\: \Vert (\nabla \cdot V)^-\Vert_\infty } \left( \Vert u_0 \Vert_q + \int_0^T \Vert f (t)\Vert_q \: dt \right ) \quad &\hbox{ if } & q<\infty \\
e^{ T \: \Vert (\nabla \cdot V)^-\Vert_\infty } \left( \Vert u_0 \Vert_\infty + \int_0^T \Vert f (t)\Vert_\infty \: dt \right)&\hbox{ if } & q=\infty
\end{array}\right. .
\end{equation}
\item For any $t\in [0,T),$ we have
\begin{equation}\label{lmuevol}
\begin{array}{c}
\frac{1}{m+1} \frac{d}{dt} \int_\Omega \vert u\vert ^{m+1} \: dx+ \int_\Omega \vert \nabla p\vert^2 \: dx\leq \int_\Omega f\:p \: dx + \int p \: u \: (\nabla \cdot V )^- \: dx, \quad \hbox{ in }\mathcal{D}'(0,T).
\end{array}
\end{equation}
\end{enumerate}
\end{theorem}
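As a simple sanity check of \eqref{lquevol} (an illustration only, in the particular case of an incompressible drift), if $\nabla \cdot V=0$ a.e. in $\Omega$, then $(\nabla \cdot V)^-=0$ and the bound reduces, for every $q\in [1,\infty],$ to
$$ \Vert u(t)\Vert_q \leq \Vert u_0\Vert_q + \int_0^T \Vert f(s)\Vert_q \: ds , $$
which is the usual $L^q-$estimate for the porous medium equation without drift; the exponential factor in $M_q$ only accounts for the possible compression $(\nabla \cdot V)^-$ of the velocity field.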
\begin{corollary}\label{cpositif}
For any $0\leq u_0\in L^2(\Omega) $ and $0\leq f\in L^2(Q),$ the problem \eqref{pmef} has a unique nonnegative solution $u$.
\end{corollary}
\begin{remark}\label{RemboundaryCond1}
The assumption \eqref{HypV} is technical and is used in the proof of Theorem \ref{tcompcmef}. It is fulfilled for a large class of vector fields $V$, like for instance the case where $V$ is outward pointing in a neighborhood of the boundary. We think that this condition could be removed in favor of merely \eqref{HypV0}, for instance if the solution has a trace on the boundary (for instance if one works with $BV$ solutions). We discuss the technicality of this assumption in Remark \ref{RemboundaryCond2}, after the proof of Theorem \ref{tcompcmef}.
\end{remark}
\subsection{Incompressible limit results}
As we said in the introduction, as $m\to\infty,$ the problem \eqref{pmef} converges formally to so called Hele-Shaw problem
\begin{equation} \label{evolhs0}
\left\{
\begin{array}{ll}\left.
\begin{array}{l}
\displaystyle \frac{\partial u }{\partial t} -\Delta p + \nabla \cdot (u\: V) =f \\
\displaystyle u \in \sign(p)\end{array}\right\}
\quad & \hbox{ in } Q \\ \\
\displaystyle p= 0 & \hbox{ on }\Sigma
\end{array} \right. \end{equation}
Existence, $L^1-$comparison and uniqueness of weak solutions for the problem \eqref{evolhs0}, with mixed boundary conditions, have been studied recently in \cite{Igshuniq} (see also \cite{IgshuniqR}) under the assumption \eqref{HypV}. Thanks to \cite{Igshuniq}, we know that for any $f\in L^2(Q)$ and $u_0\in L^\infty(\Omega),$ such that $\vert u_0\vert \leq 1,$ a.e. in $\Omega,$ \eqref{evolhs0} has a unique weak solution (see the following theorem for the precise sense) satisfying $u(0)=u_0.$
To prove rigorously the convergence of $u_m$ to the solution of \eqref{evolhs0}, we assume moreover that $V$ satisfies the following assumption : there exists $h_0>0,$ such that for any $0< h< h_0,$ we have
\begin{equation} \label{HypsupportV} V(x)\cdot \nu(\pi(x)) \geq 0,\quad \hbox{ for a.e. } x\in D_h. \end{equation}
This may also be written as $V(x)\cdot \nabla d(x,\partial\Omega) \leq 0,$ for any $x\in \Omega $ such that $d(x,\partial\Omega) < h_0.$ Note that the condition \eqref{HypsupportV} clearly implies \eqref{HypV}. In fact, with \eqref{HypsupportV}, we are assuming that $V$ is outpointing along the paths given by the distance function in a neighborhood of $\partial \Omega.$ As we will see, this assumption can be weakened into an outpointing vector field condition along given arbitrary paths in the neighborhood of $\partial \Omega$ (cf. Remark \ref{Rbvcond}). Nevertheless, a control of the outpointing orientation of $V$ in the neighborhood of the boundary seems to be important in order to handle the oscillation of $u_m$ and establish the $BV_{loc}$-estimate.
\begin{theorem}\label{treglimum} Under the assumption \eqref{HypsupportV}, for each $m=1,2,...,$ we consider $u_{0m}\in L^2(\Omega),$ $f_m\in L^2(Q)$ and let $u_m$ be the corresponding solution of \eqref{pmef}.
If, as $m\to\infty,$ $ f_{ m} \to f $ in $L^1(Q),$ $u_{0m} \to u_0 $ in $L^1(\Omega), $ and $\vert u_{0}\vert \leq 1,$ then
\begin{equation}
u_m\to u,\quad \hbox{ in }\mathcal{C}([0,T);L^1(\Omega)),
\end{equation}
\begin{equation}
u_m^m\to p,\quad \hbox{ in }L^2([0,T);H^1_0(\Omega))\hbox{-weak},
\end{equation}
and $(u,p)$ is the unique solution of \eqref{evolhs0} satisfying $u(0)=u_0.$ That is $ u \in \mathcal{C}([0,T),L^1(\Omega)) ,$ $u(0)=u_0$, $u\in \sign(p) $ a.e. in $Q,$ and
\begin{equation}\label{weakformhs}
\frac{d}{dt} \int_\Omega u\: \xi+ \int_\Omega \nabla p\cdot \nabla \xi\: dx -\int_\Omega u\: V\cdot \nabla \xi\: dx =\int_\Omega\: f\: \xi\: dx,\quad \hbox{ in }\mathcal{D}'([0,T)), \hbox{ for any }\xi\in H^1_0(\Omega).
\end{equation} \end{theorem}
The results for the case where $f$ is given by a reaction term $g(.,u)$ are developed in Section \ref{Sreaction}.
\begin{remark} Thanks to \cite{Igshuniq}, we can deduce that $u,$ the limit of $u_m,$ satisfies the following : \begin{enumerate}
\item If there exists $\omega_2 \in W^{1,1}(0,T)$ (resp. $ \omega_1 \in W^{1,1}(0,T)$) such that $ u_0\leq \omega_2(0)$ (resp. $\omega_1(0)\leq u_0 $) and, for any $t\in (0,T),$
\begin{equation}\label{sup}
\dot \omega_2(t)+ \omega_2(t)\nabla \cdot V \geq f(t,.) \quad \hbox{ a.e. in }\Omega
\end{equation}
(resp. $\dot \omega_1(t)+ \omega_1(t)\nabla \cdot V \leq f(t,.),$ a.e. in $\Omega), $
then we have
\begin{equation}
u\leq \omega_2 \quad (\hbox{resp. }\omega_1\leq u)\quad \hbox{ a.e. in }Q.
\end{equation}
\item If $f$ and $V$ satisfies
\begin{equation}
0\leq f \leq \nabla \cdot V , \hbox{ a.e. in } Q
\end{equation}
then $p\equiv 0,$ and $u$ is the unique solution of the reaction-transport equation
\begin{equation}
\label{cmep0}
\left\{ \begin{array}{ll}
\left. \begin{array}{l}
\displaystyle \frac{\partial u }{\partial t} +\nabla \cdot (u \: V)= f\\
0\leq u \leq 1
\end{array}\right\} \quad & \hbox{ in } Q \\
\displaystyle u \: V\cdot \nu = 0 & \hbox{ on }\Sigma_N \\
\displaystyle u (0)=u _0 &\hbox{ in }\Omega,\end{array} \right.
\end{equation}
in the sense that $u \in \mathcal{C}([0,T),L^1(\Omega)),$ $0\leq u \leq 1$ a.e. in $Q$ and
\begin{equation}
\label{evolwg}
\displaystyle \frac{d}{dt}\int_\Omega u \:\xi- \int_\Omega u \:V \cdot \nabla\xi = \int_\Omega f\: \xi , \quad \hbox{ in }{\mathcal{D}}'(0,T), \quad \forall \: \xi\in H^1_0(\Omega).
\end{equation} \end{enumerate}
\end{remark}
\begin{remark}\label{remV}
One sees that the assumption \eqref{HypsupportV} is fulfilled for instance in the following cases :
\begin{enumerate}
\item $V$ satisfies \eqref{HypV0}, and there exists $h_0>0$ such that, for any $0<h<h_0,$ we have
\begin{equation}
V(x)=V(\pi(x)),\quad \hbox{ for a.e. }x\in D_h.
\end{equation}
Indeed, for a.e. $x\in D_h$ we then have $ V(x)\cdot \nu(\pi(x))= V(\pi(x))\cdot \nu(\pi(x)), $ which is nonnegative by the assumption \eqref{HypV0}.
\item $V$ compactly supported ; i.e. $V$ vanishes in a neighborhood of the boundary $\partial \Omega.$
\end{enumerate}
\end{remark}
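For a concrete illustration of \eqref{HypsupportV} (a simple example given only for the reader's convenience), take again $\Omega=B_1(0)$ and $V(x)=x.$ For $x\in D_h$ with $0<h<1,$ we have $\pi(x)=x/\vert x\vert$ and $\nu(\pi(x))=x/\vert x\vert,$ so that
$$ V(x)\cdot \nu(\pi(x)) = \vert x\vert \geq 1-h >0 ; $$
equivalently, $V(x)\cdot \nabla d(x,\partial \Omega)=-\vert x\vert \leq 0$ near the boundary. Hence \eqref{HypsupportV} holds with any $0<h_0<1.$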
\begin{lemma}
Under the assumption \eqref{HypsupportV}, there exists $h_0>0,$ such that
for any $0<h<h_0,$ there exists $0\leq \omega_h\in \mathcal C^2(\Omega_h)$ compactly supported in $\Omega,$ such that $\omega_h \equiv 1$ in $ \Omega_h$ and
\begin{equation} \label{HypsupportV2}
\ \int_{\Omega\setminus \Omega_h} \varphi\: V\cdot \nabla \omega_h \: dx \leq 0,\quad \hbox{ for any }0\leq \varphi\in L^1(\Omega).
\end{equation}
\end{lemma}
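Although the rigorous construction of $\omega_h$ requires a smoothing argument, the computation behind \eqref{HypsupportV2} is, heuristically, the one already made for $\xi_h$: disregarding the smoothness and compact support requirements, one has $\nabla \xi_h=-\nu_h$ in $D_h,$ so that, for any $0\leq \varphi\in L^1(\Omega),$
$$ \int_{\Omega\setminus \Omega_h} \varphi\: V\cdot \nabla \xi_h \: dx = -\frac{1}{h}\int_{D_h} \varphi\: V\cdot \nu(\pi(x)) \: dx \leq 0 , $$
by the assumption \eqref{HypsupportV}.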
\subsection{Plan of the paper}
The next section is devoted to the proof of the $L^1-$comparison principle for weak solutions of \eqref{pmef}. To this aim, we use doubling and de-doubling variable techniques. This enables us to deduce the uniqueness and to set the study of the equation in the framework of $L^1-$nonlinear semi-group theory.
Section 3 concerns the study of existence of a solution. To set the problem in the framework of nonlinear semi-group theory, we begin with the stationary problem in order to operate the Euler-implicit discretization and construct an $\varepsilon-$approximate solution $u_\varepsilon.$ Then, using mainly a Crandall-Liggett type theorem, together with $L^2(\Omega)$ and $H^1_0(\Omega)$ estimates on $u_\varepsilon$ and $u_\varepsilon^m$ respectively, we pass to the limit as $\varepsilon\to 0$ to build $u_m,$ the solution of the evolution problem \eqref{pmef}. Section 4 is devoted to the study of the limit as $m\to\infty.$ Using the outpointing vector field condition \eqref{HypsupportV}, we first study the limit for the stationary problem, connecting it to the Hele-Shaw flow with linear drift. To this aim, we establish new $BV_{loc}$ estimates for weak solutions in a bounded domain. Then, using regular perturbation results for nonlinear semi-groups, we establish the convergence results for the evolution problem. Section 6 is devoted to the study of the limit of the solutions $u$ and $u^m$ in the presence of a reaction term with linear drift. We prove the convergence of the reaction-diffusion problem to a Hele-Shaw flow with linear drift.
At last, in Section 7 (Appendix), we provide for the unfamiliar reader a short recap on the main tools from $L^1-$nonlinear semi-group theory.
\section{$L^1-$comparison principle and uniqueness proofs}
\setcounter{equation}{0}
As usual for parabolic-hyperbolic and elliptic-hyperbolic problems, the main tool to prove the uniqueness is doubling and de-doubling variables. To this aim, we prove first that a weak solution satisfies the following version of entropic inequality :
We assume throughout this section that $V\in W^{1,2}(\Omega)$ , $(\nabla \cdot V)\in L^\infty(\Omega) $ and $V$ satisfies the outpointing condition \eqref{HypV0}.
\begin{proposition}\label{pentropic}
Let $f\in L^1(Q)$ and $u$ be a weak solution of \eqref{pmef}. Then, for any $k\in \RR$, and $0\leq \xi\in H^1_0(\Omega)\cap L^\infty(\Omega),$ we have
\begin{equation}\label{entropic+}
\begin{array}{c}
\frac{d}{dt}\int_\Omega (u-k)^+ \xi\: dx + \int_\Omega (\nabla ({u^m}-k^m)^+ - (u-k)^+ V) \cdot \nabla \xi \: dx \\
+\int_\Omega ( k \: \nabla \cdot V -f ) \: \xi\: \spo(u-k) \: dx \leq -\limsup_{\varepsilon\to 0 }\frac{1}{\varepsilon}\int_{[0\leq u^m-k^m \leq \varepsilon]} \vert \nabla u^m\vert^2\: \xi\: dx , \end{array}
\end{equation} and
\begin{equation} \label{entropic-}
\begin{array}{c}
\frac{d}{dt}\int_\Omega (k-u)^+ \xi\: dx + \int_\Omega ( \nabla (k^m-{u^m})^+ - (k-u)^+ V) \cdot \nabla \xi \: dx \\
+\int_\Omega (f - k \: \nabla \cdot V ) \: \xi\: \spo(k-u) \: dx \leq -\limsup_{\varepsilon\to 0 }\frac{1}{\varepsilon}\int_{[0\leq k^m-u^m \leq \varepsilon]} \vert \nabla u^m\vert^2\: \xi\: dx, \end{array}
\end{equation}
in $ \mathcal{D}'(0,T) .$ \end{proposition}
\begin{proof} We extend $u$ onto $\displaystyle \RR\times\Omega$ by
$0$ for any $\displaystyle t\not\in (0,T).$ Then, for any $h>0$ and nonnegative $\xi \in H^1_0(\Omega)$ and $\psi\in \mathcal{D}(0,T),$ we consider
$$\displaystyle \Phi^h (t,x)= \xi(x)\: \frac{1}{h}\int_t^{t+h} \hepsp (u^m(s,x))\: \psi(s)\: ds ,\quad \hbox{ for a.e. }x\in \Omega, $$
where we extend $\psi$ onto $\RR$ by $0$, and $\hepsp$ is given by
\begin{equation} \label{hepsp}
\hepsp(r)= \min \left( \frac{(r-k^m)^+}{\varepsilon}, 1 \right),\quad \hbox{ for any }r\in \RR,
\end{equation}
for arbitrary $\varepsilon >0.$ It is clear that $\Phi^h \in W^{1,2} \Big( 0,T;H^1_0(\Omega) \Big)\cap L^\infty(Q)$ is an admissible test function for the weak formulation, so that
\begin{equation}\label{evolh0}
- \int\!\!\!\int _Q u \: \partial_t \Phi^h \: dtdx + \int\!\!\!\int _Q (\nabla u^m - V\: u ) \cdot \nabla \Phi^h\: dtdx = \int\!\!\!\int _Q f\: \Phi^h\: dtdx.
\end{equation}
See that
\begin{equation} \label{rel0}
\begin{array}{ll}
\int\!\!\!\int _Q u \: \partial_t \Phi^h \: dtdx&= \int\!\!\!\int _Q
\psi(t)\: \hepsp(u^m(t))\: \frac{ u(t-h)-u(t)}{h} \: \xi \: dtdx\\
&\leq \frac{1}{h} \int\!\!\!\int _Q
\psi(t)\: \left( \int_{u(t)}^{u(t-h)}\hepsp(r^m)\: d r\right) \: \xi\ \: dtdx \\
&\leq \frac{1}{h} \int\!\!\!\int _Q \left( \int_{k}^{u(t)} \hepsp (r^m)dr \right) \: ( \psi(t+h ) -\psi(t) ) \xi\: \: dtdx .
\end{array}
\end{equation} Letting $h\to 0,$ we have \begin{equation} \limsup_{h\to 0} \int\!\!\!\int _Q u \: \partial_t \Phi^h \: dtdx\leq \int\!\!\!\int _Q \left( \int_{k}^{u(t)} \hepsp (r^m)dr \right ) \: \partial_t\psi \: \xi\: dtdx. \end{equation}
So, by letting $h\to 0$ in \eqref{evolh0}, we get
\begin{equation}\label{inegproof1}
\begin{array}{c}
- \int\!\!\!\int _Q
\left\{ \Big( \int_{k}^{u(t)} \hepsp (r^m)dr \Big ) \: \partial_t\psi \: \xi + \psi\: \nabla u^m \cdot \nabla\xi \hepsp(u^m)\xi -
\hepsp(u^m) (u-k) \: V\cdot \nabla \xi \right\} \: dtdx \\
\leq \int\!\!\!\int _Q \Big\{ \psi\: (f +k \: \nabla \cdot V)\: \hepsp(u^m)\xi
+ \psi\: \xi (u-k) \: V\cdot \nabla \hepsp(u^m) \Big\} \: dtdx \\
- \frac{1}{\varepsilon}\int\!\!\!\int _{[0\leq u^m-k^m\leq \varepsilon]} \vert \nabla u^m\vert^2 \: \xi \: dtdx ,
\end{array} \end{equation} where we use the fact that $ \nabla u^m \cdot\nabla \hepsp(u^m)= \frac{1}{\varepsilon}\vert \nabla u^m\vert^2 \: \chi_{[0\leq u^m-k^m\leq \varepsilon]} $ a.e. in $Q.$ Setting $$\Psi_\varepsilon := \frac{1}{\varepsilon} \int_{\min(u^m,k^m)}^{\min(u^m,k^m+\varepsilon)} (r^{1/m}-k) \: dr,$$ we see that $$ (u-k)\: {\hepsp}'(u^m) \: \nabla u^m= \nabla \Psi_\varepsilon.$$
This implies that the last term of \eqref{inegproof1} satisfies \begin{eqnarray}
\int\!\!\!\int _Q \psi\: \xi (u-k) \: V\cdot \nabla \hepsp(u^m) &=& \int\!\!\!\int _Q \psi\: \xi (u-k)\: {\hepsp}'(u^m) \: V\cdot \nabla u^m \\ &=& \int\!\!\!\int _Q \psi\:\xi \: V\cdot \nabla \Psi_\varepsilon\: dx
\\ &=& - \int\!\!\!\int _Q \psi\: \nabla\cdot( \xi \: V) \: \Psi_\varepsilon \: dx \\ \\
& \to& 0,\quad \hbox{ as }\varepsilon\to 0.
\end{eqnarray} See also that, by using Lebesgue’s dominated convergence Theorem, we have \begin{equation}
\limsup_{\varepsilon\to 0} \int\!\!\!\int _Q
\left( \int_{k}^{u(t)} \hepsp (r^m)dr \right ) \: \partial_t\psi \: \xi= \int\!\!\!\int _Q
(u(t) -k)^+ \: \partial_t\psi \: \xi. \end{equation}
Then, letting $\varepsilon \to 0$ in \eqref{inegproof1} and using the fact that $\sign_0^+(u^m-k^m)= \sign_0^+(u-k)$, for any $k\in \RR,$ we get \eqref{entropic+}. As to \eqref{entropic-}, it follows by using the fact that $-u$ is also a solution
of \eqref{pmef} with $f$ replaced by $-f,$ and applying \eqref{entropic+} to $-u$. \end{proof}
\begin{proposition}[Kato's inequality]\label{PKato}
If $u_1$ and $u_2$ satisfy \eqref{entropic+} and \eqref{entropic-} corresponding to $f_1\in L^1(Q)$ and $f_2\in L^1(Q)$ respectively, then
\begin{equation}\label{ineqkato}
\begin{array}{c}
\partial_ t ( u_1-u_2 )^+
- \Delta (u_1^m-u_2^m )^+ + \nabla \cdot \left(
( u_1-u_2 )^+ \: V \right) \leq ( f_1-f_2 )\: \sign^+_0(u_1-u_2) \,
\hbox{ in }\mathcal{D}'(Q).
\end{array}
\end{equation} \end{proposition}
\begin{proof}
The proof of this proposition is based on doubling and de-doubling variable techniques. Let us briefly give the arguments here. To double the variables, we first use the fact that $ u_1=u_1(t,x) $ satisfies \eqref{entropic+} with $k=u_2(s,y) :$ we have
\begin{equation}
\begin{array}{c} \frac{d}{dt} \int (u_1(t,x)-u_2(s,y) )^+ \:
\zeta \: dx + \int \Big(\nabla_x (u_1^m (t,x)-u_2^m(s,y))^+ - ( u_1(t,x)-u_2(s,y) )^+ \: V(x)\Big) \cdot \nabla_x\zeta \: dx \\ + \int_\Omega \nabla_x \cdot V \: u_2(s,y) \zeta \spo (u_1(t,x)-u_2(s,y) ) \: dx \leq \int f_1(t,x) \spo (u_1(t,x)-u_2(s,y) ) \: \zeta\: dx \\
-\limsup_{\varepsilon\to 0 }\frac{1}{\varepsilon}\int_{[0\leq u_1^m-u_2^m \leq \varepsilon]} \vert \nabla_x u_1^m(t,x)\vert^2\: \zeta\: dx . \end{array}
\end{equation} Integrating with respect to $y,$ we get
\begin{equation}
\begin{array}{c} \frac{d}{dt} \int\!\! \int (u_1(t,x)-u_2(s,y) )^+ \:
\zeta + \int\!\! \int \Big(\nabla_x (u_1^m (t,x)-u_2^m(s,y))^+ - ( u_1(t,x)-u_2(s,y) )^+ \: V(x)\Big) \cdot \nabla_x\zeta \\ + \int\!\! \int \nabla_x \cdot V \: u_2(s,y) \zeta \spo (u_1(t,x)-u_2(s,y) ) \leq \int\!\! \int f_1(t,x) \spo (u_1(t,x)-u_2(s,y) ) \: \zeta \\
-\limsup_{\varepsilon\to 0 }\frac{1}{\varepsilon} \int\!\! \int _{[0\leq u_1^m-u_2^m \leq \varepsilon]} \vert \nabla_x u_1^m(t,x)\vert^2\: \zeta . \end{array} \end{equation}
See that
\begin{equation}
\begin{array}{c}
\int\!\! \int \nabla_y (u_1^m (t,x)-u_2^m(s,y))^+ \cdot \nabla_x\zeta \: dxdy = - \lim_{\varepsilon\to 0} \int\!\! \int \nabla_y u_2^m(s,y) \cdot \nabla_x\zeta \: H_\varepsilon (u_1^m (t,x)-u_2^m(s,y)) \: dxdy \\ \\
= - \lim_{\varepsilon\to 0} \frac{1}{\varepsilon} \int\!\! \int _{[0\leq u_1^m-u_2^m \leq \varepsilon]} \nabla_x u_1^m (t,x) \cdot \nabla_y u_2^m(s,y) \: \zeta \: dxdy ,
\end{array}
\end{equation}
so that, denoting by
$$u(t,s,x,y)= u_1(t,x) -u_2(s,y) ,\quad \hbox{ and } \quad p(t,s,x,y)= u_1^m (t,x)-u_2^m(s,y) ,$$
we obtain
\begin{equation}\label{eq00}
\begin{array}{c} \frac{d}{dt} \int\!\! \int u(t,s,x,y)^+ \:
\zeta \: dxdy + \int\!\! \int\Big\{ (\nabla_x+\nabla_y) p(t,s,x,y) - u(t,s,x,y) ^+ \: V(x)\Big\} \cdot \nabla_x\zeta \: dxdy \\ + \int\!\! \int \nabla_x \cdot V \: u_2(s,y) \: \zeta\: \spo (u(t,s,x,y)) \: dxdy \leq \int\!\! \int f_1(t,x) \spo (u(t,s,x,y)) \: \zeta \: dxdy \\
- \lim_{\varepsilon\to 0} \int\!\! \int \nabla_x u_1^m (t,x) \cdot \nabla_y u_2^m(s,y) \: \zeta \: H'_\varepsilon(u_1^m (t,x)-u_2^m(s,y)) \: dxdy \\
-\limsup_{\varepsilon\to 0 }\frac{1}{\varepsilon} \int\!\! \int _{[0\leq u_1^m-u_2^m \leq \varepsilon]} \vert \nabla_x u_1^m(t,x)\vert^2\: \zeta \: dxdy . \end{array}
\end{equation}
On the other hand, using the fact that $ u_2=u_2(s,y)$ satisfies \eqref{entropic-} with $k=u_1(t,x),$ we have
\begin{equation}
\begin{array}{c}
\frac{d}{ds} \int u(t,s,x,y)^+ \:
\zeta \: dy + \int \Big( \nabla_y p(t,s,x,y) - u(t,s,x,y) ^+ \: V(y) \Big) \cdot \nabla_y\zeta \\ - \int_\Omega \nabla_y \cdot V \: u_1(t,x) \zeta \spo (u(t,s,x,y)) \: dy \leq - \int f_2(s,y) \spo ( u(t,s,x,y) ) \: \zeta \: dy \\ -\limsup_{\varepsilon\to 0 }\frac{1}{\varepsilon} \int _{[0\leq u_1^m-u_2^m \leq \varepsilon]} \vert \nabla_y u_2^m(s,y)\vert^2\: \zeta \: dy .\end{array}
\end{equation}
Working in the same way for \eqref{eq00}, we get
\begin{equation}
\begin{array}{c}
\frac{d}{ds} \int\!\! \int u(t,s,x,y)^+ \:
\zeta \: dxdy + \int\!\! \int \Big\{ (\nabla_x+\nabla_y) p(t,s,x,y) - u(t,s,x,y) ^+ \: V(y) \Big\} \cdot \nabla_y\zeta \: dxdy \\ - \int\!\! \int \nabla_y \cdot V(y) \: u_1(t,x) \: \zeta\: \spo ( u(t,s,x,y) ) \: dxdy \leq - \int\!\! \int f_2(s,y) \spo (u(t,s,x,y) ) \: \zeta \: dxdy \\
- \lim_{\varepsilon\to 0} \frac{1}{\varepsilon} \int\!\! \int _{[0\leq u_1^m-u_2^m \leq \varepsilon]} \nabla_x u_1^m (t,x) \cdot \nabla_y u_2^m(s,y) \: \zeta \: dxdy \\ -\limsup_{\varepsilon\to 0 }\frac{1}{\varepsilon} \int\!\! \int _{[0\leq u_1^m-u_2^m \leq \varepsilon]} \vert \nabla_y u_2^m(s,y)\vert^2\: \zeta \: dy \: dxdy . \end{array}
\end{equation}
Adding both inequalities, and using the fact that
\begin{equation}
-\left(\vert \nabla_x u_1^m(t,x)\vert ^2 +\vert \nabla_y u_2^m(s,y)\vert ^2 +2\: \nabla_x u_1^m (t,x) \cdot \nabla_y u_2^m(s,y)\right)\: \chi_{[0\leq u_1^m-u_2^m \leq \varepsilon]} \leq 0 , \hbox{ a.e. in } Q^2,
\end{equation}
we obtain
\begin{equation}
\begin{array}{c} \left( \frac{d}{dt} + \frac{d}{ds} \right) \int\!\! \int u(t,s,x,y) ^+ \:
\zeta \: dxdy+ \int\!\! \int (\nabla_x+\nabla_y) p(t,s,x,y) \cdot (\nabla_x +\nabla_y)\zeta \: dxdy \\ - \int\!\! \int u(t,s,x,y) ^+ \: ( V(x) \cdot \nabla_x\zeta +V(y) \cdot \nabla_y\zeta) \: dxdy \\ + \int\!\! \int \left( \nabla_x \cdot V(x) \: u_2(s,y) - \nabla_y \cdot V(y) \: u_1(t,x) \right) \: \zeta\: \spo (u(t,s,x,y) ) \: dxdy \\
\leq \int\!\! \int ( f_1(t,x) - f_2(s,y) ) \spo ( u(t,s,x,y) ) \: \zeta \: dxdy \end{array}
\end{equation} and then,
\begin{equation}\label{formdoubling1}
\begin{array}{c} \left( \frac{d}{dt} + \frac{d}{ds} \right) \int\!\! \int u(t,s,x,y) ^+ \:
\zeta \: dxdy + \int\!\! \int (\nabla_x+\nabla_y) p(t,s,x,y) \cdot (\nabla_x +\nabla_y)\zeta \: dxdy \\ - \int\!\! \int u(t,s,x,y) ^+ \: V(x) \cdot ( \nabla_x\zeta + \nabla_y\zeta ) \: dxdy + \int\!\! \int u(t,s,x,y) ^+ \: ( V(x)-V(y) ) \cdot \nabla_y\zeta \: dxdy \\ + \int\!\! \int \left( \nabla_x \cdot V(x) \: u_2(s,y) - \nabla_y \cdot V(y) \: u_1(t,x) \right) \: \zeta\: \spo (u(t,s,x,y) ) \: dxdy \\
\leq \int\!\! \int ( f_1(t,x) - f_2(s,y) ) \spo ( u(t,s,x,y) ) \: \zeta \: dxdy . \end{array} \end{equation}
Now, we can de-double the variables $t$ and $s,$ as well as $x$ and $y,$ by taking as usual the sequence of test functions
\begin{equation}
\psi_\varepsilon(t,s) = \psi\left( \frac{t+s}{2}\right) \rho_\varepsilon \left( \frac{t-s}{2}\right) \hbox{ and }
\zeta_\lambda (x,y) = \xi\left( \frac{x+y}{2}\right) \delta_\lambda \left( \frac{x-y}{2}\right),
\end{equation}
for any $t,s\in (0,T)$ and $x,y\in \Omega.$ Here $\psi\in \mathcal{D}(0,T),$ $\xi\in \mathcal{D}(\Omega),$ $\rho_\varepsilon$ and $\delta_\lambda$ are sequences of usual mollifiers in $\RR$ and $\RR^N$ respectively.
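For instance (one admissible choice among many), one may take
$$ \rho_\varepsilon(r)=\frac{1}{\varepsilon}\,\rho\Big( \frac{r}{\varepsilon}\Big) \quad \hbox{ and } \quad \delta_\lambda(z)=\frac{1}{\lambda^N}\,\delta\Big( \frac{z}{\lambda}\Big), $$
with $0\leq \rho\in \mathcal{D}(\RR),$ $\int_\RR \rho=1,$ $\mathrm{supp}\, \rho\subset (-1,1),$ and $0\leq \delta\in \mathcal{D}(\RR^N),$ $\int_{\RR^N} \delta=1,$ $\mathrm{supp}\, \delta\subset B_1(0).$ For $\varepsilon$ and $\lambda$ small enough (depending on the supports of $\psi$ and $\xi$), $\psi_\varepsilon$ and $\zeta_\lambda$ are then admissible test functions, supported respectively in $(0,T)^2$ and $\Omega\times\Omega.$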
See that
$$ \left( \frac{d}{dt} + \frac{d}{ds} \right) \psi_\varepsilon(t,s) = \rho_\varepsilon\left( \frac{t-s}{2}\right) \dot \psi \left( \frac{t+s}{2}\right) $$
and
$$(\nabla_x+ \nabla_y) \zeta_\lambda (x,y) = \delta_\lambda \left( \frac{x-y}{2}\right) \nabla \xi\left( \frac{x+y}{2}\right) $$
Moreover, for any $h\in L^1((0,T)^2\times \Omega^2)$ and $\Phi\in L^1((0,T)^2\times \Omega^2)^N,$ we know that
\begin{itemize}
\item $\lim_{\lambda \to 0 }\lim_{\varepsilon \to 0 } \int_0^T\!\! \int_0^T\!\! \int_\Omega\!\! \int_\Omega h(t,s,x,y)\:\zeta_\lambda(x,y)\: \psi_\varepsilon(t,s) = \int_0^T\!\! \int_\Omega h(t,t,x,x)\: \xi(x)\: \psi(t) .$
\item $\lim_{\lambda \to 0 }\lim_{\varepsilon \to 0 } \int_0^T\!\! \int_0^T\!\! \int_\Omega\!\! \int_\Omega h(t,s,x,y)\:\zeta_\lambda(x,y)\: \left(\frac{d}{dt} + \frac{d}{ds}\right)\psi_\varepsilon(t,s) = \int_0^T\!\! \int_\Omega h(t,t,x,x)\: \xi(x)\: \dot \psi(t) .$
\item $\lim_{\lambda \to 0 }\lim_{\varepsilon \to 0 } \int_0^T\!\! \int_0^T\!\! \int_\Omega\!\! \int_\Omega \Phi(t,s,x,y) \cdot (\nabla_x + \nabla_y) \zeta_\lambda(x,y) \: \psi_\varepsilon(t,s) = \int_0^T\!\!\int_\Omega \Phi(t,t,x,x)\cdot \nabla \xi(x)\: \psi(t)\: dtdx .$
\end{itemize} Moreover, we also know that \begin{equation}
\lim_{\lambda \to 0 } \int_0^T\!\! \int_\Omega\!\! \int_\Omega h(t,x,y) \: (V(x) -V(y)) \cdot \nabla_y \zeta_\lambda(x,y) \: \psi(t)\: dtdxdy \end{equation} \begin{equation} \label{inqtech1}
= \int_0^T\!\!\int_\Omega h(t,x,x) \: \nabla \cdot V(x) \: \xi(x)\: \psi(t)\: dtdx . \end{equation} The proof of this result is by now more or less well known (one can see for instance a detailed proof in \cite{Igshuniq}).
So replacing $\zeta$ and $\psi$ in \eqref{formdoubling1} by $\zeta_\lambda$ and $\psi_\varepsilon$ resp., and, letting $\varepsilon,\: \lambda \to 0$, we get
\begin{equation}\label{de-double }
\begin{array}{c}
\int_0^T\!\! \int_\Omega \Big\{ - (u_1-u_2 )^+ \:
\xi \: \dot \psi + \nabla (p_1-p_2)^+ \cdot \nabla \xi\: \psi
- (u_1-u_2 )^+ \: V \cdot \nabla \xi \: \psi \Big\} \: dtdx \\ \leq \int_0^T\!\! \int_\Omega (f_1-f_2)\: \sign^+_0(u_1-u_2) \: \xi \: \psi \: dtdx . \end{array}
\end{equation}
Thus
\begin{equation}\label{dedouble}
\begin{array}{c}
\frac{d}{dt} \int (u_1-u_2 )^+ \:
\xi \: dx + \int \nabla (u_1^m (t,x)-u_2^m(t,x))^+ \cdot \nabla \xi \: dx
- \int (u_1-u_2 )^+ \: V \cdot \nabla \xi \: dx \\ \leq \int \kappa (x) (f_1-f_2) \: \xi \: dx .
\end{array}
\end{equation} Thus the result of the proposition. \end{proof}
The aim now is to proceed with the sequence of test functions $\xi_h$ given by \eqref{xih} in Kato's inequality and let $h\to 0,$ to recover \eqref{evolineqcomp}.
\begin{proof}[\textbf{Proof of Theorem \ref{tcompcmef}}] Let $(u_1,p_1)$ and $(u_2,p_2)$ be two couples in $L^\infty(Q) \times L^2 \left(0,T;H^1_0(\Omega)\right) $ satisfying \eqref{entropic+} and \eqref{entropic-} corresponding to $f_1\in L^1(Q)$ and $f_2\in L^1(Q)$ respectively. To prove \eqref{evolineqcomp}, we see that
\begin{eqnarray}
& \frac{d}{dt} \int_\Omega ( u_1-u_2 )^+ \: dx - \int (f_1-f_2) \: \sign_0^+(u_1-u_2)\: dx \\ & = \lim_{h\to 0} \: \: \underbrace{ \frac{d}{dt} \int_\Omega
( u_1-u_2 )^+ \, \xi_h \: dx -\int (f_1-f_2)\: \sign_0^+(u_1-u_2)\: \xi_h \: dx
}_{I(h) } ,
\end{eqnarray}
in the sense of distributions in $[0,T).$ Taking $\xi_h$ as a test function in \eqref{ineqkato} and using \eqref{h10prop}, we have
\begin{equation}\label{inqxih} \begin{array}{ll}
I(h)
& \leq - \int \left( \nabla ( u_1^m -u_2^m)^+ -
( u_1-u_2 )^+ \: V \right) \cdot\nabla \xi_h \: dx \\
& \leq \int
( u_1-u_2 )^+ \: V \cdot\nabla \xi_h \: dx .
\end{array}
\end{equation} Then, using the outpointing velocity vector field assumption \eqref{HypV}, we get
\begin{equation}\label{limintegral}
\begin{array}{ll}
\lim_{h\to 0}I(h) &\leq -\lim_{h\to 0} \int
( u_1-u_2 )^+ \: V \cdot \nu_h(x) \: dx\\ \\
&\leq 0.
\end{array}
\end{equation}
Thus \eqref{evolineqcomp}. The rest of the theorem is a straightforward consequence of \eqref{evolineqcomp}. \end{proof}
\begin{remark}\label{RemboundaryCond2}
See that \eqref{limintegral} is the only step of the proof of Theorem \ref{tcompcmef} where we use assumption \eqref{HypV}. Proceeding in this way enables us to avoid all the technicalities related to
doubling and de-doubling variables à la Carillo (\cite{Ca}) by using test functions which do not vanish on the boundary.
We do believe that the result of Theorem \ref{tcompcmef} remains true under the general assumption \eqref{HypV0}. One sees also that, if the solutions have a trace (like $BV$ solutions), one can weaken this condition by handling \eqref{limintegral} differently. \end{remark}
\section{Main estimates and existence proofs} \setcounter{equation}{0}
\subsection{Stationary problem} \label{Sapprox}
To prove Theorem \ref{texistevolm}, we consider the stationary problem associated with Euler-implicit discretization of \eqref{pmef}. That is
\begin{equation} \label{st}
\left\{ \begin{array}{ll}
\displaystyle v -\lambda\Delta v^m + \lambda \nabla \cdot (v \: V)=f
\quad & \hbox{ in } \Omega\\
\displaystyle v= 0 & \hbox{ on } \partial \Omega,\end{array} \right.
\end{equation}
where $\displaystyle f \in L^2(\Omega)$ and $\lambda>0$. Following Definition \ref{defws}, a function $v \in L^1(\Omega)$ is said to be a weak solution of \eqref{st} if $v^m\in H^1_0(\Omega)$ and
\begin{equation}\label{stwf}
\displaystyle \int_\Omega v\:\xi+ \lambda \int_\Omega \nabla v^m \cdot \nabla\xi -
\lambda \int_\Omega v \: V\cdot \nabla \xi = \int_\Omega f\: \xi, \quad \hbox{ for all } \xi\in H^1_0(\Omega).
\end{equation}
\begin{theorem} \label{texistm}
Assume $V\in W^{1,2}(\Omega)$ and $(\nabla \cdot V)^-\in L^\infty(\Omega) $. For $f\in L^2(\Omega)$ and $\lambda$ satisfying
\begin{equation}\label{condlambda1}
0<\lambda < \lambda_0:= 1/\Vert (\nabla\cdot V)^-\Vert_\infty ,
\end{equation}
the problem \eqref{stwf} has a solution $v$ that we denote by $v_m.$ Moreover,
for any $1\leq q\leq \infty,$
we have
\begin{equation} \label{lqstat}
\Vert v_m\Vert_q\leq
\left\{\begin{array}{ll}
\Big( 1- (q-1) \lambda /\lambda_0 \Big)^{-1}\Vert f\Vert_q, \quad &\hbox{ if }1\leq q<\infty \\ \\
\Big( 1- \lambda/\lambda_0 \Big) ^{-1} \Vert f\Vert_\infty, &\hbox{ if } q=\infty
\end{array}\right.
\end{equation}
and
\begin{equation}\label{lmst}
\Big( 1-\lambda/\lambda_0 \Big ) \int \vert v_m\vert ^{m+1} \: dx + \lambda \int \vert \nabla v_m^m \vert^2\: dx \leq \int f\: v_m^m \: dx .
\end{equation}
\end{theorem}
Moreover, thanks to Theorem \ref{tcompcmef}, we have
\begin{corollary}\label{ccomp}
Under the assumption of Theorem \ref{texistm}, if moreover $V$ satisfies the outpointing condition \eqref{HypV}, the problem \eqref{st} has a unique solution. Moreover, if $v _1$ and $v _2$ are two solutions associated with $f_1\in L^1(\Omega)$ and $f_2\in L^1(\Omega) $ respectively, then
\begin{equation}
\label{ineqcomp}
\Vert (v_1-v_2)^+\Vert_{1} \leq \Vert (f_1-f_2)^+\Vert _{1}
\end{equation}
and
$$ \Vert v_1-v_2\Vert_{1} \leq \Vert f_1-f_2\Vert _{1}.$$
\end{corollary}
\begin{proof}
This is a simple consequence of the fact that if $v$ (which is independent of $t$) is a solution of \eqref{st}, then it can be regarded as a time-independent solution of the evolution problem \eqref{pmef} with $f$ replaced by $(f-v)/\lambda$ (which is also independent of $t$). \end{proof}
To prove Theorem \ref{texistm}, we proceed by regularization and compactness. For each $\varepsilon >0,$ we consider $\beta_\varepsilon $ a regular Lipschitz continuous function strictly increasing satisfying $\beta_\varepsilon(0)=0$ and, as $\varepsilon\to0,$
$$\beta_\varepsilon(r)\to r^{1/m},\quad \hbox{ for any }r\in \RR. $$
One can take, for instance, $\beta_\varepsilon$ the regularization by convolution of the application $r\in \RR\to r^{1/m}.$ Then, we consider the problem
\begin{equation} \label{pstbeta}
\left\{ \begin{array}{ll}\left.
\begin{array}{l}
\displaystyle v - \lambda\Delta p + \lambda \nabla \cdot (v \: V)=f \\
\displaystyle v =\beta_\varepsilon (p) \end{array}\right\}
\quad & \hbox{ in } \Omega\\ \\
\displaystyle p= 0 & \hbox{ on } \partial \Omega .\end{array} \right.
\end{equation}
\begin{lemma}\label{lexistreg}
For any $f\in L^2(\Omega)$ and $\varepsilon>0,$ the problem \eqref{pstbeta} has a solution $v_\varepsilon ,$ in the sense that $v_\varepsilon \in L^2(\Omega),$ $p_\varepsilon:=\beta_\varepsilon^{-1} (v_\varepsilon) \in H^1_0(\Omega),$ and
\begin{equation}
\label{weakeps}
\int v_\varepsilon \: \xi \: dx + \lambda \int \nabla p_\varepsilon \cdot \nabla \xi \: dx -\lambda\int v_\varepsilon \: V\cdot \nabla \xi \: dx =\int f\: \xi\: dx, \end{equation}
for any $\xi\in H^1_0(\Omega).$ Moreover, for any $\lambda$ satisfying \eqref{condlambda1}
the solution $v_\varepsilon$ satisfies the estimates
\begin{equation} \label{lqstateps}
\Vert v_\varepsilon \Vert_q\leq
\left\{\begin{array}{ll}
\Big( 1- (q-1) \lambda /\lambda_0 \Big)^{-1}\Vert f\Vert_q, \quad &\hbox{ if }1\leq q<\infty \\ \\
\Big( 1- \lambda/\lambda_0 \Big) ^{-1} \Vert f\Vert_\infty, &\hbox{ if } q=\infty
\end{array}\right.
\end{equation} and
\begin{equation} \label{lmsteps}
\Big( 1-\lambda/\lambda_0 \Big ) \int v_\varepsilon\: p_\varepsilon \: dx + \lambda \int \vert \nabla p_\varepsilon \vert^2\: dx \leq \int f\: p_\varepsilon \: dx .
\end{equation}
\end{lemma}
\begin{proof}
We can assume without loss of generality throughout the proof that $\lambda=1$ and drop the subscript $\varepsilon$ in the notations of $(v_\varepsilon,p_\varepsilon)$ and $\beta_\varepsilon$. We consider $H^{-1} (\Omega) $ the usual topological dual space of $H^{1}_0(\Omega)$ and $ \langle .,. \rangle$ the associated duality bracket. See that the operator $A\: :\: H^{1}_0(\Omega)\to H^{-1} (\Omega) ,$ given by
$$ \langle A p,\xi\rangle = \int\beta (p)\: \xi\: dx + \int \nabla p\cdot \nabla \xi \: dx -\int\beta (p) \: V\cdot \nabla \xi \: dx,\quad \hbox{ for any }\xi,\: p\in H^1_0(\Omega), $$
is a bounded weakly continuous operator. Moreover, $A$ is coercive. Indeed, for any $p\in H^{1}_0(\Omega),$ we have
\begin{eqnarray}
\langle A p,p\rangle &=& \int\beta (p)\: p \: dx + \int \vert \nabla p\vert^2 \: dx -\int\beta (p) \: V\cdot \nabla p \: dx \\
&=& \int\beta (p)\: p \: dx + \int \vert \nabla p\vert^2 \: dx -\int V\cdot \nabla\left( \int_0^p \beta (r) dr\right) \: dx \\
&=& \int\beta (p)\: p \: dx + \int \vert \nabla p\vert^2 \: dx +\int \nabla \cdot V\: \left( \int_0^p \beta (r) dr\right) \: dx \\
&\geq& \int\beta (p)\: p \: dx + \int \vert \nabla p\vert^2 \: dx - \int {(\nabla \cdot V)^{-}} \: p \beta (p) \: dx \\
&\geq& \frac{1}{2} \int\beta (p)\: p \: dx + \int \vert \nabla p\vert^2 \: dx - \frac{1}{2}\int {(\nabla \cdot V)^{-}}^2 \: dx\\
&\geq& \int \vert \nabla p\vert^2 \: dx - \frac{1}{2}\int {(\nabla \cdot V)^{-}}^2 \: dx ,
\end{eqnarray}
where we use Young inequality. So, for any $f\in H^{-1} (\Omega) $ the problem $A p=f$ has a solution $p \in H^1_0(\Omega).$
Now, for each $1<q<\infty,$ taking $ v ^{q-1} $ as a test function, and using the fact that
$$v\nabla ( v ^{q-1} )= \frac{q-1}{q}\: \nabla \vert v\vert^q ,\quad \hbox{ a.e. in }\Omega $$
and
$$ \nabla p\cdot \nabla ( v ^{q-1} ) \geq 0, $$
we get
\begin{eqnarray}
\int \vert v\vert^q \: d x
&\leq& \int f\: v ^{q-1} \: dx + \lambda \frac{q-1}{q} \int V\cdot \nabla \vert v\vert ^q \: dx
\\
&\leq& \int f v ^{q-1 } \: dx - \lambda \frac{q-1}{q} \int \nabla \cdot V\: \vert v\vert ^q \: dx \\
&\leq& \int f v ^{q-1 } \: dx + \lambda \frac{q-1}{q} \int \left( \nabla \cdot V\right)^- \: \vert v\vert ^q \: dx \\
&\leq& \frac{1}{q} \int \vert f\vert^q \: dx + \frac{q-1}{q} \int \vert v\vert^q \: dx + \lambda \frac{q-1}{q} \left\Vert \left( \nabla \cdot V\right)^- \right\Vert_\infty \int \: \vert v\vert ^q \: dx,
\end{eqnarray}
where we use again Young inequality. This implies that
\begin{equation}
\left( 1-\lambda (q-1) \: \Vert (\nabla \cdot V)^-\Vert_\infty \right) \int \vert v\vert ^q\: dx \leq \int \vert f\vert ^q\: dx.
\end{equation}
Thus \eqref{lqstateps} for $1<q<\infty.$
To prove \eqref{lmsteps}, we take $p$ as a test function and obtain
\begin{eqnarray}
\lambda \int \vert \nabla p\vert^2\: dx &=& \int fp\: dx -\int vp\: dx +\lambda \int \beta (p)V\cdot \nabla p\: dx \\
&=& \int fp\: dx -\int vp\: dx +\lambda \int V\cdot \nabla \left( \int_0^p \beta (r) dr\right) \: dx \\
&=& \int fp\: dx -\int vp\: dx -\lambda \int \nabla\cdot V \: \left( \int_0^p \beta (r) dr\right) \: dx \\
&\leq& \int fp\: dx -\int vp\: dx + \lambda \int (\nabla \cdot V)^- \: \int_0^ p\beta (r)dr \: dx
\\
&\leq& \int fp\: dx -\int vp\: dx +\lambda\: \Vert (\nabla \cdot V)^- \Vert_\infty \: \int vp \: dx
\end{eqnarray}
where we use the fact that $\int_0^ p\beta (r)dr\leq p\beta (p) =vp $.
Thus \eqref{lmsteps}. For the case $q\in\{ 1,\: \infty\}$ in \eqref{lqstateps}, we take $H_\sigma (v-k)\in H^1_0(\Omega),$ for a given $k\geq 0$ and $\sigma>0,$ as a test function in \eqref{pstbeta}, where
\begin{equation} \label{Hsigma}
H_\sigma (r) = \left\{ \begin{array}{ll}
1 \quad & \hbox{ if } r\geq \sigma \\
r/\sigma & \hbox{ if }\vert r\vert <\sigma \\
-1 &\hbox{ if }r\leq -\sigma \: . \end{array}\right.
\end{equation}
Then, letting $\sigma \to 0$ and using the fact that $ \nabla p\cdot \nabla H_\sigma (v-k)\geq 0$ a.e. in $\Omega,$ it is not difficult to see that
\begin{equation}
\begin{array}{lll}
\int (v-k)^+ \: dx & \leq& \int (f- k(1+\lambda \: \nabla \cdot V)) \:\sign_0^+(v-k) \: dx+\lambda \lim_{\sigma \to 0}
\int (v-k)\: V\cdot \nabla H_\sigma (v-k) \: dx \\
&\leq& \int (f- k(1+\lambda \: \nabla \cdot V)) \:\sign_0^+(v-k) \: dx,
\end{array} \end{equation}
where we use the fact that
$\lim_{\sigma \to 0}
\int (v-k)\: V\cdot \nabla H_\sigma (v-k) \: dx=\lim_{\sigma \to 0} \frac{1}{\sigma}
\int_{0\leq v-k\leq \sigma} (v-k)\: V\cdot \nabla (v-k) \: \: dx= 0.$ Thus
$$ \int (v-k)^+ \: dx \leq \int (f- k(1-\lambda \Vert (\nabla \cdot V)^-\Vert_\infty)) ^+ \: dx .$$ In particular, taking
\begin{equation}
k=\frac{\Vert f\Vert_\infty}{1-\lambda/\lambda_0},
\end{equation}
we deduce that $v\leq \frac{\Vert f\Vert_\infty}{1-\lambda /\lambda_0}.$ Working in the same way with $H_\sigma (-v+k)$ as a test function, we obtain
$$ v\geq - \frac{\Vert f\Vert_\infty}{1-\lambda/\lambda_0}. $$
Thus the result of the lemma for $q=\infty.$ The case $q=1$ follows by Corollary \ref{ccomp}.
\end{proof}
\begin{lemma}\label{lconveps}
Under the assumption of Theorem \ref{texistm}, by taking a subsequence $\varepsilon\to 0$ if necessary, we have
\begin{equation}\label{weakueps}
v_\varepsilon\to v,\quad \hbox{ in }L^2(\Omega)\hbox{-weak}
\end{equation}
and
\begin{equation}\label{strongpeps}
p_\varepsilon\to v^m,\quad \hbox{ in }H^1_0(\Omega).
\end{equation}
Moreover, $v$ is a weak solution of \eqref{st}.
\end{lemma}
\begin{proof} Using Lemma \ref{lexistreg} as well as Young and Poincar\'e inequalities, we see that the sequences $v_\varepsilon$ and $p_\varepsilon $ are bounded in $L^2(\Omega)$ and $H^1_0(\Omega),$ respectively. So, there exists a subsequence that we denote again by $v_\varepsilon$ and $p_\varepsilon $ such that \eqref{weakueps} is fulfilled and
\begin{equation}\label{weakpeps}
p_\varepsilon\to v^m,\quad \hbox{ in }H^1_0(\Omega)\hbox{-weak}.
\end{equation}
Letting $\varepsilon\to 0$ in \eqref{weakeps}, we obtain that $v$ is a weak solution of \eqref{st}. Let us prove that actually \eqref{weakpeps} holds to be true strongly in $H^1_0(\Omega).$ Indeed, taking $p_\varepsilon$ as a test function, we have
\begin{equation}
\begin{array}{ll}
\lambda \int \vert \nabla p_\varepsilon\vert^2\: dx &= \int (f- v_\varepsilon )\: p_\varepsilon\: dx +\lambda \int V \cdot \nabla \left(\int_0^{p_\varepsilon} \beta_\varepsilon(r)dr\right) \: dx\\
&= \int (f- v_\varepsilon )\: p_\varepsilon\: dx -\lambda \int \nabla \cdot V \: \int_0^{p_\varepsilon} \beta_\varepsilon(r)dr \: dx .
\end{array}
\end{equation}
Since $\int_0^r \beta_\varepsilon(s)\: ds$ converges to $\int_0^r \beta(s)\: ds,$ for any $r\in \RR,$ $p_\varepsilon\to v^m$ a.e. in $\Omega$ and $\left\vert \int_0^{p_\varepsilon} \beta_\varepsilon(s)\: ds\right\vert \leq v_\varepsilon\: p_\varepsilon$ which is bounded in $L^1(\Omega)$ by \eqref{lmsteps}, we have
$$ \int_0^{p_\varepsilon} \beta_\varepsilon(s)\: ds \to \int_0^{p} s^{1/m}\: ds= \frac{m}{m+1} \vert v\vert^{m+1},\quad \hbox{ in } L^1(\Omega). $$
So, on the one hand we have
\begin{equation}
\begin{array}{ll} \lim_{\varepsilon\to 0}
\lambda \int \vert \nabla p_\varepsilon\vert^2\: dx &= \int (f- v )\: p\: dx -\lambda \frac{m}{m+1} \int \nabla \cdot V \: \vert v \vert^{m+1} \: dx .
\end{array}
\end{equation}
On the other hand, since $v$ is a weak solution of \eqref{st}, one sees easily that \begin{equation}
\lambda \int \vert \nabla p\vert^2\: dx = \int (f- v )\: p\: dx -\lambda \frac{m}{m+1} \int \nabla \cdot V \: \vert v\vert^{m+1} \: dx \: ; \end{equation} which implies that $ \lim_{\varepsilon\to 0}
\int \vert \nabla p_\varepsilon\vert^2\: dx= \int \vert \nabla p\vert^2\: dx. $ Combining this with the weak convergence of $\nabla p_\varepsilon,$ we deduce the strong convergence \eqref{strongpeps}.
\end{proof}
\begin{remark}
One sees in the proof that the results of Lemma \ref{lconveps} remain true if one replaces $f$ in \eqref{pstbeta} by a sequence $f_\varepsilon\in L^2(\Omega)$ and assumes that, as $\varepsilon\to 0,$
$$f_\varepsilon \to f,\quad \hbox{ in }L^2(\Omega). $$
\end{remark}
\begin{proof}[\textbf{Proof of Theorem \ref{texistm}}]
The proof follows from Lemma \ref{lconveps}. Moreover, the estimates follow by letting $\varepsilon\to 0$ in the estimates \eqref{lqstateps} and \eqref{lmsteps} for $v_\varepsilon$ and $p_\varepsilon.$
\end{proof}
\subsection{Existence for the evolution problem }
To study the evolution problem, we use the Euler-implicit discretization scheme. For any $n\in \NN^*$ such that $0<\varepsilon:=T/n\leq \varepsilon_0,$ we consider the sequence $(u_i,p_i)_{i=0,...,n}$ given by the $\varepsilon-$Euler implicit scheme associated with \eqref{pmef} : \begin{equation} \label{sti}
\left\{ \begin{array}{ll}\left.
\begin{array}{l}
\displaystyle u_{i+1} - \varepsilon \: \Delta p_{i+1} + \varepsilon \: \nabla \cdot (u_{i+1} \: V)=u_{i} +\varepsilon \: f_i \\
\displaystyle p_{i+1} =u_{i+1}^m \end{array}\right\}
\quad & \hbox{ in } \Omega\\ \\
\displaystyle p_{i+1} = 0 & \hbox{ on } \partial \Omega ,\end{array} \right. \quad i=0,1, ... n-1, \end{equation} where, $f_i$ is given by $$f_i = \frac{1}{\varepsilon } \int_{i\varepsilon}^{(i+1)\varepsilon } f(s)\: ds ,\quad \hbox{ a.e. in }\Omega,\quad i=0,... n-1. $$ Now, for a given $\varepsilon-$time discretization $t_i=i\varepsilon,$ $i=0,1,...n,$ we define the $\varepsilon-$approximate solution by \begin{equation}\label{epsapprox}
u_\varepsilon:= \sum_{i=0} ^{n-1 } u_i\chi_{[t_i,t_{i+1})},\quad \hbox{ and } \quad p_\varepsilon:= \sum_{i=1} ^{n -1} p_i\chi_{[t_i,t_{i+1})}. \end{equation}
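Let us point out (this is only a reformulation, using the operator $\mathcal{A}_m$ introduced just below) that each step of the scheme \eqref{sti} amounts to one resolvent iteration
$$ u_{i+1}=\big(I+\varepsilon\, \mathcal{A}_m\big)^{-1}\big(u_{i}+\varepsilon\, f_i\big),\qquad i=0,1,...,n-1, $$
which, for $\varepsilon$ small enough (say $\varepsilon<\lambda_0$), is well defined thanks to Theorem \ref{texistm} and single-valued thanks to Corollary \ref{ccomp}.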
In order to use the results of the previous section and the general theory of evolution problems governed by accretive operators (see for instance \cite{Benilan,Barbu}), we define the operator $\mathcal{A}_m$ in $L^1(\Omega),$ by $\mu\in \mathcal{A}_m(z)$ if and only if $\mu,\: z\in L^1(\Omega)$ and $z$ is a solution of the problem \begin{equation} \label{eqopm}
\left\{ \begin{array}{ll}
\displaystyle - \Delta z^m + \nabla \cdot (z \: V)=\mu
\quad & \hbox{ in } \Omega\\
\displaystyle z= 0 & \hbox{ on } \partial \Omega ,\end{array} \right. \end{equation} in the sense that $z\in L^2(\Omega),$ $ z^m \in H^1_0(\Omega)$ and \begin{equation}
\displaystyle \int_\Omega \nabla z^m \cdot \nabla\xi -
\int_\Omega z \: V\cdot \nabla \xi = \int_\Omega \mu \: \xi, \quad \forall \: \xi\in H^1_0(\Omega)\cap L^\infty(\Omega). \end{equation}
As a consequence of Theorem \ref{tcompcmef}, we see that the operator $\mathcal{A}_m$ is accretive in $L^1(\Omega)$ ; i.e. $(I+\lambda \mathcal{A}_m)^{-1}$ is a contraction in $L^1(\Omega),$ for small $\lambda>0$ (cf. Appendix section). Moreover, thanks to Theorem \ref{texistm}, we have
\begin{lemma}\label{lAm} For $0<\lambda<\lambda_0 ,$ $\overline{R(I+\lambda \mathcal{A}_m)}=L^1(\Omega)$ and $\overline {\mathcal{D}(\mathcal{A}_m)}=L^1(\Omega).$
\end{lemma} \begin{proof}
Since $R(I+\lambda \mathcal{A}_m)\supseteq L^2(\Omega)$ (by Theorem \ref{texistm}), it is clear that $\overline{R(I+\lambda \mathcal{A}_m)}=L^1(\Omega).$ To prove the density of ${\mathcal{D}(\mathcal{A}_m)}$ in $L^1(\Omega),$ we prove that $L^\infty(\Omega)\subseteq \overline {\mathcal{D}(\mathcal{A}_m)}.$ To this aim, for a given $v\in L^\infty(\Omega),$ we consider the sequence $(v_\varepsilon)_{\varepsilon >0}$ given by the solution of the problem
\begin{equation}
\left\{ \begin{array}{ll}
\displaystyle v_\varepsilon -\varepsilon\Delta v_\varepsilon^m + \varepsilon \nabla \cdot (v_\varepsilon \: V)=v
\quad & \hbox{ in } \Omega\\
\displaystyle v_\varepsilon= 0 & \hbox{ on } \partial \Omega.\end{array} \right.
\end{equation} Thanks to \eqref{lmst} and \eqref{lqstat} with $q=\infty,$ one sees easily by letting $\varepsilon\to 0$ that $v_\varepsilon \to v$ in $L^1(\Omega)$-weak and then $\Vert v\Vert_1 \leq \liminf_{\varepsilon\to 0 }\Vert v_\varepsilon\Vert_1.$ Moreover, thanks to \eqref{lqstat} with $q=1,$ we deduce that $\lim_{\varepsilon\to 0 }\Vert v_\varepsilon\Vert_1 = \Vert v\Vert_1.$ Thus $v_\varepsilon \to v$ in $L^1(\Omega)$. \end{proof}
So, thanks to the general theory of nonlinear semi-groups governed by accretive operators (see Appendix), for any $u_0\in L^1(\Omega),$ we have \begin{equation}\label{convueps}
u_\varepsilon \to u,\quad \hbox{ in } \mathcal{C}([0,T),L^1(\Omega)),\quad \hbox{ as }\varepsilon\to 0, \end{equation} where $u$ is the so-called ``mild solution'' of the evolution problem \begin{equation}\label{Cauchypb}
\left\{\begin{array}{ll}
u_t + \mathcal{A}_m u\ni f \quad & \hbox{ in }(0,T)\\ \\
u(0)=u_0.
\end{array} \right. \end{equation} To accomplish the proof of existence (of weak solution) for the problem \eqref{pmef}, we prove that the mild solution $u$ satisfies all the conditions of Definition \ref{defws}. More precisely, we prove the following result.
\begin{proposition}\label{pconv}
Assume $V\in W^{1,2}(\Omega),$ $(\nabla \cdot V)^-\in L^\infty(\Omega) $ and $V$ satisfies the outpointing condition \eqref{HypV1}. For any $u_0\in L^2(\Omega)$ and $f\in L^2(Q),$ the mild solution $u$ of the problem \eqref{Cauchypb} is the unique solution of \eqref{pmef}. \end{proposition}
To prove this result, thanks to \eqref{convueps}, it is enough to study in addition the limit of the sequence $p_\varepsilon$ given by the $\varepsilon-$approximate solution.
\begin{lemma} Let $u_\varepsilon$ and $p_\varepsilon$ be the $\varepsilon-$approximate solution given by \eqref{epsapprox}. We have
\begin{enumerate}
\item For any $q\in [1,\infty],$ we have
\begin{equation} \label{lquepsevol}
\Vert u_\varepsilon(t)\Vert_q \leq M_q^\varepsilon,\quad \hbox{ for any }t\geq 0,
\end{equation}
\begin{equation}
M_q^\varepsilon := \left\{ \begin{array}{ll}
\left( \Vert u_0 \Vert_q + \int_0^T \Vert f_\varepsilon (t)\Vert_q \: dt \right) e^{ (q-1)\: T\: \Vert (\nabla \cdot V)^-\Vert_\infty } \quad & \hbox{ if } 1\leq q<\infty \\ \\
\left( \Vert u_0 \Vert_\infty + \int_0^T \Vert f_\varepsilon (t)\Vert_\infty \: dt \right) e^{ T\: \Vert (\nabla \cdot V)^-\Vert_\infty } \quad & \hbox{ if } q=\infty .
\end{array}\right.
\end{equation}
\item For each $\varepsilon >0,$ we have
\begin{equation}\label{lmuepsevol}
\begin{array}{c}
\frac{1}{m+1} \int_\Omega \vert u_\varepsilon(t)\vert ^{m+1} + \int_0^t\!\! \int_\Omega \vert \nabla p_\varepsilon\vert^2 \leq \int_0^t\!\! \int_\Omega f_\varepsilon\:p_\varepsilon \: dx + \int_0^t\!\! \int \left( \nabla \cdot V\right)^- \: p_\varepsilon \: u_\varepsilon \: dx \\ \\ + \frac{1}{m+1} \int_\Omega \vert u_{0}\vert ^{m+1}.
\end{array}
\end{equation}
\end{enumerate} \end{lemma}
\begin{proof}
Thanks to Theorem \ref{texistm}, the sequence $ (u_i)_{i=1,...n}$ of solutions of \eqref{sti} is well defined in $L^2(\Omega) $ and satisfies
\begin{equation}\label{stwsi}
\displaystyle \int_\Omega u_{i+1} \:\xi+ \varepsilon \: \int_\Omega \nabla p_{i+1} \cdot \nabla\xi - \varepsilon \:
\int_\Omega u_{i+1} \: V\cdot \nabla \xi = \int_{i\varepsilon }^{(i+1)\varepsilon } \int_\Omega f_i\: \xi,\quad \hbox{ for }i=1,...,n-1,
\end{equation} for any $\xi\in H^1_0(\Omega).$ Thanks to \eqref{lqstat}, for any $1\leq q\leq \infty,$ we have \begin{eqnarray}
\Vert u_i\Vert_q &\leq& \Vert u_{i-1}\Vert_q + \varepsilon\: \Vert f_{i}\Vert_q + \varepsilon\: (q-1) \: \Vert (\nabla \cdot V)^-\Vert_\infty \Vert u_{i}\Vert_q .
\end{eqnarray} By induction, this implies that, for any $t\in [0,T),$ we have
\begin{equation}
\Vert u_\varepsilon(t)\Vert_q \leq \Vert u_0 \Vert_q + \int_0^T \Vert f_\varepsilon(t)\Vert_q \: dt + (q-1) \: \Vert (\nabla \cdot V)^-\Vert_\infty \int_0^T \Vert u_\varepsilon(t)\Vert_q\: dt .
\end{equation} Using Gronwall Lemma, we deduce \eqref{lquepsevol}, for any $1\leq q< \infty.$ The proof for the case $q=\infty$ follows in the same way by using \eqref{lqstat} with $q=\infty.$
Now, using the fact that
\begin{equation}
\left(u_i-u_{i-1} \right) p_i = \left(u_i-u_{i-1} \right) u_i^m \geq \frac{1}{m+1}\left (u_i^{m+1} -u_{i-1}^{m+1}\right)
\end{equation} and \begin{equation}
\int u_i \: V \cdot \nabla p_i \leq \int \left(\nabla \cdot V\right)^- p_i\: u_i , \end{equation} we get
\begin{eqnarray}
\frac{1}{m+1} \int_\Omega \vert u_i\vert ^{m+1} + \varepsilon \int_\Omega \vert \nabla p_i\vert^2 &\leq& \varepsilon \int_\Omega f_i\:p_i \: dx + \varepsilon \: \int \left( \nabla \cdot V\right)^- \: p_i \: u_i \: dx \\ & & + \frac{1}{m+1} \int_\Omega \vert u_{i-1}\vert ^{m+1}.
\end{eqnarray}
Summing these inequalities over $i$ and using the definition of $u_\varepsilon$, $p_\varepsilon$ and $f_\varepsilon,$ we get \eqref{lmuepsevol}. \end{proof}
\begin{proof}[\textbf{Proof of Proposition \ref{pconv}}] Recall that we already know that $u_\varepsilon\to u$ in $\mathcal{C}([0,T);L^1(\Omega)),$ as $\varepsilon\to 0.$ Now, combining \eqref{lquepsevol} and \eqref{lmuepsevol} with Poincaré and Young inequalities, one sees that \begin{equation}
\frac{1}{m+1} \frac{d}{dt} \int_\Omega \vert u_\varepsilon\vert ^{m+1} \: dx+ \int_\Omega \vert \nabla p_\varepsilon\vert^2\: dx \leq C (N,\Omega) \left( \int_\Omega \vert f_\varepsilon\vert^2\: dx + \Vert (\nabla \cdot V)^- \Vert_\infty\: (M_2^\varepsilon)^2 \right ),\quad \hbox{ in }\mathcal{D}'(0,T). \end{equation}
This implies that $p_\varepsilon$ is bounded in $L^2(0,T;H^1_0(\Omega)),$ and hence, up to a subsequence,
\begin{equation}\label{convpeps}
p_\varepsilon \to u^m,\quad \hbox{ in }L^2(0,T;H^1_0(\Omega))-\hbox{weak},\quad \hbox{ as }\varepsilon\to 0 .
\end{equation}
Recall that taking
$$\tilde u_\varepsilon(t) =\frac{(t-t_i)u_{i+1} - (t-t_{i+1})u_i}{\varepsilon} , \quad \hbox{ for any }t\in [t_i,t_{i+1}),\: i=0,...,n-1,$$
we have \begin{equation}\label{uepstilde}
\partial_t\tilde u_\varepsilon -\Delta p_\varepsilon +\nabla \cdot(u_\varepsilon\: V)=f_\varepsilon,\quad \hbox{ in }\mathcal{D}'(Q). \end{equation} Moreover, we know that $\tilde u_\varepsilon \to u,$ in $\mathcal{C}([0,T),L^1(\Omega)).$ So letting $\varepsilon\to 0 $ in \eqref{uepstilde}, we deduce that $u$ is a solution of \eqref{pmef}. Letting $\varepsilon \to 0$ in \eqref{lquepsevol} and \eqref{lmuepsevol}, we get respectively \eqref{lquevol} and \eqref{lmuevol}. \end{proof}
\begin{proof}[\textbf{Proof of Theorem \ref{texistevolm}}] The proof follows by Proposition \ref{pconv}.
\end{proof}
\section{The limit as $m\to\infty.$} \setcounter{equation}{0}
Since the solution of the problem \eqref{pmef} is the mild solution associated with the operator $\mathcal{A}_m,$ we begin by studying the $L^1-$limit, as $m\to\infty,$ of the solution of the stationary problem \eqref{st}. Formally, this limiting problem is given by \begin{equation} \label{sths}
\left\{ \begin{array}{ll}\left.
\begin{array}{l}
\displaystyle v - \Delta p + \nabla \cdot (v \: V)=f \\
\displaystyle v \in \hbox{Sign}(p)\end{array}\right\}
\quad & \hbox{ in } \Omega\\ \\
\displaystyle p= 0 & \hbox{ on } \partial \Omega .\end{array} \right. \end{equation} This is the stationary problem associated with the so called Hele-Shaw problem. Thanks to \cite{Igshaw}, for any $f\in L^2(\Omega),$ \eqref{sths} has a unique solution $(v,p)$ in the sense that $(v ,p) \in L^\infty(\Omega) \times H^1_0(\Omega)$, $\displaystyle v \in \sign(p)$ a.e. in $\Omega,$ and \begin{equation}\label{stwshs}
\displaystyle \int_\Omega v \:\xi+ \int_\Omega \nabla p \cdot \nabla\xi -
\int_\Omega v \: V\cdot \nabla \xi = \int_\Omega f\: \xi, \quad \hbox{ for any } \xi\in H^1_0(\Omega). \end{equation}
To begin with, we prove the following incompressible limit for stationary problem.
\begin{proposition}\label{tconvumWst}
Under the assumptions of Theorem \ref{texistm}, let $v_m$ be the solution of \eqref{st}. As $m\to\infty,$ we have
\begin{equation}\label{convum}
v_m\to v,\quad \hbox{ in } L^2(\Omega)\hbox{-weak},
\end{equation}
\begin{equation}\label{convpm}
v_m^m\to p,\quad \hbox{ in } H^1_0(\Omega),
\end{equation}
and $(v,p)$ is the solution of \eqref{sths}. \end{proposition}
\begin{proof} Thanks to \eqref{lqstat}, there exists $v\in L^2(\Omega),$ such that \eqref{convum} is fulfilled. Thanks to \eqref{lmst}, we see that the sequence $p_m:=v_m^m$ is bounded in $H^1_0(\Omega) ,$ which implies that, by taking a sub-sequence if necessary,
\begin{equation}\label{weakpm}
v_m^m\to p,\quad \hbox{ in } H^1_0(\Omega)\hbox{-weak}
\end{equation} and \begin{equation}
v_m^m\to p,\quad \hbox{ in } L^2(\Omega). \end{equation}
Using monotonicity arguments (see for instance Proposition 2.5 of \cite{Br}) we get $ v \in \sign(p)$ a.e. in $\Omega, $ and letting $m\to\infty$ in \eqref{stwf}, we obtain that $(v,p)$ satisfies \eqref{stwshs}. To prove the strong convergence of $p_m,$ we use the same argument as in the proof of Lemma \ref{lconveps}. Indeed, taking $p_m$ as a test function in \eqref{stwf}, we have
\begin{eqnarray}
\lambda \int \vert \nabla p_m\vert^2\: dx &=& \int (f- v_m )\: p_m\: dx +\lambda \int \nabla \cdot V \: \left(\int_0^{p_m} r^\frac{1}{m}dr\right) \: dx\\
&=& \int (f- v_m )\: p_m\: dx +\lambda \frac{m}{m+1} \int \nabla \cdot V \: v_m\: p_m\: dx .
\end{eqnarray}
Letting $m\to \infty,$ and using \eqref{convpm} and \eqref{weakpm}, we see that
\begin{eqnarray}
\lim_{m\to \infty}
\lambda \int \vert \nabla p_m\vert^2\: dx &=& \int (f- v )\: p\: dx + \lambda \int v\: p\: \nabla \cdot V \: dx \\
&=& \int (f- v )\: p\: dx + \lambda \int \nabla \cdot V \: \vert p\vert \: dx .
\end{eqnarray} We know that $(v,p)$ is a solution of \eqref{sths}, so one sees easily that \begin{equation}
\lambda \int \vert \nabla p\vert^2\: dx = \int (f- v )\: p\: dx + \lambda \int \nabla \cdot V \: \vert p\vert , \end{equation} so that $$ \lim_{m\to \infty} \lambda \int \vert \nabla p_m\vert^2\: dx = \lambda \int \vert \nabla p\vert^2\: dx.$$
Thus the strong convergence of $\nabla p_m$.
\end{proof}
For the strong convergence of $v_m,$ under the assumption \eqref{HypsupportV}, we prove first the following convergence for the solution of the stationary problem.
\begin{theorem}\label{tconvumSst}
Under the assumptions of Theorem \ref{tbvm} ; i.e. $V\in W^{1,2}(\Omega)$, $\nabla \cdot V\in L^\infty(\Omega) $ and $V$ satisfies \eqref{HypsupportV}, for any $0<\lambda < \lambda_1,$ the convergence \eqref{convum} holds strongly in $L^1(\Omega)$. Here
\begin{equation}
\lambda_1 := 1/\sum_{i,k} \Vert \partial_{x_i} V_k \Vert_\infty.
\end{equation}
\end{theorem}
\begin{corollary}\label{cconvam} Under the assumptions of Theorem \ref{tconvumSst}, the operator $\mathcal{A}_m$ converges to $\mathcal{A}$ in the sense of resolvent in $L^1(\Omega),$ where $\mathcal{A}$ is defined by : $\mu\in \mathcal{A}(z)$ if and only if $\mu,\: z\in L^1(\Omega)$ and $z$ is a solution of the problem
\begin{equation} \label{eqopinfty}
\left\{ \begin{array}{ll}
\displaystyle - \Delta p + \nabla \cdot (z \: V)=\mu
\quad & \hbox{ in } \Omega\\ \\
z\in \sign(p)\\ \\
\displaystyle p= 0 & \hbox{ on } \partial \Omega ,\end{array} \right.
\end{equation}
in the sense that $z\in L^\infty(\Omega)$ and there exists $p\in H^1_0(\Omega)$ such that $z\in \sign(p)$ a.e. in $\Omega$
and
\begin{equation}
\displaystyle \int_\Omega \nabla p \cdot \nabla\xi -
\int_\Omega z \: V\cdot \nabla \xi = \int_\Omega \mu \: \xi, \quad \forall \: \xi\in H^1_0(\Omega)\cap L^\infty(\Omega).
\end{equation} Moreover, we have $$\overline{\mathcal{D}(\mathcal{A})} =\Big\{ z\in L^\infty(\Omega)\: :\: \vert z\vert \leq 1\hbox{ a.e. in }\Omega \Big\}. $$ \end{corollary}
The main element to prove Theorem \ref{tconvumSst} is a $BV_{loc}$-estimate on $v_m.$ Recall that a given function $v\in L^1(\Omega)$ is said to be of bounded variation if and only if, for each $i=1,...,N,$ \begin{equation}
TV_i(v,\Omega) := \sup\left\{ \int_\Omega v\: \partial_{x_i} \xi \: dx \: :\: \xi\in \mathcal C^1_c(\Omega) \hbox{ and }\Vert \xi\Vert_\infty \leq 1\right\} <\infty, \end{equation} where $\mathcal{C}^1_c(\Omega)$ denotes the set of $\mathcal{C}^1-$functions compactly supported in $\Omega.$ More generally, a function is locally of bounded variation in a domain $\Omega$ if and only if, for any open set $\omega\subset\! \subset \Omega,$ $ TV_i(v,\omega)<\infty$ for any $i=1,...,N.$ In general, a function locally of bounded variation (as well as a function of bounded variation) in $\Omega$ may not be differentiable, but, by the Riesz representation theorem, its partial derivatives in the sense of distributions are Borel measures in $\Omega.$ This gives rise to the definition of the vector space of functions of bounded variation in $\Omega$, usually denoted by $BV(\Omega),$ as the set of $v\in L^1(\Omega)$ for which there are Radon measures $\mu_1,...,\mu_N$ with finite total mass in $\Omega$ such that \begin{equation}
\int_\Omega v\: \partial_{x_i} \xi \: dx =-\int_\Omega \xi\: d\mu_i,\quad \hbox{ for any }\xi\in \mathcal C_{c}(\Omega),\quad \hbox{ for }i=1,...,N. \end{equation} With a slight abuse of notation, we continue to denote the measures $\mu_i$ by $\partial_{x_i}v,$ and by $\vert \partial_{x_i}v\vert $ the total variation of $\mu_i.$ Moreover, as usual, we write $Dv=(\partial_{x_1}v,...,\partial_{x_N}v)$ for the vector-valued Radon measure representing the gradient of any function $v\in BV(\Omega),$ and $\vert Dv\vert$ for the total variation measure of $Dv.$ In particular, for any open set $\omega\subset\! \subset \Omega,$ $TV_i(v,\omega)=\vert \partial_{x_i}v\vert(\omega)<\infty,$ and the total variation of the function $v$ in $\omega$ is finite too; i.e. \begin{equation}
\Vert Dv\Vert(\omega)= \sup\left\{ \int_\omega v\: \nabla\cdot \xi \: dx \: :\: \xi\in \mathcal C^1_c(\omega)^N \hbox{ and }\Vert \xi\Vert_\infty \leq 1\right\} <\infty. \end{equation} Finally, let us recall the well-known compactness result for functions of bounded variation: given a sequence $v_n$ of functions in $BV_{loc}(\Omega)$ such that, for any open set $\omega\subset\! \subset \Omega,$ we have \begin{equation}
\sup_n\left\{ \int_\omega \vert v_n\vert\: dx + \vert Dv_n\vert (\omega) \right\} <\infty, \end{equation} there exists a subsequence, still denoted by $v_n,$ which converges in $L^1_{loc}(\Omega)$ to a function $v\in BV_{loc}(\Omega).$ Moreover, for any compactly supported continuous function $0\leq \xi$, the limit $v$ satisfies \begin{equation}
\int \xi\: \vert \partial_{x_i}v \vert \leq \liminf_{n\to\infty } \int \xi\: \vert \partial_{x_i}v_n \vert , \end{equation} for any $i=1,...N,$ and \begin{equation}
\int \xi\: \vert Dv\vert \leq \liminf_{n\to\infty } \int \xi\: \vert Dv_n\vert. \end{equation}
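To fix ideas numerically, the total variation of a one-dimensional function can be approximated by summing the absolute increments of its sampled values. The following Python sketch is purely illustrative and is not part of the analysis above; the step function and the uniform sampling grid are simplifying choices made only for the example.
\begin{verbatim}
import numpy as np

x = np.linspace(-1.0, 1.0, 2001)
v = np.where(x < 0.0, 0.0, 3.0)   # step of height 3 at the origin

# discrete total variation: sum of absolute increments of the sampled values
tv = np.sum(np.abs(np.diff(v)))
print(tv)   # prints 3.0, the size of the jump, i.e. |Dv|((-1,1))
\end{verbatim}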
Under the assumption \eqref{HypsupportV}, the following sequence of test functions plays an important role in the proof of $BV_{loc}$-estimates and convergence results of Theorem \ref{treglimum}.
\begin{lemma}\label{lomegah}
Under the assumption \eqref{HypsupportV}, there exists $0\leq \omega_h\in H^2(\Omega)$ compactly supported in $\Omega,$ such that $\omega_h \equiv 1$ in $ \Omega_h$ and
\begin{equation} \label{HypsupportV2}
\int_{\Omega\setminus \Omega_h} \varphi\: V\cdot \nabla \omega_h \: dx \geq 0,\quad \hbox{ for any }0\leq \varphi\in L^2(\Omega).
\end{equation} \end{lemma} \begin{proof} It is enough to take $\omega_h(x)=\eta_h(d(x,\partial \Omega)),$ for any $x\in \Omega,$ where $\eta_h\: :\: [0,\infty)\to \RR^+$ is a nondecreasing function vanishing in a neighbourhood of $0$ and such that $\eta_h \equiv 1$ in $[ h,\infty).$ In this case,
$$\nabla \omega_h =\eta_h'(d(.,\partial \Omega))\: \nabla d(.,\partial \Omega), $$
so that
\begin{eqnarray*}
\int_{\Omega\setminus \Omega_h} \varphi\: V\cdot \nabla \omega_h \: dx &=& \int_{\Omega\setminus \Omega_h} \eta_h'(d(.,\partial \Omega))\: \varphi\: V\cdot \nabla d(.,\partial \Omega) \: dx \\ \\
&\geq & 0,\quad \hbox{ for any }0\leq \varphi\in L^2(\Omega),
\end{eqnarray*}
where the last inequality follows from the assumption \eqref{HypsupportV}. Such a function $\eta_h$ may be defined, for instance, by
\begin{equation}
\eta_h(r)= \left\{ \begin{array}{ll}
0\quad &\hbox{ if } 0\leq r\leq c_1h\\ \\
e^{\frac{-C_h}{r^2-c_1^2h^2 }}\quad &\hbox{ if } c_1 h\leq r\leq c_2h\\ \\
1- e^{\frac{-C_h}{h^2 -r^2}} \quad &\hbox{ if } c_2h\leq r\leq h\\ \\
1\quad &\hbox{ if } h \leq r \end{array} \right.
\end{equation}
with $0<c_1<c_2<1$ and $C_h>0$ given such that $2c_2^2-c_1^2=1$ and $e^{\frac{-C_h}{M_h}} = 1-e^{\frac{-C_h}{M_h}},$ i.e. $C_h=M_h\ln 2,$
where $M_h:= (c_2^2-c_1^2)h^2 = (1-c_2^2)h^2.$ For instance one can take $c_1=1/2$ and $c_2= \sqrt{5}/(2\sqrt{2}),$ for which indeed $2c_2^2-c_1^2=2\cdot\tfrac{5}{8}-\tfrac{1}{4}=1.$ See that
\begin{equation}
\eta_h'(r)= \left\{ \begin{array}{ll}
0\quad &\hbox{ if } 0\leq r\leq c_1h\\ \\
\frac{2rC_h}{(r^2-c_1^2h^2)^2}e^{\frac{-C_h}{r^2-c_1^2h^2 }}\quad &\hbox{ if } c_1 h\leq r\leq c_2h\\ \\
\frac{2rC_h}{(h^2-r^2)^2} e^{\frac{-C_h}{h^2 -r^2}} \quad &\hbox{ if } c_2h\leq r\leq h\\ \\
0\quad &\hbox{ if } h \leq r \end{array} \right.
\end{equation} is continuous and differentiable at least on $\RR^+\setminus \{ c_2h\}.$ Thus $\omega_h\in H^2(\Omega).$
\end{proof}
We have
\begin{theorem} \label{tbvm}
Assume $f\in BV_{loc}(\Omega)\cap L^2(\Omega)$, $V\in W^{1,\infty}(\Omega)^N,$ $\nabla \cdot V\in W^{1,2 }_{loc}(\Omega) $ and let $v_m$ be the solution of \eqref{st}. Then, for any $0<\lambda <1/\lambda_1$, $v_m\in BV_{loc}(\Omega)$ and we have
\begin{equation} \label{bvstat}
\begin{array}{c}
(1-\lambda \lambda_1 ) \sum_{i=1}^N \int \omega_h \: d\: \vert \partial_{x_i}v\vert \leq \lambda \sum_{i=1}^N \int (\Delta \omega_h)^+ \: \vert \partial_{x_i} p \vert \: dx + \sum_{i=1}^N \int \omega_h \: d\: \vert \partial_{x_i} f\vert \\ + \lambda \sum_{i=1}^N \int \omega_h \: \vert v\vert \: \vert \partial_{x_i} ( \nabla \cdot V )\vert \: dx ,
\end{array}
\end{equation} where $\omega_h$ is given by Lemma \ref{lomegah}. \end{theorem}
To prove this result we use again the regularized problem \eqref{pstbeta} and we let $\varepsilon\to 0.$ To begin with, we first prove the following lemma concerning weak solutions of the general problem \begin{equation}\label{eqalpha}
\displaystyle v - \Delta \beta^{-1} (v)+ \nabla \cdot (v \: V)=f \hbox{ in } \Omega, \end{equation} where $\beta$ is a given nondecreasing function assumed to be regular (at least $\mathcal{C}^2$).
\begin{lemma} \label{lbvreg} Assume $f\in W^{1,2}_{loc}(\Omega)$, $V\in W^{1,2}_{loc}(\Omega)^N$, $\nabla \cdot V\in W^{1,\infty }_{loc}(\Omega) $ and $v\in H^1_{loc}(\Omega)\cap L^\infty_{loc}(\Omega)$ satisfy \eqref{eqalpha} in $\mathcal{D}'(\Omega).$ Then, for each $i=1,...,N,$ we have
\begin{equation}\label{bvestreg}
\begin{array}{c}
\vert \partial_{x_i} v \vert -
\sum_{k=1}^N \vert \partial_{x_k} v\vert \: \sum_{k=1}^N \vert \partial_{x_i} V_k \vert - \Delta \vert \partial_{x_i} \beta^{-1} (v) \vert + \nabla \cdot( \vert \partial_{x_i} v\vert \: V) \\ \leq \vert \partial_{x_i} f \vert
+ \vert v\vert \:\vert \partial_{x_i}(\nabla \cdot V) \vert \quad \hbox{ in } \mathcal{D}'(\Omega).
\end{array}
\end{equation} \end{lemma}
\begin{proof} Set $p:=\beta^{-1}(v).$ Thanks to \eqref{eqalpha} and the regularity of $f$ and $V,$ it is not difficult to see that $v,\: p\in H^2_{loc}(\Omega)\cap L^\infty_{loc}(\Omega),$ and for each $i=1,...N,$ the partial derivatives $\partial_{x_i}v$ and $\partial_{x_i} p$ satisfy the following equation
\begin{equation}\label{bvint1}
\partial_{x_i} v - \Delta \partial_{x_i} p + \nabla \cdot(\partial_{x_i}v\: V )= \partial_{x_i} f - ( \nabla v\cdot \partial_{x_i} V + v\: \partial_{x_i} (\nabla \cdot V)),\quad \hbox{ in } \mathcal{D}'(\Omega).
\end{equation}
By density, we can take $\xi H_\sigma(\partial_{x_i}v) $ as a test function in \eqref{bvint1} where $\xi \in H^2(\Omega)$ is compactly supported in $\Omega$ and $H_\sigma$ is given by \eqref{Hsigma}. We obtain
\begin{equation}\label{bvint2}
\begin{array}{cc} \int \Big( \partial_{x_i} v \: \xi H_\sigma(\partial_{x_i}v) +\nabla \partial_{x_i} p \cdot \nabla (\xi H_\sigma(\partial_{x_i}v)) \Big) \: dx - \int \partial_{x_i} v\: V \cdot \nabla (\xi H_\sigma (\partial_{x_i}v)) \: dx \\ \\ = \int \partial_{x_i} f\: \xi H_\sigma (\partial_{x_i}v) \: dx - \int ( \nabla v\cdot \partial_{x_i} V + v\: \partial_{x_i} (\nabla \cdot V) ) \: \xi H_\sigma (\partial_{x_i}v)\: dx .
\end{array} \end{equation}
To pass to the limit as $\sigma\to 0,$ we see first that
\begin{equation} \label{triv}
H_\sigma '(\partial_{x_i}v)\: \partial_{x_i}v= \frac{1}{\sigma} \: \partial_{x_i}v \: \chi_{[\vert \partial_{x_i} v\vert \leq \sigma ]}\: \to 0 ,\quad \hbox{ in }L^\infty(\Omega)\hbox{-weak}^*,\quad \hbox{ as }\sigma\to 0.
\end{equation}
So, the last term of the first part of \eqref{bvint2} satisfies
\begin{eqnarray}
\lim_{\sigma \to 0} \int \partial_{x_i} v\: V \cdot \nabla (\xi H_\sigma (\partial_{x_i}v)) \: dx &=& \int \vert \partial_{x_i} v\vert \: V \cdot \nabla \xi \: dx + \lim_{\sigma \to 0} \int \partial_{x_i}v \: \nabla \partial_{x_i}v \cdot V H_\sigma '(\partial_{x_i}v) \: \xi \: dx \\
&=& \int \vert \partial_{x_i} v\vert \: V \cdot \nabla \xi,
\end{eqnarray}
On the other hand, we see that
\begin{eqnarray}
\int \nabla \partial_{x_i} p \cdot \nabla (\xi H_\sigma (\partial_{x_i}v) ) \: dx &=&\int H_\sigma (\partial_{x_i}v) \nabla \partial_{x_i} p \cdot \nabla \xi \: dx + \int \xi\: \nabla \partial_{x_i} p \cdot \nabla H_\sigma (\partial_{x_i}v) \: dx .
\end{eqnarray}
Since $\sign_0 (\partial_{x_i}v)=\sign_0 (\partial_{x_i} p),$ the first term satisfies
\begin{equation}
\lim_{\sigma\to 0} \int H_\sigma (\partial_{x_i}v) \nabla \partial_{x_i} p \cdot \nabla \xi \: dx =- \int \vert \partial_{x_i} p\vert \: \Delta \xi \: dx.
\end{equation}
As to the second term, we have
\begin{eqnarray}
\lim_{\sigma\to 0} \int \xi\: \nabla \partial_{x_i} p \cdot \nabla H_\sigma (\partial_{x_i}v) \: dx &=& \lim_{\sigma\to 0} \int \xi\: H_\sigma '(\partial_{x_i}v)\: \nabla \partial_{x_i} p \cdot \nabla \partial_{x_i}v \: dx \\
&=& \lim_{\sigma\to 0} \int \xi\: H_\sigma '(\partial_{x_i}v)\: \nabla ((\beta^{-1})'(v)\partial_{x_i}v) \cdot \nabla \partial_{x_i}v \: dx\\
&=& \lim_{\sigma\to 0} \int \xi\: H_\sigma '(\partial_{x_i}v)\: (\beta^{-1})'(v) \: \vert \nabla \partial_{x_i}v\vert^2 \: dx \\ & & + \lim_{\sigma\to 0} \int \xi\: H_\sigma '(\partial_{x_i}v)\: \partial_{x_i}v \: (\beta^{-1})''(v) \nabla v \cdot \nabla \partial_{x_i}v \: dx \\
&\geq & \lim_{\sigma\to 0} \int \xi\: H_\sigma '(\partial_{x_i}v)\: \partial_{x_i}v \: (\beta^{-1})''(v) \nabla v \cdot \nabla \partial_{x_i}v \: dx \\ &\geq &0,
\end{eqnarray}
where we use again \eqref{triv}.
So, letting $\sigma\to 0$ in \eqref{bvint2} and using again the fact that $\sign_0 (\partial_{x_i}v)=\sign_0 (\partial_{x_i} p),$
we get
$$ \begin{array}{c}
\vert \partial_{x_i} v \vert - \Delta \vert \partial_{x_i} p \vert + \nabla \cdot( \vert \partial_{x_i} v\vert \: V) \leq \sign_0 (\partial_{x_i} v) \partial_{x_i} f - ( \nabla v\cdot \partial_{x_i} V \\ \\ + v\: \partial_{x_i}(\nabla \cdot V) ) \: \sign_0 (\partial_{x_i}v) \quad \hbox{ in } \mathcal{D}'(\Omega)
\end{array} $$
At last, using the fact that
$$\vert \nabla v\cdot \partial_{x_i} V\vert \leq \sum_{k} \vert \partial_{x_k} v\vert \: \sum_{k} \vert \partial_{x_i} V_k \vert ,$$
the result of the lemma follows. \end{proof}
\begin{proof}[\textbf{Proof of Theorem \ref{tbvm}}] Under the assumptions of Theorem \ref{tbvm}, for any $\varepsilon >0,$ let us consider a regularization $f_\varepsilon$ of $f$ satisfying $f_\varepsilon \to f$ in $L^1(\Omega)$ and
\begin{equation}
\int\xi\: \vert \partial_{x_i} f_\varepsilon\vert \: dx \to \int\xi\: d\: \vert \partial_{x_i} f\vert , \quad \hbox{ for any } \xi\in \mathcal{C}_c(\Omega) \hbox{ and } i=1,...N.
\end{equation}
Thanks to Lemma \ref{lexistreg}, let $v_\varepsilon$ be the solution of the problem \eqref{st}, where we replace $f$ by the regularization $f_\varepsilon$. Applying Lemma \ref{lbvreg}, with $V$ replaced by $\lambda V$ and $\beta^{-1}$ by $\lambda\: \beta_\varepsilon^{-1}$, we obtain
\begin{equation}
\begin{array}{c}
\int \vert \partial_{x_i}v_\varepsilon \vert \: \xi\: dx - \lambda \int \sum_{k=1}^N \vert \partial_{x_i} V_k \vert \: \sum_{k=1}^N \vert \partial_{x_k}v_\varepsilon\vert \: \xi\: dx
\leq \lambda \int \vert \partial_{x_i} p_\varepsilon \vert \: (\Delta \xi)^+\: dx +
\int \vert \partial_{x_i} f_\varepsilon \vert \: \xi \: dx \\ + \lambda \int \vert
v_\varepsilon\vert \: \vert \partial_{x_i} ( \nabla \cdot V ) \vert \: \xi \: dx - \lambda \: \int \vert \partial_{x_i}v_\varepsilon \vert\: V\cdot \nabla \: \xi\: dx ,\quad \hbox{ for any }i=1,...N \hbox{ and }0\leq \xi\in \mathcal{D}(\Omega).
\end{array}
\end{equation} By density, we can take $\xi=\omega_h$ as given by Lemma \ref{lomegah}; then, by \eqref{HypsupportV2}, $\int \vert \partial_{x_i}v_\varepsilon \vert\: V\cdot \nabla \omega_h\: dx\geq 0,$ so that the last term is nonpositive and can be dropped, and we have
\begin{equation}
\begin{array}{c}
\int \vert \partial_{x_i}v_\varepsilon \vert \: \omega_h\: dx - \lambda \int \sum_{k} \vert \partial_{x_i} V_k \vert \: \sum_{k} \vert \partial_{x_k}v_\varepsilon\vert \: \omega_h\: dx \leq \lambda \int \vert \partial_{x_i} p_\varepsilon \vert \: (\Delta \omega_h)^+\: dx \\ +
\int \vert \partial_{x_i} f_\varepsilon \vert \: \omega_h \: dx + \lambda \int \vert
v_\varepsilon\vert \: \vert \partial_{x_i} ( \nabla \cdot V ) \vert \: \omega_h \: dx ,\quad \hbox{ for any }i=1,...,N.
\end{array}
\end{equation}
Summing up, for $i=1,...N,$ and using the definition of $\lambda_1,$ we deduce that
\begin{equation}
\begin{array}{c}
\sum_{i} \int \vert \partial_{x_i} v_\varepsilon \vert \: \omega_h\: dx - \lambda \lambda_1 \sum_{k} \int \vert \partial_{x_k}v_\varepsilon\vert \: \omega_h\: dx \leq \lambda \sum_{i} \int (\Delta \omega_h)^+\: \vert \partial_{x_i} p_\varepsilon \vert \: dx \\ +
\int\sum_{i} \vert \partial_{x_i} f_\varepsilon \vert \: \omega_h \: dx + \lambda \: \int \vert v_\varepsilon\vert \sum_{i} \vert \partial_{x_i} (\nabla \cdot V ) \vert \: \omega_h \: dx ,
\end{array}
\end{equation}
and then the corresponding property \eqref{bvstat} follows for $v_\varepsilon.$
Thanks to \eqref{lqstat} and \eqref{lmst}, we know that $v_\varepsilon$ and $\partial_{x_i}p_\varepsilon$ are bounded in $L^2(\Omega).$ Together with the estimate above, this implies that, for any $\omega\subset\!\subset \Omega,$ $\sum_{i} \int_\omega \vert \partial_{x_i} v_\varepsilon \vert \: dx$ is bounded. So, $v_\varepsilon$ is bounded in $BV_{loc}(\Omega).$ Combining this with the $L^1-$bound \eqref{lqstat}, we deduce in particular that, taking a subsequence if necessary, the convergence in \eqref{weakueps} also holds in $L^1(\Omega),$ and then $v\in BV_{loc}(\Omega).$ At last, letting $\varepsilon\to 0$ in the above estimate and using moreover \eqref{strongpeps} and the lower semi-continuity of the variation measures $\vert \partial_{x_i} v_\varepsilon\vert$, we deduce
\eqref{bvstat} for the limit $v,$ which is the solution of the problem \eqref{st} by Lemma \ref{lconveps}. \end{proof}
\begin{proof}[\textbf{Proof of Theorem \ref{tconvumSst}}]
Recall that, under the assumptions of the theorem, the $BV_{loc}$ estimate \eqref{bvstat} is fulfilled for $v_m.$ Since the constant $C$ in \eqref{lmst} does not depend on $m,$ this implies that $v_m$ is bounded in $BV(\omega),$ for any open set $\omega\subset\!\subset\Omega.$ Since $\omega$ is arbitrary, we deduce in particular that the convergence in \eqref{convum} also holds strongly in $L^1(\Omega)$, $v\in BV_{loc}(\Omega),$ and
\eqref{bvstat} is fulfilled. \end{proof}
\begin{proof}[\textbf{Proof of Theorem \ref{treglimum}}] Thanks to Corollary \ref{cconvam} and Theorem \ref{trcontinuity} , we have \begin{equation}
u_m\to u,\quad \hbox{ in } \mathcal{C}([0,T);L^1(\Omega)). \end{equation} On the other hand, thanks to \eqref{lqstat} and \eqref{lmst}, it is clear that $p_m$ is bounded in $L^2(0,T;H^1_0(\Omega)).$ So, there exists $p\in L^2(0,T;H^1_0(\Omega)),$ such that, taking a subsequence if necessary, we have \begin{equation}
u_m^m\to p,\quad \hbox{ in }L^2(0,T;H^1_0(\Omega))-\hbox{weak}. \end{equation} Then using monotonicity arguments we have $u\in \sign(p)$ a.e. in $Q,$ and letting $m\to\infty,$ in the weak formulation we deduce that the couple $(u,p)$ satisfies \eqref{weakformhs}. Thus the results of the theorem.
\end{proof}
\begin{remark}\label{Rbvcond}
See that we use the condition \eqref{HypsupportV} in the proof of the $BV_{loc}$-estimate through the test function $\omega_h$ introduced in Lemma \ref{lomegah}. Indeed, the $BV_{loc}$ estimate follows from Lemma \ref{lbvreg} as soon as the term $ \lambda \: \int \vert \partial_{x_i}v_\varepsilon \vert\: V\cdot \nabla \: \xi\: dx$ is nonnegative.
Clearly, the construction of $\omega_h$ by using $d(.,\partial \Omega)$ is basically connected to the condition \eqref{HypsupportV}. Otherwise, this condition could equally be replaced by the existence of $0\leq \omega_h\in H^2(\Omega)$ compactly supported in $\Omega,$ such that $\omega_h \equiv 1$ in $ \Omega_h$ and
\begin{equation}
\ \int_{\Omega\setminus \Omega_h} \varphi\: V\cdot \nabla \omega_h \: dx \geq 0,\quad \hbox{ for any }0\leq \varphi\in L^2(\Omega).
\end{equation}
\end{remark}
\section{Reaction-diffusion case}\label{Sreaction}
Let us consider now the reaction-diffusion porous medium equation with linear drift \begin{equation} \label{pmeg}
\left\{ \begin{array}{ll}
\displaystyle \frac{\partial u }{\partial t} -\Delta u^m +\nabla \cdot (u \: V)=g(.,u) \quad & \hbox{ in } Q \ \\
\displaystyle u= 0 & \hbox{ on }\Sigma \\ \\
\displaystyle u (0)=u _0 &\hbox{ in }\Omega,\end{array} \right. \end{equation}
In view of Theorem \ref{abstractRevol} and Theorem \ref{trcontinuity}, we assume that $g\: :\: Q\times\RR\rightarrow\RR$ is a Carathéodory application; i.e. continuous in $r\in\RR$ and measurable in $(t,x)\in Q$, and satisfies moreover the following assumptions:
\begin{itemize}
\item [($\mathcal G_1)$] $g(.,r)\in L^2(Q) $ for any $r\in \RR.$
\item [($\mathcal G_2)$] There exists $0\leq \theta \in \mathcal{C}(\RR) ,$ such that
$$\frac{\partial g}{\partial r}(t,x,.) \leq \theta ,\quad \hbox{ in }\mathcal{D}'(\RR),\hbox{ for a.e. }(t,x)\in Q.$$
\item [($\mathcal G_3)$] There exist $\omega_1,\: \omega_2 \in W^{1,\infty}(0,T)$ such that $\omega_1(0)\leq u_0\leq \omega_2(0)$ a.e. in $\Omega$ and, for any $t\in (0,T),$
\begin{equation}\label{inf}
\dot \omega_1(t)+ \omega_1(t)\nabla \cdot V \leq g(t,.,\omega_1(t))\quad \hbox{ a.e. in }\Omega
\end{equation}
and
\begin{equation}\label{sup}
\dot \omega_2(t)+ \omega_2(t)\nabla \cdot V \geq g(t,.,\omega_2(t))\quad \hbox{ a.e. in }\Omega.
\end{equation} \end{itemize}
\begin{remark}\label{remg} \begin{enumerate}
\item One sees that $(\mathcal G_2)$ implies that $g(.,u)\in L^1(Q),$ for any $u\in L^\infty(Q).$ Indeed, setting $M=\int_0^{\Vert u\Vert_\infty}\theta (r)\: dr,$ we have
\begin{equation}\label{explicitcondg}
- g^-(.,M) - 2M\max_{[-M,M]}\theta \leq g(.,u(.))\leq g^+(.,-\Vert u\Vert_\infty) + 2M\max_{[-M,M]}\theta ,\quad \hbox{ a.e. in }Q.
\end{equation}
\item As we will see, the main purpose of the condition $(\mathcal G_3)$ is to provide some kind of a priori $L^\infty$ estimates. These conditions are fulfilled in many practical situations. For instance, they hold in the case where $g(.,r)\in L^\infty(Q),$ for any $r\in \RR,$ and there exists $\omega\geq 0 $ solution of the following autonomous ODE
\begin{equation}\label{inf1}
\dot \omega= \omega\Vert (\nabla \cdot V)^-\Vert_\infty + \Vert g(.,\omega)\Vert_\infty \quad \hbox{ in }(0,T),\quad \hbox{ and } \omega(0)=\Vert u_0\Vert_\infty. \end{equation}
Actually, in this case it is enough to take $\omega_2(t)=-\omega_1(t)=\omega(t),$ for any $t\in [0,T).$ This is fulfilled for instance in the case where the application $r\in \RR\to \Vert g(.,r)\Vert_\infty $ is locally Lipschitz and $ \Vert g(.,0)\Vert_\infty =0.$ However, one needs to be careful with the choice of $T$ so that it fits within the maximal existence time of the solution of the ODE above. This may generate local (and not necessarily global) existence of a solution even if $g(t,x,r)$ is well defined for any $t\geq 0.$ \item Particular examples for $g$ may be given as follows: \begin{enumerate}
\item \label{remg2a}If $g(.,r)=f(.)$, a.e. in $Q$ and for any $r\in \RR,$ where $f\in L^\infty(Q),$ then it is enough to take
$$\omega_2(t)=-\omega_1(t)= \left( \Vert u_0\Vert_\infty + \int_0^t \Vert f(s)\Vert_\infty \: ds\right)e^{t\Vert (\nabla \cdot V)^-\Vert_\infty},\quad \hbox{ for any }t\in (0,T) . $$
\item If $g(.,r)=f(.)\: r$, a.e. in $Q$ and for any $r\in \RR,$ where $f\in L^\infty(Q),$ then it is enough to take $$\omega_2(t)=-\omega_1(t) = \Vert u_0\Vert_\infty e^{t\Vert (\nabla \cdot V)^-\Vert_\infty+ \int_0^t \Vert f(s)\Vert_\infty \: ds},\quad \hbox{ for any }t\in (0,T) . $$
\item If $\nabla \cdot V\geq 0$ and $g(t,x,r)= r^2,$ one can take $\omega_2(t)=\frac{\Vert u_0\Vert_\infty }{1-t\: \Vert u_0\Vert_\infty} $ and $\omega_1(t)=\frac{-\Vert u_0\Vert_\infty }{1+t\: \Vert u_0\Vert_\infty} $. But, in this case $T$ needs to be taken such that $T\leq 1/\Vert u_0\Vert_\infty.$
\end{enumerate}
\item Thanks to the remarks above, one sees that $(\nabla \cdot V)^+$ is less involved in the existence of a solution than $g$ and $(\nabla \cdot V)^-$.
\end{enumerate} \end{remark}
\begin{theorem}\label{texistg} Assume $u_0\in L^2(\Omega)$ and $V\in W^{1,2}(\Omega)$ is such that $\nabla \cdot V\in L^\infty(\Omega)$ and satisfies the outpointing condition \eqref{HypV0}. Under the assumptions $(\mathcal G_1)$, $(\mathcal G_2)$ and $(\mathcal G_3)$, the problem \eqref{pmeg} has a unique weak solution $u$ in the sense of Definition \ref{defws} with $f=g(.,u).$ Moreover, we have
\begin{enumerate}
\item $u$ is the unique mild solution of the Cauchy problem \eqref{Cauchypb}
with $f(.)= g(.,u(.))$ a.e. in $Q.$
\item for any $0\leq t<T,$ $\omega_1(t) \leq u(t)\leq \omega_2(t) $ a.e. in $\Omega.$
\end{enumerate} \end{theorem}
\begin{remark}
See that in the case where, for any $r\in \RR,$ $g(.,r)=f$ a.e. in $Q,$ with $f\in L^\infty(Q),$ we retrieve the $L^\infty$-estimate \eqref{lquevol} for $q=\infty.$ Indeed, thanks to Theorem \ref{texistg} and Remark \ref{remg} (see item 3-(a)), we see that
\begin{eqnarray*}
\Vert u(t)\Vert_\infty &\leq& \left( \Vert u_0\Vert_\infty + \int_0^t \Vert f(s)\Vert_\infty \: ds\right)e^{t\Vert (\nabla \cdot V)^-\Vert_\infty}, \quad \hbox{ for any } t\in (0,T) \\ \\
&\leq& \left( \Vert u_0\Vert_\infty + \int_0^T \Vert f(s)\Vert_\infty \: ds\right)e^{T\Vert (\nabla \cdot V)^-\Vert_\infty} = M_\infty .
\end{eqnarray*}
\end{remark}
\begin{corollary} \label{cexistg} Assume $0\leq u_0\in L^2(\Omega)$ and $V\in W^{1,2}(\Omega)$ is such that $\nabla \cdot V\in L^\infty(\Omega)$ and satisfies the outpointing condition \eqref{HypV0}. If
\begin{equation}
( \mathcal G_4) \quad 0 \leq g(.,0) \hbox{ a.e. in }
Q, \end{equation}
and there exists $\omega \in W^{1,\infty}(0,T)$ such that $0\leq u_0\leq \omega(0)$ a.e. in $\Omega$ and, for any $t\in (0,T),$
\begin{equation}
\dot \omega(t)+ \omega(t)\nabla \cdot V \leq g(t,.,\omega(t))\quad \hbox{ a.e. in }\Omega,
\end{equation}
then the solution of \eqref{pmeg} satisfies
\begin{equation}\label{solpos}
0\leq u(t)\leq \omega(t) ,\quad \hbox{ a.e. in }\Omega,\hbox{ for any }t\in (0,T).
\end{equation} \end{corollary} \begin{proof} This is a simple consequence of Theorem \ref{texistg}, where we take $\omega_1\equiv 0$ and $\omega_2=\omega$. \end{proof}
\begin{remark}
A typical example of $g$ satisfying the assumptions of Corollary \ref{cexistg} may be given by $g(.,r)=\mu(r),$ a.e. in $Q,$ for any $r\geq 0,$ with $0\leq \mu\in \mathcal{C}(\RR^+),$ and there exists $\omega > 0 $ such that
$$ \Vert \nabla \cdot V \Vert_\infty \leq \frac{\mu(\omega)}{\omega} .$$ \end{remark}
\begin{proof}[\textbf{Proof of Theorem \ref{texistg}}] Let $F\: :\: [0,T)\times L^1(\Omega)\to L^1(\Omega)$ be given by
$$F(t,z(.))= g(t, .,(z(.) \vee (-M)) \wedge M ) \quad \hbox{ a.e. in }\Omega, \hbox{ for any }(t,z)\in [0,T)\times L^1(\Omega), $$
where $M:=\max(\Vert \omega_1\Vert_\infty,\Vert \omega_2\Vert_\infty) .$
Thanks to Remark \ref{remg}, one sees that $F$ satisfies all the assumptions of Theorem \ref{abstractRevol}. Then, thanks to Theorem \ref{Crandall-Liggett}, we consider $u\in \mathcal{C}([0,T),L^1(\Omega))$ the mild solution of the evolution problem
\begin{equation}
\left\{\begin{array}{ll}
u_t + \mathcal{A}_m u\ni F(.,u)\quad & \hbox{ in }(0,T)\\ \\
u(0)=u_0.
\end{array} \right.
\end{equation}
Thanks to \eqref{explicitcondg}, it is clear that
$ F(.,u) \in L^2(Q),$ so that, using Proposition \ref{pconv}, we can deduce that $u$ is a weak solution of \eqref{pmef}. The uniqueness follows from the equivalence between weak solution and mild solution as well as the uniqueness result of Theorem \ref{abstractRevol}. To complete the proof, it is enough to show that $\omega_1(t) \leq u(t)\leq \omega_2(t) $ a.e. in $\Omega,$ for any $0\leq t<T.$ Indeed, in particular this implies that $F(t,u(t))=g(t,.,u(t)),$ and the proof of existence is complete. To this aim, we use Theorem \ref{tcompcmef} with the fact that $\omega_2$ is a weak solution of \eqref{pmef} with $f= \dot \omega_2+ \omega_2\: \nabla \cdot V$, to see that
\begin{eqnarray}
\frac{d}{dt} \int (u-\omega_2)^+ \: dx &\leq & \int_{[u\geq \omega_2 ]} (g(.,u)- \dot \omega_2- \omega_2 \nabla \cdot V ) \: dx\\
&\leq& \int_{[u\geq \omega_2 ]} ( g(.,u) -g(.,\omega_2) ) \: dx \\
&\leq& \max_{[\omega_1,\omega_2]}\theta\: \int (u-\omega_2)^+ \: dx.
\end{eqnarray}
Applying Gronwall's lemma and using the fact that $u(0)\leq \omega_2(0),$ we obtain $u\leq \omega_2$ a.e. in $Q.$ The proof of $u\geq \omega_1$ a.e. in $Q$ follows in the same way by
proving that
$$ \frac{d}{dt} \int (\omega_1-u)^+ \leq \max_{[\omega_1,\omega_2]}\theta\: \int (\omega_1-u)^+ .$$
Thus the results of the theorem.
\end{proof}
Now, for the limit of the solution of \eqref{pmeg}, thanks to Theorem \ref{tconvumSst} and Theorem \ref{trcontinuity}, we have the following result.
\begin{theorem} Assume $V\in W^{1,2}(\Omega), $ $\nabla \cdot V\in L^\infty(\Omega)$ and $V$ satisfies the condition \eqref{HypsupportV}. Let $g_m$ be a sequence of Carathéodory applications satisfying $(\mathcal G_1)$, $(\mathcal G_2)$ and $(\mathcal G_3)$ with $\theta$ independent of $m.$ For a sequence of initial data $u_{0m}\in L^2(\Omega),$ let $u_m$ be the corresponding sequence of solutions of \eqref{pmeg}. If
\begin{equation}\label{convgm}
g_m(.,r)\to g(.,r),\quad \hbox{ in }L^1(Q),\quad \hbox{ for any }r\in \RR,
\end{equation}
and
\begin{equation}\label{convu0m}
u_{0m}\to u_0,\quad \hbox{ in }L^1(\Omega), \quad \hbox{ and }\vert u_0\vert \leq 1\hbox{ a.e. in }\Omega,
\end{equation}
then, we have
\begin{enumerate}
\item $u_m\to u$ in $\mathcal{C}([0,T),L^1(\Omega))$
\item $u_m^m\to p$ in $L^2(0,T;H^1_0(\Omega))$-weak
\item $(u,p)$ is the solution of the Hele-Shaw problem
\begin{equation} \label{hlsg}
\left\{ \begin{array}{ll}
\left. \begin{array}{l}
\displaystyle \frac{\partial u }{\partial t} -\Delta p +\nabla \cdot (u \: V)=g(.,u) \\
u\in \sign(p)\end{array}\right\} \quad & \hbox{ in } Q \ \\ \\
\displaystyle u= 0 & \hbox{ on }\Sigma \\ \\
\displaystyle u (0)=u _0 &\hbox{ in }\Omega,\end{array} \right.
\end{equation}
in the sense that $(u,p)$ is the solution of \eqref{evolhs0} with $f(.)=g(.,u(.))$ a.e. in $Q$ satisfying $u(0)=u_0.$
\end{enumerate}
\end{theorem}
\begin{proof} To begin with we prove compactness of $u_m$ in $\mathcal{C}([0,T);L^1(\Omega)).$ We know that $u_m$ is the mild solution of the sequence of Cauchy problems \begin{equation}
\left\{\begin{array}{ll}
u_t + \mathcal{A}_m u\ni F_m(.,u) \quad & \hbox{ in }(0,T)\\ \\
u(0)=u_{0m},
\end{array} \right. \end{equation} where, for a.e. $t\in (0,T)$, $F_m(t,z) = g_m(t,.,(z(.) \vee (-M)) \wedge M ),$ a.e. in $\Omega,$ for any $z\in L^1(\Omega),$ and
$$ M:=\max( \Vert \omega_1\Vert_\infty, \Vert \omega_2\Vert_\infty) . $$
Thanks to \eqref{explicitcondg}, one sees that $F_m$ satisfies all the assumptions of Theorem \ref{trcontinuity}. This implies, by Theorem \ref{trcontinuity}, that \begin{equation}\label{convum2}
u_m\to u,\quad \hbox{ in }\mathcal{C}([0,T);L^1(\Omega)), \hbox{ as }m\to\infty. \end{equation} Thus the compactness of $u_m.$ On the other hand, remember that $u_m$ is a weak solution of
\begin{equation}
\left\{ \begin{array}{ll}
\displaystyle \frac{\partial u }{\partial t} -\Delta u^m +\nabla \cdot (u \: V)=f_m \quad & \hbox{ in } Q \\ \\
\displaystyle u= 0 & \hbox{ on }\Sigma \\ \\
\displaystyle u (0)=u _{0m} &\hbox{ in }\Omega.
\end{array} \right. \end{equation} with $f_m:=g_m(.,u_m).$ Using again \eqref{explicitcondg}, \eqref{convgm} and \eqref{convum2}, we see that
$$f_m\to g(.,u)\quad \hbox{ in }L^1(Q),\quad \hbox{ as }m\to\infty.$$
So, by Theorem \ref{treglimum}, we deduce that $u_m^m\to p$ in $L^2(0,T;H^1_0(\Omega))$-weak and
$(u,p)$ is the solution of the Hele-Shaw problem \eqref{hlsg}. At last the uniqueness follows from the $L^1-$comparison results of the solutions of the Hele-Shaw problem \eqref{evolhs0} (cf. \cite{Igshuniq}) as well as the assumption ($\mathcal G_2)$ (one can see also \cite{IgshuniqR} for more details on Reaction-Diffusion Hele-Shaw flow with linear drift).
\end{proof}
\section{Appendix} \subsection{Reminder on evolution problems governed by accretive operators} \setcounter{equation}{0}
Our aim here is to remind the reader of some basic tools of $L^1-$nonlinear semi-group theory that we use in this paper. We are interested in PDEs which can be written in the following form \begin{equation}\label{abstractevol}
\left\{ \begin{array}{ll}
\frac{du}{dt} +B u \ni f \quad & \hbox{ in } (0,T)\\ \\
u(0)=u_{0},
\end{array}\right. \end{equation} where $B$ is a possibly multivalued operator defined on $L^1(\Omega)$ by its graph $$B=\left\{ (x,y)\in L^1(\Omega)\times L^1(\Omega)\: :\: y\in Bx\right\},$$ $f\in L^1(0,T;L^1(\Omega))$ and $u_0\in L^1(\Omega).$ An operator $B$ is said to be accretive in $L^1(\Omega)$ if and only if the operator $J_\lambda := (I+\lambda\: B)^{-1}$ defines a contraction in $L^1(\Omega),$ for any $\lambda >0$; i.e. if for $i=1,2,$ $(f_i-u_i)\in \lambda Bu_i,$ then $\Vert u_1-u_2\Vert_1 \leq \Vert f_1-f_2\Vert_1 .$
To study the evolution problem \eqref{abstractevol} in the framework of nonlinear semi-group theory in the Banach space $L^1(\Omega),$ the main ingredient is to use the operator $J_\lambda,$ through the Euler implicit time discretization scheme. For an arbitrary $n\in \NN^*$ such that $0<\varepsilon:=T/n\leq \varepsilon_0,$ we consider the sequence $(u_i)_{i=0,...,n},$ with $u_0$ the given initial data, defined by \begin{equation}
u_i+\varepsilon B u_i\ni \varepsilon f_i +u_{i-1},\quad \hbox{ for }i=1,...,n, \end{equation} where, for each $i=1,...,n,$ $f_i$ is given by $$f_i = \frac{1}{\varepsilon } \int_{(i-1)\varepsilon}^{i\varepsilon } f(s)\: ds ,\quad \hbox{ a.e. in }\Omega. $$ Then, for the $\varepsilon-$time discretization $t_i=i\varepsilon,$ $i=0,...,n,$ we define the $\varepsilon-$approximate solution \begin{equation}
u_\varepsilon:= \sum_{i=0} ^{n-1 } u_i\chi_{[t_i,t_{i+1})}, \end{equation} and its linear interpolate given by \begin{equation}
\tilde u_\varepsilon(t) = \sum_{i=0} ^{n-1 } \frac{(t-t_{i})u_{i+1} - (t-t_{i+1})u_{i} }{t_{i+1} -t_{i} } \: \chi_{[t_i,t_{i+1})}(t) ,\quad \hbox{ for any }t\in [0,T). \end{equation}
In particular, one sees that $u_\varepsilon,$ $\tilde u_\varepsilon$ and $f_\varepsilon$ (the piecewise-constant function built from the $f_i$ on the same time grid) satisfy the following $\varepsilon-$approximate dynamics \begin{equation}
\frac{d\tilde u_\varepsilon}{dt} +Bu_\varepsilon \ni f_\varepsilon,\quad \hbox{ in }(0,T). \end{equation} The main goal afterwards is to let $\varepsilon\to 0,$ in order to recover the ``natural'' solution of the Cauchy problem \eqref{abstractevol}. The following theorem, known as the Crandall-Liggett theorem (at least in the case where $f\equiv 0,$ cf. \cite{CrLi}), describes the limit of $u_\varepsilon$ and $\tilde u_\varepsilon.$
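The scheme above can also be phrased in computational form. The following Python sketch is purely illustrative and not part of the analysis: the choice of a one-dimensional discrete Dirichlet Laplacian as $B$ and the midpoint rule used for the slab averages $f_i$ are simplifying assumptions of the example; only the availability of the resolvent $(I+\varepsilon B)^{-1}$ is actually used.
\begin{verbatim}
import numpy as np

def euler_implicit(resolvent, u0, f, T, n):
    # Euler implicit scheme: u_i = (I + eps*B)^{-1}(eps*f_i + u_{i-1}),
    # where the resolvent (I + eps*B)^{-1} is supplied by the caller.
    eps = T / n
    u = [np.asarray(u0, dtype=float)]
    for i in range(1, n + 1):
        f_i = f((i - 0.5) * eps)     # simple quadrature for the slab average
        u.append(resolvent(eps, eps * f_i + u[-1]))
    return u

# Toy example: B = one-dimensional discrete Dirichlet Laplacian (a linear monotone operator)
N = 100
h = 1.0 / (N + 1)
A = (np.diag(2.0 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / h**2

def resolvent(eps, rhs):
    return np.linalg.solve(np.eye(N) + eps * A, rhs)

x = np.linspace(h, 1.0 - h, N)
steps = euler_implicit(resolvent, np.sin(np.pi * x),
                       lambda t: np.zeros(N), T=0.1, n=50)
\end{verbatim}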
\begin{theorem}\label{Crandall-Liggett}
Let $B$ be an accretive operator in $L^1(\Omega)$ and $u_0\in \overline{D(B)}.$ If for each $\varepsilon>0,$ the $\varepsilon-$approximate solution $u_\varepsilon$ is well defined, then there exists a unique $u\in \mathcal{C}([0,T),L^1(\Omega))$ such that $u(0)=u_0,$
\begin{equation}
u_\varepsilon \to u\quad \hbox{ and } \quad \tilde{u}_\varepsilon \to u \quad \hbox{ in }\mathcal{C}([0,T),L^1(\Omega)), \hbox{ as }\varepsilon\to 0.
\end{equation}
The function $u$ is called the mild solution of the evolution problem \eqref{abstractevol}. Moreover, if $u_1$ and $u_2$ are two mild solutions associated with $f_1$ and $f_2,$ then there exists $\kappa\in L^\infty(Q),$ such that $\kappa\in \sign(u_1-u_2)$ a.e. in $Q,$ and
\begin{equation}
\frac{d}{dt} \Vert u_1-u_2\Vert_1 \leq \int_{[u_1=u_2]} \vert f_1-f_2\vert \: dx +\int_{[u_1\neq u_2]} \kappa\: (f_1-f_2) \: dx ,\quad \hbox{ in }\mathcal{D}'(0,T).
\end{equation} \end{theorem}
One sees that this theorem provides in a natural way a solution to the Cauchy problem \eqref{abstractevol}, settling existence and uniqueness questions for the associated PDE. However, in general we do not know in which sense the limit $u$ satisfies the corresponding PDE; this is connected to the regularity of $u$ as well as to the compactness of $ \frac{d\tilde u_\varepsilon}{dt}.$ We refer interested readers to \cite{Barbu} and \cite{Benilan} for more developments and examples in this direction. One can see also the book \cite{Br} for the case of Hilbert spaces, where the concept of accretive operator corresponds to the notion of monotone graph.
One sees that, besides the accretivity (monotonicity in the case of Hilbert spaces), the well-posedness of the ``generic'' associated stationary problem
$$u+\lambda \: Bu\ni g,\quad \hbox{ for a given }g $$
is needed first. Thereby, a sufficient condition for the results of Theorem \ref{Crandall-Liggett} is given by the so-called range condition
\begin{equation}
\overline{\mathcal{R}(I+\lambda B)} =L^1(\Omega),\quad \hbox{ for small } \lambda>0 . \end{equation} Indeed, in this case the Euler implicit time discretization scheme is well posed at each step, and the $\varepsilon-$approximate solution is well defined (for small $\varepsilon >0$). Then the convergence to the unique mild solution $u$ follows by accretivity (monotonicity in the case of Hilbert spaces).
In particular, Theorem \ref{Crandall-Liggett} enables one to associate with each accretive operator $B$ satisfying the range condition a nonlinear semi-group of contractions in $L^1(\Omega).$ It is given by the Crandall-Liggett exponential formula \begin{equation}
e^{-tB} u_0= L^1-\lim_{n\to\infty} \left( I+\frac{t}{n} B\right)^{-n} u_0, \quad \hbox{ for any }u_0\in \overline{\mathcal{D}(B)}. \end{equation} In other words, the mild solution of \eqref{abstractevol} with $f\equiv 0$ is given by $e^{-tB}u_0.$
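As a purely numerical illustration of the exponential formula (again an illustration of ours, not part of the original argument), one can check for a linear operator, here a discrete Laplacian matrix, that $(I+\tfrac{t}{n}A)^{-n}u_0$ approaches $e^{-tA}u_0$ as $n$ grows:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

N, t = 50, 0.2
h = 1.0 / (N + 1)
A = (np.diag(2.0 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / h**2
u0 = np.sin(np.pi * np.linspace(h, 1.0 - h, N))

exact = expm(-t * A) @ u0
for n in (10, 100, 1000):
    approx = np.linalg.matrix_power(
        np.linalg.inv(np.eye(N) + (t / n) * A), n) @ u0
    print(n, np.max(np.abs(approx - exact)))   # error decreases with n
\end{verbatim}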
The presence of a reaction term in nonlinear PDEs leads to the study of evolution problems of the type
\begin{equation}\label{abstractevolF}
\left\{ \begin{array}{ll}
\displaystyle\frac{du}{dt}+Bu\ni F(.,u) \quad & \hbox{ in } (0,T)\\ \\
\displaystyle u(0)=u_0 ,
\end{array}\right.
\end{equation} where $F\: :\: (0,T)\times L^1(\Omega)\to L^1(\Omega)$ is assumed to be Carathéodory, i.e. $ F(t,z)$ is measurable in $t\in (0,T)$ and continuous in $z\in L^1(\Omega).$ To solve the evolution problem \eqref{abstractevolF} in the framework of $\varepsilon-$approximate/mild solutions, we say that $u\in \mathcal{C}([0,T);L^1(\Omega))$ is a mild solution of \eqref{abstractevolF} if and only if $u$ is a mild solution of \eqref{abstractevol} with $f(t)=F(t,u(t))$ for a.e. $t\in (0,T).$ Existence and uniqueness are more or less well known in the case where $F(t,r)=f(t)+F_0(r)$, with $f(t)\in L^1(\Omega),$ for a.e. $t\in [0,T),$ and $F_0$ a Lipschitz continuous function in $\RR$. The following theorems set up general assumptions on $F$ ensuring existence and uniqueness of mild solutions of \eqref{abstractevolF}, as well as continuous dependence with respect to $u_0$ and $F.$ We refer the readers to \cite{BeIgsing} for the details of the proofs in abstract Banach spaces.
To state these results, we assume moreover that $F$ satisfies the following assumptions:
\begin{itemize}
\item[$(F_1)$] There exists $k\in L^1_{loc} (0,T) $ such that
\begin{equation}
\int ( F(t,z)-F(t,\hat z))\: \so(z-\hat z) \: dx \leq k(t)\: \Vert z-\hat z\Vert_1, \quad \hbox{ a.e. }t\in (0,T),
\end{equation}
for every $z,\ \hat z\in \overline{D(B)} .$
\item[$(F_2)$] There exists $c\in L^1_{loc} (0,T) $ such that
\begin{equation}
\Vert F(t,z)\Vert_1 \leq c(t), \quad \hbox{ a.e. }t\in (0,T)
\end{equation}
for every $z\in \overline{D(B)} .$ \end{itemize} In particular, one sees that under these assumptions, $F(.,u)\in L^1_{loc}(0,T;L^1(\Omega)) $ for any $u\in$ $\mathcal{C}([0,T);L^1(\Omega)).$
\begin{theorem}\label{abstractRevol} (cf. \cite{BeIgsing})
If $B$ is an accretive operator in $L^1 (\Omega)$ such that $J_\lambda$ is well defined on a dense subset of $L^1(\Omega),$ then, for any $u_0\in\overline {D(B)} $ there exists a unique mild solution $u$ of \eqref{abstractevolF}; i.e. $u$ is the unique function in $\mathcal{C}([0,T);L^1(\Omega))$ such that $u$ is the mild solution of
\begin{equation}
\left\{ \begin{array}{ll}
\displaystyle\frac{du}{dt}+Bu\ni f \quad & \hbox{ in } (0,T)\\ \\
\displaystyle u(0)=u_0,
\end{array}\right.\end{equation}
with $f(t)=F(t,u(t))$ a.e. $t\in (0,T).$ \end{theorem}
Another important result, concerning the continuous dependence of the solution with respect to the operator $B$ as well as to the data $f_m$ and $u_{0m},$ is given in the following theorem. The proof may be found in \cite{BeIgsing}.
\begin{theorem}\label{trcontinuity} (cf. \cite{BeIgsing})
For $m=1,2, ... ,$ let $B_m$ be an accretive operator in $L^1(\Omega)$ satisfying the range condition and $F_m\ :\ (0,T)\times \overline{\mathcal{D}(B_m )} \to L^1 (\Omega)$ a Carathéodory application satisfying $(F_1)$ and $(F_2)$ with $k$ and $c$ independent of $m.$ For each $m=1,2,... $ we consider $u_{0m} \in \overline{\mathcal{D}( B_m)}$ and $u_m$ the mild solution of the evolution problem
\begin{equation}
\left\{ \begin{array}{ll}
\displaystyle\frac{du}{dt}+B_mu\ni f_m \quad & \hbox{ in } (0,T)\\ \\
\displaystyle u(0)=u_{0m},
\end{array}\right.\end{equation}
with $f_m=F_m(.,u_m)$.
If there exist an accretive operator $B$ in $L^1(\Omega)$ and a Carathéodory application $F\ :\ (0,T)\times \overline {D(B)} \to L^1 (\Omega)$ such that \begin{itemize}
\item[a)] $(I+\lambda B_m)^{-1} \to (I+\lambda B)^{-1}\quad \hbox{ in }L^1(\Omega), \hbox{ for any }0<\lambda <\lambda_0$
\item[b)] $F_m(t,z_m) \to F(t,z)$ in $L^1(\Omega),$ for a.e. $t\in (0,T),$ and for any $z_m\in \overline{D(B_m)}$ such that $\lim_{m\to \infty } z_m=z\in\overline {D(B)}. $
\item[c)] there exists $u_0\in \overline {D(B)},$ such that $ u_{0m}\to u_{0},$
\end{itemize}
then
\begin{equation} \label{convumabstract}
u_m\to u, \quad \hbox{ in } \mathcal{C}([0,T),L^1(\Omega)),
\end{equation}
and $u $ is the unique mild solution of
\begin{equation}
\left\{ \begin{array}{ll}
\frac{du}{dt} +B u \ni F(.,u)\quad & \hbox{ in } (0,T)\\ \\
u(0)=u_{0}.
\end{array}\right.
\end{equation} \end{theorem}
\vspace*{10mm}
\end{document}
Improvement and prediction of secondary metabolites production under yeast extract elicitation of Azadirachta indica cell suspension culture using response surface methodology
Reza Farjaminezhad ORCID: orcid.org/0000-0001-8768-08191 &
Ghasemali Garoosi ORCID: orcid.org/0000-0001-9144-75851
Neem is a medicinal plant used as an antimalarial, antibacterial, antiviral, insecticidal, and antimicrobial agent. This study aimed to investigate and predict the effect of yeast extract and sampling time on cell growth, secondary metabolite synthesis, and SQS1 and MOF1 gene expression by response surface methodology. The highest fresh and dry cell weights were 580.25 g/L and 21.01 g/L, respectively, obtained 6 days after using 100 mg/L yeast extract. The highest azadirachtin accumulation and production were 16.08 mg/g DW and 219.78 mg/L, obtained 2 and 4 days, respectively, after using 25 mg/L yeast extract. Maximum mevalonic acid accumulation (1.75 mg/g DW) and production (23.77 mg/L) were observed 2 days after application of 50 mg/L yeast extract. The highest squalene accumulation (0.22 mg/g DW) and production (4.53 mg/L) were achieved 4 days after using 50 mg/L yeast extract. Prediction results exhibited the highest azadirachtin accumulation (13.61 mg/g DW) and production (190.50 mg/L), mevalonic acid accumulation (0.50 mg/g DW) and production (5.57 mg/L), and squalene accumulation (0.30 mg/g DW) by using 245 mg/L yeast extract for 2 days, 71 mg/L yeast extract for 2 days, 200 mg/L yeast extract for 4.96 days, and no yeast extract for 6.54 days and 4 days, respectively. Also, it was predicted that the highest squalene production is achieved by long-term exposure to high concentrations of yeast extract. The qRT-PCR analysis showed the maximum relative gene expression of SQS1 and MOF1 using 150 and 25 mg/L yeast extract after 4 and 2 days of treatment, respectively.
Plants are a rich source of compounds used to make medicines. About a quarter of the drugs approved by the Food and Drug Administration and the European Medicines Agency are produced from plants, which shows their importance (Borkotoky and Banerjee 2020; Thomford et al. 2018). Neem (Azadirachta indica) is a member of the Meliaceae family and is very important in traditional medicine. Studies have shown that neem leaf extract has free radical scavenging and anti-inflammatory activity and inhibits HSV-1 and MHV viruses (Alzohairy 2016; Sarkar et al. 2020; Tiwari et al. 2010). It has been reported that neem leaf extract can increase immunity against HIV/AIDS by increasing CD4+ cell levels (Mbah et al. 2007). Also, it has been reported that neem extract may be used against COVID-19 infection (Roy and Bhattacharyya 2020). The neem tree contains a variety of secondary metabolites, including azadirachtin, mevalonic acid, squalene, nimbin, nimbiol, and polyphenolic flavonoids (Borkotoky and Banerjee 2020; Farjaminezhad and Garoosi 2020).
Secondary metabolites are not necessary to maintain the plant life cycle but play an important role in environmental adaptation (Park et al. 2020). Secondary metabolites have various uses and are mainly used as drugs, flavorings, fragrances, pigments, bio-pesticides, and food additives (Murthy et al. 2014). Studies show that the production of secondary metabolites depends on plant genetics, environmental factors, climate, season, growth period, plant parts, pre- and post-harvest processes, and extraction methods (Açıkgöz 2020). All of these factors create problems in the production of secondary metabolites by the traditional method and increase the cost of production (Ramachandra Rao and Ravishankar 2002). Therefore, plant tissue culture is used to produce secondary metabolites due to its reliability and predictability, independence from geographical, seasonal and environmental factors, modification or elimination of unwanted taste, and production of high-quality, standardized products (Abd El-Salam et al. 2015). Cell suspension culture is the best option to increase the production of secondary metabolites and respond to the increasing industrial demand for secondary metabolites (Rani et al. 2020).
There are several strategies to improve the production of secondary metabolites in plant tissue and cell culture (Park et al. 2020). One of these strategies is the use of elicitors. Elicitors are compounds that stimulate the production of secondary metabolites by biochemical changes in the plant (Namdeo 2007). Elicitors do this by activating signal transduction cascades (Karalija et al. 2020). These signals act as stress factors and regulate enzyme activity and the production of secondary metabolites (Sharma and Agrawal 2018). Elicitors are divided into two types depending on their origin: biotic and abiotic. Biotic elicitors are produced from microbial or plant sources. Yeast extract is one of the biotic elicitors which is derived from microbial sources (Ramirez-Estrada et al. 2016). Yeast extract contains various compounds such as chitin, β-glucan, glycopeptides, and ergosterol that are involved in plant defense responses (Baenas et al. 2014). It is one of the most common natural elicitors used in in vitro culture to induce secondary metabolite production (Cheng et al. 2013; Karalija et al. 2020). It has also been used successfully in various studies to improve the production of secondary metabolites such as rosmarinic acid (Gonçalves et al. 2019), sanguinarine (Guízar-González et al. 2016), xanthone (Krstić-Milošević et al. 2017), isoflavonoid (Rani et al. 2020), plumbagin (Singh et al. 2020) and astragalosides (Park et al. 2020).
An experimental design is a set of tools for studying a system; it includes planning and performing a set of experiments to determine the impact of the studied variables on that system. In such experiments, a valid model is one that contains valuable information about the effect of the experimental conditions on the measured response. The experiments are performed in such a way that a large amount of information is obtained from a limited number of runs. Once an appropriate model is obtained, it can be used to predict future observations within the original design range. Therefore, it is necessary to use an appropriate experimental design to develop and optimize a wide range of laboratory studies. The response surface methodology (RSM), which includes statistical and mathematical tools, was first used in chemical experiments for designing and analyzing response surfaces. The experimental design method and the response surface methodology are closely related, and the use of RSM is based on experimental data (Mäkelä 2017). This statistical approach is an effective tool for optimizing complex processes and saves time during the experimental phase (Menezes Maciel Bindes et al. 2019). In the present study, the effects of yeast extract and sampling time on cell growth and on azadirachtin, mevalonic acid and squalene accumulation and production were investigated, and their effects on azadirachtin, mevalonic acid and squalene accumulation and production were predicted by response surface methodology. Also, the effect of yeast extract on the expression of the squalene synthase 1 (SQS1) and squalene epoxidase 1 (MOF1) genes was studied. This is the first comprehensive study on gene expression and on predicting the effect of yeast extract in cell suspension cultures of neem; to date, no such study has been reported.
Plant material and cell suspension culture establishment
The leaves of neem were collected from Bandar Abbas city of Iran. The leaves were surface sterilized with ethanol and sodium hypochlorite and cultured on MS medium containing 1 mg/L picloram and 2 mg/L kinetin. Then cultures were maintained in the growth chamber at 25 ± 2 °C in the dark. The friable calli were transferred to the liquid MS medium with the same concentrations of picloram and kinetin and kept on a rotary shaker at 110 rpm and 26 ± 2 °C in the dark and sub-cultured every 12 days (Farjaminezhad and Garoosi 2019).
Elicitation with yeast extract
The cell suspension cultures were transferred to 100 mL Erlenmeyer flasks containing 25 mL liquid MS medium supplemented with 1 mg/L picloram and 2 mg/L kinetin, with an initial cell density of 2.6 × 10^5 (SCV = 8%). The stock solution of yeast extract (Merck, Germany) was prepared by dissolving yeast extract in distilled water and then filtering it through a 0.22 µm syringe filter. According to the growth curve from our previous study (Farjaminezhad and Garoosi 2019), eight days after culture initiation, different concentrations of yeast extract (0, 25, 50, 100, 150 and 200 mg/L) were added to the cell suspension cultures. The cultures were kept on a rotary shaker at 110 rpm and 26 ± 2 °C in the dark, and sampling was performed 2, 4, 6, 8, 10 and 12 days after each treatment.
Fresh and dry cell weight measurement
Fresh and dry cell weights were measured as described by Godoy-Hernández and Vázquez-Flota (2006) with slight modifications. For this purpose, the cells were collected on Whatman No. 1 filter paper using a Büchner funnel under vacuum for 30 s and weighed immediately to determine fresh cell weight. Then, the collected cells were transferred to an oven at 50 °C for 72 h and weighed to determine dry cell weight.
Mevalonic acid, squalene and azadirachtin extraction
Azadirachtin, mevalonic acid, and squalene were extracted using the method of Rafiq and Dahot (2010) with some modifications. One milliliter of dichloromethane was added to 100 mg of dried and powdered cells, and the mixture was then sonicated for 25 min at room temperature. The supernatant was collected after centrifugation at 7000 rpm for 15 min. The procedure was repeated twice. Finally, the dichloromethane was evaporated at 50 °C in a water bath and the samples were dried as described above. The dried samples were then re-dissolved in 1.50 mL HPLC-grade distilled water and stored at − 20 °C.
HPLC analysis
The HPLC analysis was performed using a Knauer HPLC–DAD system (DAD detector, Azura, Germany) with a Toso C18 column (TSKgel-ODS, 5 µm, 4.60 × 250 mm, Japan). The mobile phase was acetonitrile:water (10:90) at a flow rate of 0.90 mL/min. The detection wavelengths for azadirachtin (Sigma, USA), mevalonic acid (Sigma, USA), and squalene (Sigma, USA) were set at 214, 270 and 195 nm, respectively. The injection volume of the samples was 20 μL. Azadirachtin, squalene, and mevalonic acid accumulation were estimated from standard curves of concentration versus peak area. Also, the amount of azadirachtin, mevalonic acid, and squalene production was calculated by multiplying the corresponding accumulation by the dry cell weight (Farjaminezhad and Garoosi 2020).
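As an illustration of this calculation, the minimal Python sketch below shows how a linear standard curve can convert a peak area into accumulation (mg/g DW) and then into production (mg/L), using the 1.50 mL extract volume and 100 mg sample mass described above; the calibration points, sample peak area, and dry cell weight are invented placeholder numbers, not data from this study.

    import numpy as np

    # hypothetical calibration: standard concentrations (mg/mL) vs HPLC peak areas
    std_conc = np.array([0.05, 0.10, 0.20, 0.40])
    peak_area = np.array([1250.0, 2480.0, 5010.0, 9950.0])
    slope, intercept = np.polyfit(std_conc, peak_area, 1)   # linear standard curve

    def accumulation_mg_per_g(sample_area, extract_volume_ml=1.5, sample_mass_g=0.1):
        conc = (sample_area - intercept) / slope             # mg/mL in the extract
        return conc * extract_volume_ml / sample_mass_g      # mg per g dry weight

    acc = accumulation_mg_per_g(4200.0)      # hypothetical sample peak area
    dry_weight_g_per_l = 15.0                # hypothetical dry cell weight (g/L)
    production_mg_per_l = acc * dry_weight_g_per_l
    print(round(acc, 2), round(production_mg_per_l, 2))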
RNA extraction, cDNA synthesis and qRT-PCR analysis
For RNA extraction, cDNA synthesis, and qRT-PCR analysis, those samples with the highest amount of azadirachtin accumulation at each concentration of yeast extract were used. Total RNA was extracted using an RNX-Plus kit (Cinaclon, Iran) based on the producer's protocol. The quantity of extracted RNA was measured by a Nano-Drop 200C spectrophotometer (Thermo Scientific, USA). Then, extracted total RNAs were treated with DNase I, RNase-free (Sinaclon, Iran) according to the producer's guidance to eliminate remaining genomic DNA. Single-strand cDNA was synthesized by using a mixture of 5 µg of total RNA, 0.50 µg/µL Oligo (dT)18 primer (Cinaclon, Iran) and 12.50 μL DEPC-treated water in the tube. The tube was maintained at 65 °C for 5 min and immediately transferred onto ice. Then, 2 μL 10 × reaction buffer (Cinaclon, Iran), 2 μL dNTP Mix 10 mM (Cinaclon, Iran) and 1 µL M-MuLv reverse transcriptase enzyme (200 µ/µL, Cinaclon, Iran) were added into the tube and maintained at 42 °C for 60 min. The reaction was terminated by heating the mixture at 70 °C for 10 min. The reverse transcription reaction product was stored at – 20 °C until qRT-PCR analysis. The qRT-PCR analysis was performed by real-time PCR (Applied Biosystems StepOnePlus, USA) with specific primers for squalene synthase (SQS1) gene (forward: 5ʹ-GCTGAAAATGGCTGTGAGGC-3ʹ and reverse: GTCAGTCCCGAGCTGTTGAA-3ʹ), squalene epoxidase 1 (MOF1) gene (forward: 5ʹ-TCAAATCTGCGCCGTTCTCT-3ʹ and reverse: 5ʹ-AGAATGACATGCCCGTGGTT-3ʹ) and housekeeping 18S ribosomal RNA gene (forward: 5ʹ-CACCACACAACTCTCCCCAT-3ʹ and reverse: 5ʹ-ATCAACCACCGTAGTGTCGC-3ʹ). The qRT-PCR mixture contained 1 µL of synthesized cDNA (50 ng), 7.50 µL SYBR Green Premix Ex Taq II (Takara, Japan), 0.50 µL of 10 µmol of gene-specific primer pairs, and 6 µL of nuclease-free water in a final volume of 15 µL. qRT-PCR conditions consisted of primary denaturation at 95 °C for 2 min, followed by 35 cycles of denaturation at 95 °C for 30 s, annealing and extension steps at 57 °C for 30 s and at 72 °C for 30 s, respectively. Finally, the data were analyzed using the 2^−ΔΔCT method (Livak and Schmittgen 2001).
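The relative expression calculation can be sketched as follows; this is a minimal illustration of the Livak method, and the Ct values used here are invented for the example, not measurements from this study.

    def relative_expression(ct_target_treated, ct_ref_treated,
                            ct_target_control, ct_ref_control):
        # Livak 2^-ddCt method: normalize to the reference gene, then to the control
        d_ct_treated = ct_target_treated - ct_ref_treated
        d_ct_control = ct_target_control - ct_ref_control
        dd_ct = d_ct_treated - d_ct_control
        return 2.0 ** (-dd_ct)

    # example with made-up Ct values for SQS1 against the 18S rRNA reference
    fold_change = relative_expression(24.1, 15.3, 26.0, 15.1)
    print(round(fold_change, 2))   # about 4.29-fold relative to the control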
Statistical analysis and experimental design by response surface methodology (RSM)
The yeast extract treatments were arranged in a factorial experiment based on a completely randomized design in triplicate, in which the first factor was the yeast extract concentration and the second factor was the sampling time. Data were analyzed with IBM SPSS Statistics software, Version 24.0 (Armonk, NY, USA). The means of the measured indices were compared using Duncan's multiple range test at a probability level of 0.01. The qRT-PCR analysis of the SQS1 and MOF1 genes was performed in two biological and two technical replicates, separately. The mean comparison of the relative expression of the genes was also carried out using Duncan's multiple range test at a probability level of 0.01.
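For readers without access to SPSS, a simplified version of such a comparison can be sketched in Python; note that this uses random placeholder data and a one-way ANOVA (scipy provides no Duncan's multiple range test), so it is only a rough stand-in for the analysis described above.

    import numpy as np
    from scipy.stats import f_oneway

    rng = np.random.default_rng(1)
    concentrations = [0, 25, 50, 100, 150, 200]          # mg/L yeast extract
    # placeholder triplicate dry-weight measurements for one sampling time
    groups = [rng.normal(15.0 - 0.02 * c, 0.8, size=3) for c in concentrations]

    f_stat, p_value = f_oneway(*groups)
    print(round(f_stat, 2), round(p_value, 4))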
Response surface methodology was used to study the effects of the independent variables, namely different concentrations of yeast extract and different sampling times, as well as their interaction, on the accumulation and production of azadirachtin, mevalonic acid, and squalene. The sampling times were selected based on preliminary studies. A central composite design (CCD) with two variables and five different levels (− 2, − 1, 0, + 1, + 2) was used for the optimization of yeast extract concentration and sampling time. A total of 13 experiments were conducted to test the five levels of yeast extract and sampling time with the full-factorial CCD. Using coded units, the experimental and predicted values for azadirachtin, mevalonic acid, and squalene accumulation and production in terms of the yeast extract and sampling time variables are presented in Tables 1 and 2. The predicted responses were calculated using the following second-order polynomial (quadratic) model:
$$Y=\beta_0+\sum_{i=1}^{n}\beta_i X_i+\sum_{i=1}^{n}\beta_{ii} X_i^2+\sum_{i=1}^{n-1}\sum_{j=i+1}^{n}\beta_{ij} X_i X_j,$$
where Y is the response variable, β0 is the average response obtained during the replicated experiments of the CCD; βi, βii, and βij are the linear, quadratic, and cross-product effects, respectively; and Xi and Xj are the independent coded variables. Response surface regression coefficients and analysis of variance (ANOVA) were used to assess the effects of the independent variables on azadirachtin, mevalonic acid, and squalene accumulation and production in the cell suspension culture of neem. The data were analyzed using Design Expert software (version 12.0.0).
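To make the model concrete, the following sketch builds the standard two-factor CCD layout in coded units (13 runs: 4 factorial, 4 axial, 5 center points) and fits the second-order polynomial by least squares; the response values here are random placeholders rather than the data of Tables 1 and 2.

    import numpy as np

    # two-factor central composite design in coded units
    factorial = [(-1, -1), (1, -1), (-1, 1), (1, 1)]
    axial = [(-2, 0), (2, 0), (0, -2), (0, 2)]
    center = [(0, 0)] * 5
    design = np.array(factorial + axial + center, dtype=float)   # 13 runs

    x1, x2 = design[:, 0], design[:, 1]
    # columns for Y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
    X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

    y = np.random.default_rng(0).normal(10.0, 1.0, size=13)      # placeholder responses
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(dict(zip(["b0", "b1", "b2", "b11", "b22", "b12"], np.round(coeffs, 3))))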
Table 1 Experimental and predicted values for azadirachtin and mevalonic acid accumulation and production optimized with central composite design (CCD)
Table 2 Experimental and predicted values for sequalene accumulation and production optimized with central composite design (CCD)
Fresh and dry cell weight
Figure 1a–c shows the leaf explant, callus production, and cell suspension culture of A. indica. The results demonstrated that different yeast extract concentrations, sampling times, and their interactions affected both the fresh and dry cell weight (Table 3). Examination of the main effect of yeast extract application showed that the fresh and dry cell weights decreased; that is, the use of yeast extract alone, regardless of the sampling time, had a negative effect on the growth of the neem cell suspension culture. The most suitable condition for neem cell suspension growth was the control without yeast extract. Under these conditions, the fresh and dry cell weights were maximized, at 413.41 g/L and 14.47 g/L, respectively. Based on the results, by adding 25, 50, 100, 150 and 200 mg/L of yeast extract, compared to the control, the fresh cell weight was reduced by 29.61, 35.61, 27.73, 45.52 and 48.66% and the dry cell weight was reduced by 22.53, 25.57, 8.64, 27.99 and 33.86%, respectively. The sampling times of 6 and 4 days gave the highest fresh cell weight and dry cell weight of 410.69 g/L and 16.23 g/L, respectively. The fresh cell weight increased from the 2nd to the 6th day of sampling and decreased from the 6th to the 12th day; meanwhile, the dry cell weight increased from the 2nd to the 4th day of sampling and decreased from the 4th to the 12th day. The fresh cell weight on the 6th day of sampling was 47.00, 1.66, 88.52, 111.52 and 104.56% higher than that on the 2nd, 4th, 8th, 10th and 12th days of sampling, and the dry cell weight on the 4th day of sampling was 22.67, 9.90, 80.05, 95.30 and 99.96% higher than that on the 2nd, 6th, 8th, 10th and 12th days of sampling. By studying the interaction of the different concentrations of yeast extract and sampling times, it was found that the highest fresh and dry cell weights were 580.25 g/L and 21.01 g/L, obtained on the 6th day after the addition of 100 mg/L of yeast extract and on the 4th day after using 50 mg/L of yeast extract, respectively. In these treatments, the fresh cell weight increased by 9.00, 50.86, 23.23, 109.97 and 146.06% compared to the control, 25, 75, and 100 mg/L of yeast extract at the same sampling time, and the dry cell weight increased by 24.17, 34.33, 21.37, 40.91 and 81.43% compared to the control, 25, 75, and 100 mg/L of yeast extract at the same sampling time (Additional file 1: Table S1).
The leaf explant (a), callus production (b), established cell suspension culture of A. indica (c), and HPLC chromatograms of azadirachtin (d), mevalonic acid (e), and squalene (f)
Table 3 Analysis of variance of the effect of yeast extract and sampling times on measured indices in cell suspension culture of neem
Azadirachtin accumulation and production
The HPLC chromatogram of azadirachtin is shown in Fig. 1d. The analysis demonstrated that the applied concentrations of yeast extract, sampling times, and their interaction significantly stimulated azadirachtin accumulation and production in treated cells compared to the control (Table 3). Accumulation and production of azadirachtin showed a dose-dependent response to yeast extract, with the greatest increase observed at 25 mg/L. Yeast extract at the lowest concentration of 25 mg/L gave the highest azadirachtin accumulation and production, 9.67 mg/g DW and 118.53 mg/L, respectively. Azadirachtin accumulation and production increased from the control to 25 mg/L of yeast extract and then decreased with increasing yeast extract concentration from 25 to 100 mg/L. At 25 mg/L of yeast extract, azadirachtin accumulation was 161.57, 19.79, 42.45, 36.34 and 59.37% and azadirachtin production was 119.74, 33.80, 31.16, 51.28 and 106.81% higher than at the control, 50, 100, 150, and 200 mg/L (Fig. 2a). In terms of sampling time, the highest azadirachtin accumulation and production, 9.20 mg/g DW and 125.65 mg/L, were observed 2 days after treatment. On the 2nd day of sampling, azadirachtin accumulation was 17.94, 42.22, 75.44, 58.97 and 34.02% and azadirachtin production 0.53, 31.53, 160.65, 207.37 and 142.42% higher than on the 4th, 6th, 8th, 10th, and 12th days, respectively (Fig. 2b). The effect of different concentrations of yeast extract at different sampling times is shown in Additional file 1: Table S1. The best conditions for induction of azadirachtin accumulation and production were application of 25 mg/L of yeast extract for 2 days (16.08 mg/g DW) and for 4 days (219.78 mg/L), respectively. Under these conditions, azadirachtin accumulation increased by 462.24, 42.43, 66.11, 82.93 and 146.62% compared with the control, 50, 100, 150, and 200 mg/L of yeast extract on the 2nd day of sampling, and azadirachtin production increased by 302.16, 54.09, 96.23, 51.01 and 191.79% compared with the control, 50, 100, 150 and 200 mg/L on the same day, respectively (Additional file 1: Table S1).
The effect of different concentrations of yeast extract and sampling times (a, b) and response surface and contour plots for azadirachtin accumulation (c, d) and azadirachtin production (e, f)
Model for predicting azadirachtin accumulation and production
The sampling times of 2, 4, 6, 8 and 10 days were chosen for RSM analysis and prediction of azadirachtin accumulation and production. The combined effects of different concentrations of yeast extract and sampling times were investigated by response surface methodology using a central composite design (CCD). The specific combinations of yeast extract concentration and sampling time, with the measured and predicted response values of azadirachtin accumulation and production, are shown in Table 1. In this study, experiment No. 2, with application of 100 mg/L of yeast extract and sampling on the 2nd day, had the highest azadirachtin accumulation (9.89 mg/g DW), and experiment No. 13, with 150 mg/L of yeast extract and sampling on the 8th day, had the lowest (4.29 mg/g DW). Experiment No. 2, with application of 100 mg/L of yeast extract and a sampling time of 2 days, also had the highest azadirachtin production (170.37 mg/L), and experiment No. 3, with 50 mg/L of yeast extract and a sampling time of 8 days, had the lowest (32.44 mg/L).
Analysis of variance (ANOVA) of the response surface model is presented in Table 4. In the ANOVA of the CCD, the coefficients of determination of the models for azadirachtin accumulation and production were 95.76 and 94.08%, respectively, indicating that the actual values correspond closely to the predicted values. The p-values of the models were also significant, so the proposed models were appropriate. The following formulas were therefore obtained to predict azadirachtin accumulation and production from yeast extract concentration and sampling time:
$$\text{Azadirachtin accumulation (mg/g DW)} = 6.46 + 0.2452A - 1.36B - 1.09AB - 0.4178A^{2} + 0.2269B^{2}.$$
$$\text{Azadirachtin production (mg/L)} = 121.32 + 1.97A - 38.14B + 11.21AB - 17.73A^{2} - 3.25B^{2}.$$
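A minimal sketch of how these fitted models can be evaluated and searched for their optima is given below. It assumes A and B denote the coded yeast extract concentration and coded sampling time, respectively, and works purely in coded units, since the coding of the actual factor levels is not reproduced here; the search region of ±2 coded units is an assumption.

```python
# Sketch: evaluating the reported coded-variable models for azadirachtin.
# Assumes A = coded yeast extract concentration and B = coded sampling time;
# the mapping from coded to actual units is not reproduced here.
import numpy as np

def azadirachtin_accumulation(A, B):   # mg/g DW
    return 6.46 + 0.2452*A - 1.36*B - 1.09*A*B - 0.4178*A**2 + 0.2269*B**2

def azadirachtin_production(A, B):     # mg/L
    return 121.32 + 1.97*A - 38.14*B + 11.21*A*B - 17.73*A**2 - 3.25*B**2

# Crude grid search over the assumed coded region [-2, 2] x [-2, 2] for the maxima.
A, B = np.meshgrid(np.linspace(-2, 2, 401), np.linspace(-2, 2, 401))
for name, f in [("accumulation", azadirachtin_accumulation),
                ("production", azadirachtin_production)]:
    Z = f(A, B)
    i = np.unravel_index(np.argmax(Z), Z.shape)
    print(f"max {name}: {Z[i]:.2f} at A={A[i]:.2f}, B={B[i]:.2f} (coded units)")
```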
Table 4 Analysis of variance of azadirachtin, mevalonic acid and squalene accumulation and production in the central composite design under influence of yeast extract and sampling times
Optimization of the response surface of azadirachtin accumulation and production
The interaction between yeast extract concentration and sampling time is shown in Fig. 2. Increasing the yeast extract concentration together with extending the exposure time up to 6 days had a significant effect on azadirachtin accumulation. The best yeast extract concentrations and sampling times for maximum azadirachtin accumulation were between 100 and 200 mg/L and 2–4 days. The highest azadirachtin accumulation in this analysis was obtained with 100 mg/L of yeast extract after 2 days; however, the optimal condition for maximizing azadirachtin accumulation (13.607 mg/g DW) was predicted to be two days after application of 245 mg/L of yeast extract (Fig. 2c, d). The results also indicated that azadirachtin production depends on sampling time. According to Fig. 2e, f, at all concentrations of yeast extract, azadirachtin production gradually decreases with increasing sampling time. The highest azadirachtin production is achieved by applying 0–150 mg/L of yeast extract with a sampling time of 2–6 days. In this analysis, the highest azadirachtin production was observed 2 days after application of 100 mg/L of yeast extract, but the highest azadirachtin production, 71 mg/L, is predicted for cells cultured for 2 days in medium containing 190.50 mg/L of yeast extract.
Mevalonic acid accumulation and production
The HPLC chromatogram of mevalonic acid is illustrated in Fig. 1e. Mevalonic acid accumulation and production changed significantly under different yeast extract concentrations, sampling times, and their interactions (Table 3). Among the studied concentrations, 50 mg/L of yeast extract increased mevalonic acid accumulation compared with the control, whereas 25, 100, 150 and 200 mg/L decreased it. As the yeast extract concentration in the culture medium increased from 0 to 25 mg/L, mevalonic acid accumulation decreased; it then increased at 50 mg/L and decreased again as the concentration rose from 50 to 200 mg/L. The highest mevalonic acid accumulation (0.63 mg/g DW) was obtained at 50 mg/L of yeast extract, which was 17.31% higher than the control and 48.71, 381.59, 265.15 and 353.86% higher than 25, 100, 150 and 200 mg/L of yeast extract, respectively. Higher concentrations of yeast extract in the culture medium therefore suppressed mevalonic acid accumulation. For mevalonic acid production, increasing the yeast extract concentration from 0 to 25 mg/L decreased production, and increasing it from 25 to 50 mg/L increased production; however, the difference between the control and 50 mg/L of yeast extract was not statistically significant. The highest mevalonic acid production (8.45 mg/L) was obtained at 50 mg/L of yeast extract. Adding 50 mg/L of yeast extract to the culture medium increased mevalonic acid production by 8.94, 47.06, 381.84, 302.82 and 424.19% compared with the control and 25, 100, 150 and 200 mg/L of yeast extract, respectively (Fig. 3a). The application of moderate concentrations of yeast extract therefore had a positive effect on mevalonic acid accumulation and production. Examining the effect of sampling time, we found that prolonged exposure of the neem cell suspension culture to yeast extract reduced mevalonic acid accumulation. The highest mevalonic acid accumulation, averaging 0.78 mg/g DW, was observed on the second day of sampling. In general, two days after treatment the mevalonic acid accumulation was 55.88, 79.76, 213.19, 915.48 and 5525.94% higher than on the 4th, 6th, 8th, 10th, and 12th days, respectively. Among the sampling times, the highest mevalonic acid production (9.95 mg/L) was also observed on the 2nd day. Culturing neem cells for 2 days increased mevalonic acid production by 23.63, 50.01, 359.17, 1724.97 and 11,214.56% compared with days 4, 6, 8, 10, and 12, respectively (Fig. 3b). The interaction of yeast extract concentration and sampling time showed that the highest mevalonic acid accumulation (1.75 mg/g DW) and production (23.77 mg/L) were obtained two days after application of 50 mg/L of yeast extract. Under these conditions, mevalonic acid accumulation was 8.92, 212.90, 1954.11, 334.27 and 2716.13% higher than with the control, 25, 100, 150, and 200 mg/L of yeast extract, and mevalonic acid production was 29.82, 216.93, 1801.60, 370.69 and 525.53% higher than with the other yeast extract treatments on the same day (Additional file 1: Table S1).
The effect of different concentrations of yeast extract and sampling times (a, b) and response surface and contour plots for mevalonic acid accumulation (c, d) and mevalonic acid production (e, f)
Model for predicting mevalonic acid accumulation and production
Based on the results, sampling times of 2, 4, 6, 8, and 10 days were selected for RSM analysis and prediction of mevalonic acid accumulation and production. The combinations of yeast extract concentration and sampling time, with the measured and predicted response values of mevalonic acid accumulation and production, are shown in Table 1. Experiment No. 10 (6 days after addition of 200 mg/L of yeast extract) had the highest mevalonic acid accumulation (0.51 mg/g DW), and experiment No. 2 (two days after application of 100 mg/L of yeast extract) had the lowest. Experiment No. 12, with application of 100 mg/L of yeast extract for 6 days, had the highest mevalonic acid production (6.97 mg/L), and experiment No. 2, with 100 mg/L of yeast extract for two days, had the lowest. Analysis of variance revealed that the coefficients of determination (R2) of the models for mevalonic acid accumulation and production were 91.67 and 87.53%, respectively, indicating that 91.67 and 87.53% of the variation in the actual values is explained by the predicted values. The p-values of the models were also significant, so the proposed models were appropriate (Table 4). The following formulas were therefore obtained to predict mevalonic acid accumulation and production from yeast extract concentration and sampling time:
$$\text{Mevalonic acid accumulation (mg/g DW)} = 0.2513 + 0.0569A + 0.0199B - 0.0289AB + 0.0313A^{2} - 0.0364B^{2}.$$
$$\text{Mevalonic acid production (mg/L)} = 5.26 - 0.0101A + 0.0210B - 0.2599AB + 0.0543A^{2} - 1.02B^{2}.$$
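As an alternative to a grid search, the stationary point of a fitted second-order surface can be located analytically by setting the gradient to zero. The sketch below does this for the mevalonic acid production model in coded units; it illustrates the standard canonical-analysis step rather than the original Design-Expert workflow, and the stationary point need not be a maximum.

```python
# Sketch: locating the stationary point of a fitted second-order surface
# y = b0 + b1*A + b2*B + b12*A*B + b11*A^2 + b22*B^2 by solving grad y = 0.
# Coefficients are those reported for mevalonic acid production (coded units).
import numpy as np

b1, b2, b12, b11, b22 = -0.0101, 0.0210, -0.2599, 0.0543, -1.02

# grad = [2*b11*A + b12*B + b1, b12*A + 2*b22*B + b2] = 0
H = np.array([[2*b11, b12], [b12, 2*b22]])      # Hessian of the quadratic part
g = np.array([b1, b2])
stationary = np.linalg.solve(H, -g)
eigvals = np.linalg.eigvalsh(H)
kind = "maximum" if np.all(eigvals < 0) else "minimum" if np.all(eigvals > 0) else "saddle"
print("stationary point (coded A, B):", np.round(stationary, 3), "->", kind)
```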
Optimization of the response surface of mevalonic acid accumulation and production
The interaction between yeast extract concentration and sampling time is shown in Fig. 3. The results showed that yeast extract concentration is the most important factor for the accumulation of mevalonic acid in the neem cell suspension culture, and in this analysis the highest accumulation was observed at the highest yeast extract level. Depending on the concentration of yeast extract, sampling time increased or decreased mevalonic acid accumulation. According to Fig. 3c, d, the highest mevalonic acid accumulation was obtained 6 days after applying 200 mg/L of yeast extract, but the maximum accumulation of 0.50 mg/g DW is predicted for cells cultured for 4.96 days in medium containing 200 mg/L of yeast extract. The production of mevalonic acid also depends on sampling time. At the different concentrations of yeast extract, increasing the exposure time of the cell suspension culture to yeast extract increased mevalonic acid production from 2 to 6 days, after which it decreased. The highest mevalonic acid production was obtained by exposing the cell suspension to 100 mg/L of yeast extract for 6 days. The optimal condition for maximizing mevalonic acid production is predicted to be a cell suspension culture without yeast extract for 6.54 days, which can produce 5.57 mg/L of mevalonic acid.
Squalene accumulation and production
The HPLC chromatogram of squalene is illustrated in Fig. 1f. The results showed that different concentrations of yeast extract, sampling times, and their interactions had a significant effect on squalene accumulation and production (Table 3). The study of squalene accumulation in the cell suspension culture in the presence of yeast extract revealed that yeast extract inhibited squalene accumulation: increasing the yeast extract concentration in the culture medium reduced accumulation, although accumulation increased as the concentration rose from 25 to 50 mg/L and then decreased again. The highest squalene accumulation (0.07 mg/g DW) among the different concentrations was obtained under the control condition. With 25, 50, 100, 150, and 200 mg/L of yeast extract, squalene accumulation decreased by 59.70, 5.97, 35.82, 42.27 and 64.17% relative to the control, respectively. Analysis of the squalene production data revealed that application of 50 mg/L of yeast extract increased squalene production compared with the control, whereas 25, 100, 150, and 200 mg/L decreased it. Among the different concentrations of yeast extract, the highest squalene production (1.13 mg/L) was obtained at 50 mg/L, which was 14.85, 227.71, 66.62, 123.72 and 398.77% higher than the control and 25, 100, 150, and 200 mg/L, respectively (Fig. 4a). Among the different sampling times, the highest squalene accumulation, 0.10 mg/g DW, was obtained on the 4th day. In general, 4 days after yeast extract treatment, squalene accumulation was 547.87, 44.95, 363.76, 149.67 and 361.09% higher than on days 2, 6, 8, 10 and 12, respectively. Examination of the sampling times showed that squalene production in the neem cell suspension culture increased during the first 4 days and decreased thereafter. The highest squalene production (1.74 mg/L) was observed on the 4th day of sampling, when production was 732.07, 68.63, 748.53, 343.38 and 474.63% higher than on the 2nd, 6th, 8th, 10th, and 12th days, respectively (Fig. 4b). Squalene accumulation and production were further studied by simultaneous examination of yeast extract concentration and sampling time. The highest squalene accumulation (0.22 mg/g DW) and production (4.53 mg/L) were obtained 4 days after application of 50 mg/L of yeast extract; these values were 283.93, 1333.3, 69.29, 41.45 and 923.81% and 377.43, 1817.79, 105.54, 99.47 and 1739.84% higher, respectively, than those of the control, 25, 100, 150, and 200 mg/L (Additional file 1: Table S1).
The effect of different concentrations of yeast extract and sampling times (a, b) and response surface and contour plots for squalene accumulation (c, d) and squalene production (e, f)
Model for predicting squalene accumulation and production
The sampling times of 4, 6, 8, 10, and 12 days were selected for RSM analysis and prediction of squalene accumulation and production. The experimental and predicted values of yeast extract-induced squalene accumulation are shown in Table 2. Experiment No. 6 (6 days after application of 50 mg/L of yeast extract) had the highest squalene accumulation (0.14 mg/g DW) and experiment No. 10 (150 mg/L of yeast extract after 10 days) had the lowest squalene accumulation (0.01 mg/g DW). Experiment No. 3, with application of 100 mg/L of yeast extract and a sampling time of 4 days, had the highest squalene production (2.21 mg/L), and experiment No. 13, with 50 mg/L of yeast extract and a sampling time of 10 days, had the lowest. The analysis of variance of the central composite design showed that the coefficient of determination (R2) of the model was 94.20% for squalene accumulation and 94.48% for squalene production, indicating that 94.20 and 94.48% of the variation in the actual values of squalene accumulation and production is explained by the predicted values. The p-values of the models for squalene accumulation and production were also significant, indicating the suitability of the models (Table 4). The mathematical models obtained to predict squalene accumulation and production under yeast extract elicitation were as follows:
$$\text{Squalene accumulation (mg/g DW)} = 0.0199 - 0.0169A - 0.0304B + 0.0274AB + 0.0058A^{2} + 0.0135B^{2}.$$
$$\text{Squalene production (mg/L)} = 0.2150 - 0.2509A - 0.5438B + 0.4834AB + 0.0725A^{2} + 0.2466B^{2}.$$
Optimization of the response surface of squalene accumulation and production
Figure 4 shows the interaction between yeast extract concentration and sampling time. According to the response surface analysis, squalene accumulation depends on both the yeast extract concentration and the sampling time. Yeast extract concentration is the most important parameter, and squalene accumulation reached a minimum as the concentration increased. At all yeast extract concentrations except 200 mg/L, squalene accumulation decreased over time; at 200 mg/L, it decreased until the tenth day of sampling and then increased. Depending on the yeast extract concentration, sampling time can therefore have a positive or negative effect on squalene accumulation. In this analysis, the highest squalene accumulation was obtained 6 days after application of 50 mg/L of yeast extract, but the prediction indicated that the highest accumulation, 0.30 mg/g DW, can be obtained in yeast extract-free medium after 4 days. Increasing the yeast extract concentration at short exposure times reduced squalene production, whereas increasing it at long exposure times enhanced production. According to Fig. 4e, f, applying 0–100 mg/L of yeast extract for 4–8 days can produce an acceptable amount of squalene, and the highest squalene production is predicted to be achieved by long-term exposure to high concentrations of yeast extract.
qRT-PCR analysis of SQS1 and MOF1 genes expression
The qRT-PCR analysis showed that the relative expression of the SQS1 gene increased significantly, by 68.96%, 4 days after application of 150 mg/L of yeast extract compared with control cells after 12 days. In contrast, after addition of 25, 50 and 100 mg/L of yeast extract for 2 days, and of 200 mg/L of yeast extract for 12 days, the relative expression of SQS1 decreased by 73.26, 74.49, 38.29 and 73.53%, respectively, compared with the control after 12 days. The application of yeast extract also significantly up-regulated the MOF1 gene. The highest relative expression of MOF1 was observed after addition of 25 mg/L of yeast extract for 2 days, which was 52.41, 14.46, 26.02, 22.51 and 39.90% higher than the control at 12 days, 50 mg/L of yeast extract at 2 days, 100 mg/L of yeast extract at 2 days, 150 mg/L of yeast extract at 4 days, and 200 mg/L of yeast extract at 12 days, respectively (Fig. 5).
Relative expression profile of SQS1 and MOF1 genes normalized with 18S ribosomal RNA as an internal control
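The relative expression values above follow the usual 2^−ΔΔCt normalization against 18S rRNA (Livak and Schmittgen 2001). A minimal sketch of that calculation is shown below; the Ct values in the example are invented for illustration and are not the study's measurements.

```python
# Sketch of the 2^(-ΔΔCt) relative-expression calculation (Livak & Schmittgen 2001),
# as used for SQS1/MOF1 normalized to 18S rRNA. All Ct values below are invented.
def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical example: SQS1 vs 18S in an elicited sample and in the control.
fold_change = relative_expression(24.1, 12.3, 26.0, 12.5)
print(f"SQS1 fold change vs control: {fold_change:.2f}")
```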
Many researchers have reported that the use of elicitors is effective for cell growth and for increasing bioactive compounds. The age of the cell culture, the duration of exposure to the elicitor, and the type of elicitor play a very effective role in enhancing the effect of elicitation (Açıkgöz 2020; Nazir et al. 2019). Elicitor concentration is an important factor in the elicitation process, and the optimum level depends on the plant species (Vasconsuelo and Boland 2007). Studies have shown that high concentrations of elicitors trigger the hypersensitive response and lead to cell death (Park et al. 2020). Yeast extract has been applied to enhance growth and secondary metabolite production in cell and hairy root cultures of different plants such as Artemisia annua (Putalun et al. 2007) and Salvia miltiorrhiza (Yan et al. 2006). In this study, the highest fresh and dry cell weight was obtained 6 days after the addition of 100 mg/L of yeast extract. The positive effect of yeast extract on cell and plant growth and on biomass has been reported in several studies: Krstić-Milošević et al. (2017) reported that biotic elicitors such as yeast extract improved root growth and biomass production in Gentiana dinarica, and Bayraktar et al. (2016) enhanced biomass production of Stevia rebaudiana with yeast extract.
The highest azadirachtin accumulation and production were obtained with 25 mg/L of yeast extract at 2 and 4 days, the maximum mevalonic acid accumulation and production were observed 2 days after the addition of 50 mg/L of yeast extract, and the highest squalene accumulation and production were achieved 4 days after application of 50 mg/L of yeast extract. A stimulating effect of yeast extract on isoflavonoid production in Pueraria candollei hairy root cultures has been reported (Udomsuk et al. 2011). Rani et al. (2020) showed that yeast extract enhances daidzein and genistein production in cell cultures of Pueraria candollei. Sharma and Agrawal (2018) produced 544.60 µg/g DW of plumbagin in root cultures of Plumbago zeylanica L. by using 100 mg/L of yeast extract. Elicitation of Helianthus tuberosus L. with 0.25 mg/L of yeast extract increased inulin content 1.18-fold compared with the control (Ma et al. 2017). Park et al. (2016) observed the highest amount of rosmarinic acid (4.98 mg/g) using 750 mg/L of yeast extract in Agastache rugosa cell culture. Szopa et al. (2018) increased lignan production in micro-shoot cultures of Schisandra chinensis by elicitation with 5000 mg/L of yeast extract on the first day of the growth period and with 1000 and 3000 mg/L on the 20th day.
Elicitors activate the plant's defense response, causing consecutive cellular and molecular events and activating biosynthetic genes involved in the production of secondary metabolites (Jiao et al. 2016). Various studies have shown enhanced accumulation of different secondary metabolites under elicitors such as yeast extract (van der Heijden et al. 1989; Yoon et al. 2000). For example, in hairy root cultures of Artemisia annua L., application of fungal elicitors enhanced artemisinin production and up-regulated the expression of mevalonate and methylerythritol phosphate pathway biosynthetic genes (Ahlawat et al. 2014). These results are consistent with our findings. In this study, the qRT-PCR analysis showed that the maximum relative expression of SQS1 and MOF1 was obtained by application of 150 mg/L of yeast extract for 4 days and 25 mg/L of yeast extract for 2 days, respectively. In cell cultures of Uncaria tomentosa and Tabernaemontana divaricata, the activities of enzymes involved in triterpenoid biosynthesis, such as IDI and SS, are stimulated by elicitors (Flores-Sánchez et al. 2002; Fulton et al. 1994). Ge and Wu (2005) reported that a yeast elicitor stimulated HMGR activity in Salvia miltiorrhiza hairy root cultures. In cell cultures of A. rugosa, the transcript levels of HPPR under yeast extract treatment were higher than in the control (Park et al. 2016).
In conclusion, this is the first study on the optimization of yeast extract concentration and sampling time for the production of azadirachtin, mevalonic acid, and squalene in neem cell suspension cultures and on the expression of the SQS1 and MOF1 genes of neem. The central composite design (CCD) of response surface methodology (RSM) was used to optimize the effects of yeast extract and sampling time for maximum accumulation and production of azadirachtin, mevalonic acid, and squalene. These results show that advances in tissue culture techniques and prediction methods can be applied for the production and enhancement of secondary metabolites, and that neem cell suspension culture is a suitable strategy for the commercial production of secondary metabolites in vitro. In the future, methods for predicting the optimal conditions for maximum production of secondary metabolites could reduce the production time and cost of new drug compounds.
All data generated or analysed during this study are included in this published article and its supplementary information files.
Abd El-Salam M, Mekky H, El-Naggar EMB, Ghareeb D, El-Demellawy M, El-Fiky F (2015) Hepatoprotective properties and biotransformation of berberine and berberrubine by cell suspension cultures of Dodonaea viscosa and Ocimum basilicum. S Afr J Bot 97:191–195. https://doi.org/10.1016/j.sajb.2015.01.005
Açıkgöz MA (2020) Establishment of cell suspension cultures of Ocimum basilicum L. and enhanced production of pharmaceutical active ingredients. Ind Crop Prod 148:112278. https://doi.org/10.1016/j.indcrop.2020.112278
Ahlawat S, Saxena P, Alam P, Wajid S, Abdin MZ (2014) Modulation of artemisinin biosynthesis by elicitors, inhibitor, and precursor in hairy root cultures of Artemisia annua L. J Plant Interact 9:811–824. https://doi.org/10.1080/17429145.2014.949885
Alzohairy MA (2016) Therapeutics role of Azadirachta indica (neem) and their active constituents in diseases prevention and treatment. Evid-Based Compl Alt 2016:7382506. https://doi.org/10.1155/2016/7382506
Baenas N, García-Viguera C, Moreno DA (2014) Elicitation: a tool for enriching the bioactive composition of foods. Molecules 19:13541–13563. https://doi.org/10.3390/molecules190913541
Bayraktar M, Naziri E, Akgun IH, Karabey F, Ilhan E, Akyol B, Bedir E, Gurel A (2016) Elicitor induced stevioside production, in vitro shoot growth, and biomass accumulation in micropropagated Stevia rebaudiana. Plant Cell Tiss Org 127:289–300. https://doi.org/10.1007/s11240-016-1049-7
Borkotoky S, Banerjee M (2020) A computational prediction of SARS-CoV-2 structural protein inhibitors from Azadirachta indica (Neem). J Biomol Struct Dyn 1–11. https://doi.org/10.1080/07391102.2020.1774419
Cheng Q, He Y, Li G, Liu Y, Gao W, Huang L (2013) Effects of combined elicitors on tanshinone metabolic profiling and SmCPS expression in Salvia miltiorrhiza hairy root cultures. Molecules 18:7473–7485. https://doi.org/10.3390/molecules18077473
Farjaminezhad R, Garoosi G-a (2019) New biological trends on cell and callus growth and azadirachtin production in Azadirachta indica. 3 Biotech 9:309. https://doi.org/10.1007/s13205-019-1836-z
Farjaminezhad R, Garoosi G-a (2020) Establishment of green analytical method for ultrasound-assisted extraction of azadirachtin, mevalonic acid and squalene from cell suspension culture of Azadirachta indica using response surface methodology. Ind Crop Prod 144:111946. https://doi.org/10.1016/j.indcrop.2019.111946
Flores-Sánchez IJ, Ortega-López J, Montes-Horcasitas MdC, Ramos-Valdivia AC (2002) Biosynthesis of sterols and triterpenes in cell suspension cultures of Uncaria tomentosa. Plant Cell Physiol 43:1502–1509. https://doi.org/10.1093/pcp/pcf181
Fulton DC, Kroon PA, Threlfall DR (1994) Enzymological aspects of the redirection of terpenoid biosynthesis in elicitor-treated cultures of Tabernaemontana divaricata. Phytochemistry 35:1183–1186. https://doi.org/10.1016/S0031-9422(00)94818-0
Ge X, Wu J (2005) Tanshinone production and isoprenoid pathways in Salvia miltiorrhiza hairy roots induced by Ag+ and yeast elicitor. Plant Sci 168:487–491. https://doi.org/10.1016/j.plantsci.2004.09.012
Godoy-Hernández G, Vázquez-Flota FA (2006) Growth measurements. In: Loyola-Vargas VM, Vázquez-Flota F (eds) Plant cell culture protocols. Humana Press, Totowa, pp 51–58. https://doi.org/10.1385/1-59259-959-1:051
Gonçalves S, Mansinhos I, Rodríguez-Solana R, Pérez-Santín E, Coelho N, Romano A (2019) Elicitation improves rosmarinic acid content and antioxidant activity in Thymus lotocephalus shoot cultures. Ind Crop Prod 137:214–220. https://doi.org/10.1016/j.indcrop.2019.04.071
Guízar-González C, Monforte-González M, Vázquez-Flota F (2016) Yeast extract induction of sanguinarine biosynthesis is partially dependent on the octadecanoic acid pathway in cell cultures of Argemone mexicana L., the Mexican poppy. Biotechnol Lett 38:1237–1242. https://doi.org/10.1007/s10529-016-2095-2
Jiao J, Gai Q-Y, Wang W, Luo M, Zu Y-G, Fu Y-J, Ma W (2016) Enhanced astragaloside production and transcriptional responses of biosynthetic genes in Astragalus membranaceus hairy root cultures by elicitation with methyl jasmonate. Biochem Eng J 105:339–346. https://doi.org/10.1016/j.bej.2015.10.010
Karalija E, Zeljković SĆ, Parić A (2020) Harvest time-related changes in biomass, phenolics and antioxidant potential in Knautia sarajevensis shoot cultures after elicitation with salicylic acid and yeast. Vitro Cell Dev-Pl 56:177–183. https://doi.org/10.1007/s11627-019-10028-0
Krstić-Milošević D, Janković T, Uzelac B, Vinterhalter D, Vinterhalter B (2017) Effect of elicitors on xanthone accumulation and biomass production in hairy root cultures of Gentiana dinarica. Plant Cell Tiss Org 130:631–640. https://doi.org/10.1007/s11240-017-1252-1
Livak KJ, Schmittgen TD (2001) Analysis of relative gene expression data using real-time quantitative PCR and the 2−ΔΔCT method. Methods 25:402–408. https://doi.org/10.1006/meth.2001.1262
Ma C, Zhou D, Wang H, Han D, Wang Y, Yan X (2017) Elicitation of Jerusalem artichoke (Helianthus tuberosus L.) cell suspension culture for enhancement of inulin production and altered degree of polymerisation. J Sci Food Agric 97:88–94. https://doi.org/10.1002/jsfa.7686
Mäkelä M (2017) Experimental design and response surface methodology in energy applications: a tutorial review. Energ Convers Manage 151:630–640. https://doi.org/10.1016/j.enconman.2017.09.021
Mbah AU, Udeinya IJ, Shu EN, Chijioke CP, Nubila T, Udeinya F, Muobuike A, Mmuobieri A, Obioma MS (2007) Fractionated neem leaf extract is safe and increases cd4+ cell levels in HIV/AIDS patients. Am J Ther 14:369–374. https://doi.org/10.1097/MJT.0b013e3180a72199
Menezes Maciel Bindes M, Hespanhol Miranda Reis M, Luiz Cardoso V, Boffito DC (2019) Ultrasound-assisted extraction of bioactive compounds from green tea leaves and clarification with natural coagulants (chitosan and Moringa oleífera seeds). Ultrason Sonochem 51:111–119. https://doi.org/10.1016/j.ultsonch.2018.10.014
Murthy HN, Lee E-J, Paek K-Y (2014) Production of secondary metabolites from cell and organ cultures: strategies and approaches for biomass improvement and metabolite accumulation. Plant Cell Tiss Organ Cult 118:1–16. https://doi.org/10.1007/s11240-014-0467-7
Namdeo A (2007) Plant cell elicitation for production of secondary metabolites: a review. Pharmacogn Rev 1:69–79
Nazir M, Tungmunnithum D, Bose S, Drouet S, Garros L, Giglioli-Guivarc'h N, Abbasi BH, Hano C, (2019) Differential production of phenylpropanoid metabolites in callus cultures of Ocimum basilicum L. with distinct in vitro antioxidant activities and in vivo protective effects against UV stress. J Agric Food Chem 67:1847–1859. https://doi.org/10.1021/acs.jafc.8b05647
Park WT, Arasu MV, Al-Dhabi NA, Yeo SK, Jeon J, Park JS, Lee SY, Park SU (2016) Yeast extract and silver nitrate induce the expression of phenylpropanoid biosynthetic genes and induce the accumulation of rosmarinic acid in Agastache rugosa cell culture. Molecules 21:426. https://doi.org/10.3390/molecules21040426
Park YJ, Kim JK, Park SU (2020) Yeast extract improved biosynthesis of astragalosides in hairy root cultures of Astragalus membranaceus. Prep Biochem Biotech 1–8. https://doi.org/10.1080/10826068.2020.1830415
Putalun W, Luealon W, De-Eknamkul W, Tanaka H, Shoyama Y (2007) Improvement of artemisinin production by chitosan in hairy root cultures of Artemisia annua L. Biotechnol Lett 29:1143–1146. https://doi.org/10.1007/s10529-007-9368-8
Rafiq M, Dahot MU (2010) Callus and azadirachtin related limonoids production through in vitro culture of neem (Azadirachta indica A. Juss). Afr J Biotechnol 9
Ramachandra Rao S, Ravishankar GA (2002) Plant cell cultures: chemical factories of secondary metabolites. Biotechnol Adv 20:101–153. https://doi.org/10.1016/S0734-9750(02)00007-1
Ramirez-Estrada K, Vidal-Limon H, Hidalgo D, Moyano E, Golenioswki M, Cusidó RM, Palazon J (2016) Elicitation, an effective strategy for the biotechnological production of bioactive high-added value compounds in plant cell factories. Molecules 21:182. https://doi.org/10.3390/molecules21020182
Rani D, Meelaph T, De-Eknamkul W, Vimolmangkang S (2020) Yeast extract elicited isoflavonoid accumulation and biosynthetic gene expression in Pueraria candollei var. mirifica cell cultures. Plant Cell Tiss Organ Cult 141:661–667. https://doi.org/10.1007/s11240-020-01809-2
Roy S, Bhattacharyya P (2020) Possible role of traditional medicinal plant Neem (Azadirachta indica) for the management of COVID-19 infection. Int J Res Pharm Sci 11:122–125. https://doi.org/10.26452/ijrps.v11iSPL1.2256
Sarkar L, Putchala RK, Safiriyu AA, Das Sarma J (2020) Azadirachta indica A. Juss ameliorates mouse hepatitis virus-induced neuroinflammatory demyelination by modulating cell-to-cell fusion in an experimental animal model of multiple sclerosis. Front Cell Neurosci 14. https://doi.org/10.3389/fncel.2020.00116
Sharma U, Agrawal V (2018) In vitro shoot regeneration and enhanced synthesis of plumbagin in root callus of Plumbago zeylanica L. an important medicinal herb. Vitro Cell Dev-Pl 54:423–435. https://doi.org/10.1007/s11627-018-9889-y
Singh T, Sharma U, Agrawal V (2020) Isolation and optimization of plumbagin production in root callus of Plumbago zeylanica L. augmented with chitosan and yeast extract. Ind Crop Prod 151:112446. https://doi.org/10.1016/j.indcrop.2020.112446
Szopa A, Kokotkiewicz A, Król A, Luczkiewicz M, Ekiert H (2018) Improved production of dibenzocyclooctadiene lignans in the elicited microshoot cultures of Schisandra chinensis (Chinese magnolia vine). Appl Microbiol Biot 102:945–959. https://doi.org/10.1007/s00253-017-8640-7
Thomford NE, Senthebane DA, Rowe A, Munro D, Seele P, Maroyi A, Dzobo K (2018) Natural products for drug discovery in the 21st century: innovations for novel drug discovery. Int J Mol Sci 19:1578. https://doi.org/10.3390/ijms19061578
Tiwari V, Darmani NA, Yue BYJT, Shukla D (2010) In vitro antiviral activity of neem (Azardirachta indica L.) bark extract against herpes simplex virus type-1 infection. Phytother Res 24:1132–1140. https://doi.org/10.1002/ptr.3085
Udomsuk L, Jarukamjorn K, Tanaka H, Putalun W (2011) Improved isoflavonoid production in Pueraria candollei hairy root cultures using elicitation. Biotechnol Lett 33:369–374. https://doi.org/10.1007/s10529-010-0417-3
van der Heijden R, Threlfall DR, Verpoorte R, Whitehead IM (1989) Regulation and enzymology of pentacyclic triterpenoid phytoalexin biosynthesis in cell suspension cultures of Tabernaemontana divaricata. Phytochemistry 28:2981–2988. https://doi.org/10.1016/0031-9422(89)80264-X
Vasconsuelo A, Boland R (2007) Molecular aspects of the early stages of elicitation of secondary metabolites in plants. Plant Sci 172:861–875. https://doi.org/10.1016/j.plantsci.2007.01.006
Yan Q, Shi M, Ng J, Wu JY (2006) Elicitor-induced rosmarinic acid accumulation and secondary metabolism enzyme activities in Salvia miltiorrhiza hairy roots. Plant Sci 170:853–858. https://doi.org/10.1016/j.plantsci.2005.12.004
Yoon HJ, Kim HK, Ma C-J, Huh H (2000) Induced accumulation of triterpenoids in Scutellaria baicalensis suspension cultures using a yeast elicitor. Biotechnol Lett 22:1071–1075. https://doi.org/10.1023/A:1005610400511
This work was supported by Imam Khomeini International University (IKIU). Special thanks to Dr. Jafar Ahmadi for the technical support.
Department of Biotechnology, Faculty of Agriculture and Natural Resources, Imam Khomeini International University (IKIU), P. O. Box 288, 34149-16818, Qazvin, Islamic Republic of Iran
Reza Farjaminezhad & Ghasemali Garoosi
Reza Farjaminezhad
Ghasemali Garoosi
RF wrote the manuscript and carried out experiments and data analysis. GG supervised the experiment and conducted an English revision. Both authors approved its final version.
Correspondence to Ghasemali Garoosi.
This article does not contain any studies with human participants or animals performed by any of the authors.
The authors declare no competing interest.
Additional file 1: Table S1. Effect of different concentrations of yeast extract and sampling times on measured indices of cell suspension culture of neem.
Farjaminezhad, R., Garoosi, G. Improvement and prediction of secondary metabolites production under yeast extract elicitation of Azadirachta indica cell suspension culture using response surface methodology. AMB Expr 11, 43 (2021). https://doi.org/10.1186/s13568-021-01203-x
Azadirachtin
Callus induction
Medicinal plant | CommonCrawl |
Guy Hirsch
Guy Hirsch (20 September 1915 – 4 August 1993)[1] was a Belgian mathematician and philosopher of mathematics, who worked on algebraic topology and epistemology of mathematics.
Guy Hirsch
Born: 20 September 1915, London
Died: 4 August 1993, Brussels (aged 77)
Alma mater: Université libre de Bruxelles
Known for: algebraic topology
Scientific career
Fields: Mathematics
Institutions: Université libre de Bruxelles
Doctoral advisor: Alfred Errera
He became a member of the Royal Flemish Academy of Belgium for Science and the Arts in 1973.
He is known for the Leray–Hirsch theorem, a basic result on the algebraic topology of fiber bundles that he proved independently of Jean Leray in the late 1940s.
References
1. "In Memoriam Guy Hirsch (1915–1993)". Historia Mathematica. 22: 345–346. 1995. doi:10.1006/hmat.1995.1029.
External links
• Guy Hirsch at the Mathematics Genealogy Project
| Wikipedia |
Paterson's worms
Paterson's worms are a family of cellular automata devised in 1971 by Mike Paterson and John Horton Conway to model the behaviour and feeding patterns of certain prehistoric worms. In the model, a worm moves between points on a triangular grid along line segments, representing food. Its turnings are determined by the configuration of eaten and uneaten line segments adjacent to the point at which the worm currently is. Despite being governed by simple rules the behaviour of the worms can be extremely complex, and the ultimate fate of one variant is still unknown.
The worms were studied in the early 1970s by Paterson, Conway and Michael Beeler, described by Beeler in June 1973,[1] and presented in November 1973 in Martin Gardner's "Mathematical Games" column in Scientific American.[2]
Electronic Arts' 1983 game Worms? is an interactive implementation of Paterson's worms, where each time a worm has to turn in a way that it lacks a rule for, it stops and lets the user choose a direction, which sets that rule for that worm.
History
Paterson's worms are an attempt to simulate the behaviour of prehistoric worms. These creatures fed upon sediment at the bottom of ponds and avoided retracing paths they had already travelled because food would be scarce there but, because food occurred in patches, it was in the worm's interest to stay near previous trails. Different species of worm had different innate rules regarding how close to travelled paths to stay, when to turn, and how sharp a turn to make.[1] In 1969 Raup and Seilacher created computer simulations of the fossilized worm trails, and these simulations inspired Paterson and Conway to develop a simple set of rules to study idealized worms on regular grids.[3]
Conway's original model was a worm on an orthogonal grid but this produced only three different species of worm, all with rather uninteresting behaviour. Paterson considered worms on a triangular grid.[1] Paterson's worms were described by Beeler in a Massachusetts Institute of Technology AI Memo (No. 290) and were presented in November 1973 in Martin Gardner's "Mathematical Games" column in Scientific American,[2] and later reprinted in Gardner 1986.[4] These simulations differed in approach from other cellular automata developed around the same time, which focused on cells and the relationships between them.[5] Simple computer models such as these are too abstract to accurately describe the behaviour of the real creatures, but they do demonstrate that even very simple rules can give rise to patterns resembling their tracks.[6]
Rules
The worm starts at some point of an infinite triangular grid. It starts moving along one of the six gridlines that meet at each point[6] and, once it has travelled one unit of distance, it arrives at a new point. The worm then decides, based on the distribution of traversed and untraversed gridlines, what direction it will take. The directions are relative to the worm's point of view. If the worm has not encountered this exact distribution before it may leave along any untraversed gridline. From then on, if it encounters that distribution again, it must move in the same way. If there are no untraversed gridlines available, the worm dies and the simulation ends.[1]
Discussion
There are many different types of worm depending on which direction they turn when encountering a new type of intersection. The different varieties of worm can be classified systematically by assigning every direction a number and listing the choice made every time a new type of intersection is encountered.[7]
The six directions are numbered as follows:
So direction 0 indicates the worm continues to travel straight ahead, direction 1 indicates the worm will make a right turn of 60° and similarly for the other directions. The worm cannot travel in direction 3 because that is the gridline it has just traversed. Thus a worm with rule {1,0,5,1} decides to travel in direction 1 the first time it has to make a choice, in direction 0 the next time it has to make a choice and so on. If there is only one available gridline, the worm has no choice but to take it and this is usually not explicitly listed.
A worm whose ruleset begins with 0 continues in a straight line forever. This is a trivial case, so it is usually stipulated that the worm must turn when it encounters a point with only uneaten gridlines. Furthermore, to avoid mirror-image symmetrical duplicates, the worm's first turn must be a right hand turn.[1] A worm dies if it returns to its origin a third time, because there are then no untraversed edges available. Only the origin can be lethal to the worm.[8]
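A minimal simulation of these rules is sketched below in Python. It represents the triangular grid in axial coordinates, keys each decision on the relative configuration of eaten edges around the current point, and consumes the rule list in the order new configurations are met. The handedness of a "+1" turn and the exact state encoding are conventions of this sketch and may differ in detail from Beeler's enumeration; the mirror-image worm behaves identically up to reflection.

```python
# Minimal sketch of a Paterson's worm on a triangular grid, using the relative
# direction encoding described above (0 = straight on, 3 = back the way it came).

# Six lattice directions in cyclic order, 60 degrees apart (axial hex coordinates).
DIRS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def run_worm(rule, max_steps=1_000_000):
    """Simulate a worm given its rule list, e.g. [1, 0, 5, 1]. Returns (steps, died)."""
    node, heading = (0, 0), 0
    eaten = set()                 # eaten edges, stored as frozensets of endpoints
    decisions = {}                # configuration -> chosen relative direction
    rule = list(rule)
    # The first segment is arbitrary by symmetry: eat the edge due "east" of the origin.
    nxt = (1, 0)
    eaten.add(frozenset((node, nxt)))
    node = nxt
    for step in range(1, max_steps):
        neighbours = {r: (node[0] + DIRS[(heading + r) % 6][0],
                          node[1] + DIRS[(heading + r) % 6][1]) for r in range(6)}
        config = tuple(frozenset((node, neighbours[r])) in eaten for r in range(6))
        free = [r for r in range(6) if not config[r]]
        if not free:
            return step, True                     # no uneaten edge left: the worm dies
        if len(free) == 1:
            r = free[0]                           # forced move, no rule consumed
        elif config in decisions:
            r = decisions[config]                 # previously seen configuration
        else:
            r = rule.pop(0) if rule else free[0]  # consume next rule for a new state
            decisions[config] = r
        nxt = neighbours[r]
        eaten.add(frozenset((node, nxt)))
        node, heading = nxt, (heading + r) % 6
    return max_steps, False

steps, died = run_worm([1, 0, 5, 1], max_steps=100_000)
print("rule {1,0,5,1}:", "died" if died else "still alive", "after", steps, "steps")
```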
There are 1,296 possible combinations of worm rules.[4] This can be seen by the following argument:
1. If the worm encounters a node with no eaten segments, other than the one it has just eaten, it can either make a sharp turn or a gentle one. This is the situation shown in the figure above. Since the initial choice of left or right produce combinations that simply mirrors of each other, they are not effectively different.
2. If it encounters a node with one eaten segment, it can leave along any of the remaining four. Only the worm's first return to the origin has this character.
3. For two eaten segments, the location of the eaten segments is important. The only type of two-segment intersections that can exist is that produced by the first rule, for which there are four distinct approach directions, each of which offers a choice of three departure directions. This allows for 81 different alternatives in choosing rules.
4. If the worm returns to the origin, it will encounter three eaten segments and must choose between the two remaining uneaten ones regardless of their distribution.
5. For four eaten segments, there is only one uneaten segment left and the worm must take it.
There are therefore 2×4×81×2×1=1,296 different combinations of rules. Many of these are mirror-image duplicates of others, and others die before having to make all the choices in their ruleset, leaving 411 distinct species (412 if the infinite straight-line worm is included).[8] 336 of these species eventually die. 73 patterns exhibit infinite behaviour, that is, they settle into a repeating pattern that does not return to the origin. A further two are strongly believed to be infinite and one remains unsolved. Eleven of the rules exhibit complicated behaviour. They do not die even after many billions of iterations, nor do they adopt an obviously infinite pattern. Their ultimate fate was unknown until 2003 when Benjamin Chaffin developed new methods of solving them. After many hours of computer time, nine of the eleven rules were solved, leaving the worms with rules {1,0,4,2,0,2,0} and {1,0,4,2,0,1,5}.[7] The first of these was solved by Tomas Rokicki, who determined that it halts after 57 trillion (5.7×10^13) timesteps, leaving only {1,0,4,2,0,1,5} unsolved. According to Rokicki, the worm is still active after 5.2×10^19 timesteps. He used an algorithm based on Bill Gosper's Hashlife to simulate the worms at extraordinary speeds.[8] This behaviour is considerably more complex than the related rectangular grid worm, which has a longest path of only 16 segments.[6]
It is possible for two different species of worm to produce the same path, though they do not necessarily traverse it in the same order.[1] The most common path is also the shortest: the seven point MOT test/fallout shelter symbol.[4] One example of this path is shown in the animated figure above. In total there are 299 different paths, and 209 of these are produced by just one species.[1]
See also
• Busy beaver – Theory of computation
• Langton's ant – Two-dimensional Turing machine with emergent behavior
• Turing machine – Computation model defining an abstract machine
• Turmite – Turing machine on a two-dimensional grid
References
1. Beeler, Michael (June 1973). "Paterson's Worm". Artificial Intelligence Memo. No. 290. Massachusetts Institute of Technology. hdl:1721.1/6210.
2. Gardner, Martin (November 1973). "Mathematical Games: Fantastic patterns traced by programmed 'worms'". Scientific American. 229 (5): 116–123. doi:10.1038/scientificamerican1173-116.
3. "Paterson's Worms". WolframMathworld. Retrieved 2008-08-15.
4. Gardner, Martin (1986), Knotted doughnuts and other mathematical entertainments, W. H. Freeman, Bibcode:1986kdom.book.....G, ISBN 978-0-7167-1799-7, MR 0857289
5. Parikka, Jussi (2007). Digital Contagions: a Media Archaeology of Computer Viruses. New York: Peter Lang Publishing. p. 234. ISBN 978-1-4331-0093-2.
6. Hayes, Brian (September–October 2003). "In Search of the Optimal Scumsucking Bottomfeeder". American Scientist. 95 (5): 392–396. doi:10.1511/2003.5.392.
7. Pegg Jr., Ed (October 27, 2003). "Math Games: Paterson's Worms Revisited". MAA Online. Archived from the original on 2004-03-23. Retrieved 2008-08-15.
8. Chaffin, Benjamin. "Paterson's Worms". Archived from the original on June 7, 2011.
External links
• Rokicki's worm page
• Sven's worm page
| Wikipedia |
Przemo
A list of Multiple Zeta values of depth three
sequences-and-series zeta-functions asked Jun 15 '17 at 17:03
math.stackexchange.com
On a certain integral that involves a product of powers of logarithms.
integration special-functions zeta-functions euler-sums asked Dec 15 '17 at 16:16
What is $\sum\limits_{m=1}^\infty \frac{H_m}{m^6} \cdot (\frac{1}{2})^m$?
sequences-and-series special-functions euler-sums asked Dec 21 '17 at 12:35
Gauge transformation of differential equations.
differential-equations asked Oct 12 '18 at 16:02
Is there a closed form expression for $1 + x + x^{4} + x^{9}+x^{16}+x^{25} +..+x^{k^2}$?
sequences-and-series closed-form asked Oct 9 '13 at 10:37
Limit behavior of a definite integral that depends on a parameter.
integration limits asked Jul 30 '15 at 17:11
Is $\zeta(6,2)$ the lowest multiple zeta function that cannot be expressed in terms of single zeta values?
harmonic-numbers zeta-functions asked May 24 '17 at 17:50
An integral involving powers of logarithms and a dilogarithm.
integration zeta-functions asked May 30 '17 at 16:54
Yet another multivariable integral over a simplex
integration asked Oct 28 '14 at 15:59
Calculating alternating Euler sums of odd powers
Is there a closed form for $\sum_{n=0}^{\infty}{2^{n+1}\over {2n \choose n}}\cdot\left({2n-1\over 2n+1}\right)^2?$
Infinite Series $\sum_{n=1}^\infty\frac{H_n}{n^22^n}$
Finding indefinite integral $\int{ \mathrm dx\over \sqrt{\sin^3 x+\sin (x+\alpha)}}$
Looking for a closed form for a ${}_4 F_3\left(\ldots,1\right)$
Infinite product for Zeta[2]?
mathematica.stackexchange.com | CommonCrawl |
Conway–Maxwell–Poisson distribution
In probability theory and statistics, the Conway–Maxwell–Poisson (CMP or COM–Poisson) distribution is a discrete probability distribution named after Richard W. Conway, William L. Maxwell, and Siméon Denis Poisson that generalizes the Poisson distribution by adding a parameter to model overdispersion and underdispersion. It is a member of the exponential family,[1] has the Poisson distribution and geometric distribution as special cases and the Bernoulli distribution as a limiting case.[2]
Conway–Maxwell–Poisson
Probability mass function
Cumulative distribution function
Parameters $\lambda >0,\nu \geq 0$
Support $x\in \{0,1,2,\dots \}$
PMF ${\frac {\lambda ^{x}}{(x!)^{\nu }}}{\frac {1}{Z(\lambda ,\nu )}}$
CDF $\sum _{i=0}^{x}\Pr(X=i)$
Mean $\sum _{j=0}^{\infty }{\frac {j\lambda ^{j}}{(j!)^{\nu }Z(\lambda ,\nu )}}$
Median No closed form
Mode See text
Variance $\sum _{j=0}^{\infty }{\frac {j^{2}\lambda ^{j}}{(j!)^{\nu }Z(\lambda ,\nu )}}-\operatorname {mean} ^{2}$
Skewness Not listed
Ex. kurtosis Not listed
Entropy Not listed
MGF ${\frac {Z(e^{t}\lambda ,\nu )}{Z(\lambda ,\nu )}}$
CF ${\frac {Z(e^{it}\lambda ,\nu )}{Z(\lambda ,\nu )}}$
Background
The CMP distribution was originally proposed by Conway and Maxwell in 1962[3] as a solution to handling queueing systems with state-dependent service rates. It was introduced into the statistics literature by Boatwright et al. (2003)[4] and Shmueli et al. (2005).[2] The first detailed investigation into the probabilistic and statistical properties of the distribution was published by Shmueli et al. (2005).[2] Some theoretical probability results for the COM-Poisson distribution, especially characterizations of the distribution, were studied and reviewed by Li et al. (2019).[5]
Probability mass function and basic properties
The CMP distribution is defined to be the distribution with probability mass function
$P(X=x)=f(x;\lambda ,\nu )={\frac {\lambda ^{x}}{(x!)^{\nu }}}{\frac {1}{Z(\lambda ,\nu )}}.$
where
$Z(\lambda ,\nu )=\sum _{j=0}^{\infty }{\frac {\lambda ^{j}}{(j!)^{\nu }}}.$
The function $Z(\lambda ,\nu )$ serves as a normalization constant so the probability mass function sums to one. Note that $Z(\lambda ,\nu )$ does not have a closed form.
The domain of admissible parameters is $\lambda >0$, $\nu >0$, together with the boundary case $0<\lambda <1$, $\nu =0$.
The additional parameter $\nu $, which does not appear in the Poisson distribution, allows for adjustment of the rate of decay. This rate of decay is a non-linear decrease in the ratios of successive probabilities, specifically
${\frac {P(X=x-1)}{P(X=x)}}={\frac {x^{\nu }}{\lambda }}.$
When $\nu =1$, the CMP distribution becomes the standard Poisson distribution and as $\nu \to \infty $, the distribution approaches a Bernoulli distribution with parameter $\lambda /(1+\lambda )$. When $\nu =0$ the CMP distribution reduces to a geometric distribution with probability of success $1-\lambda $ provided $\lambda <1$.[2]
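Since $Z(\lambda ,\nu )$ has no closed form in general, it is usually evaluated by truncating the series. The sketch below does this and uses it to compute the probability mass function; the truncation tolerance and term cap are pragmatic choices, not part of the definition.

```python
# Sketch: evaluating the CMP normalizing constant and pmf by truncating the series.
# The truncation tolerance and cap are pragmatic choices, not part of the definition.
import math

def cmp_log_Z(lam, nu, tol=1e-12, max_terms=10_000):
    """log Z(lambda, nu) via direct summation of lambda^j / (j!)^nu."""
    log_term, total = 0.0, 1.0          # j = 0 term
    for j in range(1, max_terms):
        log_term += math.log(lam) - nu * math.log(j)
        term = math.exp(log_term)
        total += term
        if term < tol * total:
            break
    return math.log(total)

def cmp_pmf(x, lam, nu):
    log_p = x * math.log(lam) - nu * math.lgamma(x + 1) - cmp_log_Z(lam, nu)
    return math.exp(log_p)

# Sanity checks: the Poisson special case (nu = 1), and normalization for nu = 0.5.
lam = 3.5
print(cmp_pmf(2, lam, 1.0), lam**2 * math.exp(-lam) / 2)   # should agree
print(sum(cmp_pmf(x, 2.0, 0.5) for x in range(200)))        # ~1.0
```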
For the CMP distribution, moments can be found through the recursive formula [2]
$\operatorname {E} [X^{r+1}]={\begin{cases}\lambda \,\operatorname {E} [(X+1)^{1-\nu }]&{\text{if }}r=0\\\lambda \,{\frac {d}{d\lambda }}\operatorname {E} [X^{r}]+\operatorname {E} [X]\operatorname {E} [X^{r}]&{\text{if }}r>0.\end{cases}}$
Cumulative distribution function
For general $\nu $, there does not exist a closed form formula for the cumulative distribution function of $X\sim \mathrm {CMP} (\lambda ,\nu )$. If $\nu \geq 1$ is an integer, we can, however, obtain the following formula in terms of the generalized hypergeometric function:[6]
$F(n)=P(X\leq n)=1-{\frac {_{1}F_{\nu -1}(;n+2,\ldots ,n+2;\lambda )}{{\{(n+1)!\}^{\nu -1}}_{0}F_{\nu -1}(;1,\ldots ,1;\lambda )}}.$
The normalizing constant
Many important summary statistics, such as moments and cumulants, of the CMP distribution can be expressed in terms of the normalizing constant $Z(\lambda ,\nu )$.[2][7] Indeed, the probability generating function is $\operatorname {E} s^{X}=Z(s\lambda ,\nu )/Z(\lambda ,\nu )$, and the mean and variance are given by
$\operatorname {E} X=\lambda {\frac {d}{d\lambda }}{\big \{}\ln(Z(\lambda ,\nu )){\big \}},$
$\operatorname {var} (X)=\lambda {\frac {d}{d\lambda }}\operatorname {E} X.$
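These two identities give a convenient numerical route to the mean and variance once $Z(\lambda ,\nu )$ is evaluated by truncated summation. The sketch below differentiates $\ln Z$ by central finite differences, reusing the cmp_log_Z helper from the sketch above; the step size is an arbitrary choice.

```python
# Sketch: mean and variance via E X = lambda d/dlambda log Z and Var X = lambda d/dlambda E X.
# Requires cmp_log_Z from the truncated-series sketch above; h is a finite-difference step.
def cmp_mean_var(lam, nu, h=1e-5):
    def mean_at(l):
        return l * (cmp_log_Z(l + h, nu) - cmp_log_Z(l - h, nu)) / (2 * h)
    mean = mean_at(lam)
    var = lam * (mean_at(lam + h) - mean_at(lam - h)) / (2 * h)
    return mean, var

print(cmp_mean_var(3.5, 1.0))   # Poisson check: both values should be close to 3.5
```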
The cumulant generating function is
$g(t)=\ln(\operatorname {E} [e^{tX}])=\ln(Z(\lambda e^{t},\nu ))-\ln(Z(\lambda ,\nu )),$
and the cumulants are given by
$\kappa _{n}=g^{(n)}(0)={\frac {\partial ^{n}}{\partial t^{n}}}\ln(Z(\lambda e^{t},\nu )){\bigg |}_{t=0},\quad n\geq 1.$
Whilst the normalizing constant $Z(\lambda ,\nu )=\sum _{i=0}^{\infty }{\frac {\lambda ^{i}}{(i!)^{\nu }}}$ does not in general have a closed form, there are some noteworthy special cases:
• $Z(\lambda ,1)=\mathrm {e} ^{\lambda }$
• $Z(\lambda ,0)=(1-\lambda )^{-1}$
• $\lim _{\nu \rightarrow \infty }Z(\lambda ,\nu )=1+\lambda $
• $Z(\lambda ,2)=I_{0}(2{\sqrt {\lambda }})$, where $I_{0}(x)=\sum _{k=0}^{\infty }{\frac {1}{(k!)^{2}}}{\big (}{\frac {x}{2}}{\big )}^{2k}$ is a modified Bessel function of the first kind.[7]
• For integer $\nu $, the normalizing constant can expressed [6] as a generalized hypergeometric function: $Z(\lambda ,\nu )=_{0}F_{\nu -1}(;1,\ldots ,1;\lambda )$.
Because the normalizing constant does not in general have a closed form, the following asymptotic expansion is of interest. Fix $\nu >0$. Then, as $\lambda \rightarrow \infty $,[8]
$Z(\lambda ,\nu )={\frac {\exp \left\{\nu \lambda ^{1/\nu }\right\}}{\lambda ^{(\nu -1)/2\nu }(2\pi )^{(\nu -1)/2}{\sqrt {\nu }}}}\sum _{k=0}^{\infty }c_{k}{\big (}\nu \lambda ^{1/\nu }{\big )}^{-k},$
where the $c_{j}$ are uniquely determined by the expansion
$\left(\Gamma (t+1)\right)^{-\nu }={\frac {\nu ^{\nu (t+1/2)}}{\left(2\pi \right)^{(\nu -1)/2}}}\sum _{j=0}^{\infty }{\frac {c_{j}}{\Gamma (\nu t+(1+\nu )/2+j)}}.$
In particular, $c_{0}=1$, $c_{1}={\frac {\nu ^{2}-1}{24}}$, $c_{2}={\frac {\nu ^{2}-1}{1152}}\left(\nu ^{2}+23\right)$. Further coefficients are given in.[8]
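The following sketch compares direct summation of $Z(\lambda ,\nu )$ with the first three terms of this asymptotic expansion; the choice of $\lambda $ and $\nu $ is arbitrary, and agreement improves as $\lambda $ grows.

```python
# Sketch: comparing direct summation of Z(lambda, nu) with the leading terms of the
# asymptotic expansion above, using c0 = 1, c1 = (nu^2 - 1)/24,
# c2 = (nu^2 - 1)(nu^2 + 23)/1152.
import math

def z_direct(lam, nu, max_terms=100_000):
    total, log_term = 1.0, 0.0
    for j in range(1, max_terms):
        log_term += math.log(lam) - nu * math.log(j)
        term = math.exp(log_term)
        total += term
        if term < 1e-16 * total:
            break
    return total

def z_asymptotic(lam, nu, order=2):
    c = [1.0, (nu**2 - 1) / 24, (nu**2 - 1) * (nu**2 + 23) / 1152]
    u = nu * lam**(1.0 / nu)
    prefactor = math.exp(u) / (lam**((nu - 1) / (2 * nu))
                               * (2 * math.pi)**((nu - 1) / 2) * math.sqrt(nu))
    return prefactor * sum(c[k] * u**(-k) for k in range(order + 1))

lam, nu = 50.0, 2.0
print(z_direct(lam, nu), z_asymptotic(lam, nu))   # should be close for large lambda
```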
Moments, cumulants and related results
For general values of $\nu $, there do not exist closed-form formulas for the mean, variance and moments of the CMP distribution. We do, however, have the following neat formula.[7] Let $(j)_{r}=j(j-1)\cdots (j-r+1)$ denote the falling factorial. Let $X\sim \mathrm {CMP} (\lambda ,\nu )$, $\lambda ,\nu >0$. Then
$\operatorname {E} [((X)_{r})^{\nu }]=\lambda ^{r},$
for $r\in \mathbb {N} $.
Since in general closed form formulas are not available for moments and cumulants of the CMP distribution, the following asymptotic formulas are of interest. Let $X\sim \mathrm {CMP} (\lambda ,\nu )$, where $\nu >0$. Denote the skewness $\gamma _{1}={\frac {\kappa _{3}}{\sigma ^{3}}}$ and excess kurtosis $\gamma _{2}={\frac {\kappa _{4}}{\sigma ^{4}}}$, where $\sigma ^{2}=\mathrm {Var} (X)$. Then, as $\lambda \rightarrow \infty $,[8]
$\operatorname {E} X=\lambda ^{1/\nu }\left(1-{\frac {\nu -1}{2\nu }}\lambda ^{-1/\nu }-{\frac {\nu ^{2}-1}{24\nu ^{2}}}\lambda ^{-2/\nu }-{\frac {\nu ^{2}-1}{24\nu ^{3}}}\lambda ^{-3/\nu }+{\mathcal {O}}(\lambda ^{-4/\nu })\right),$
$\mathrm {Var} (X)={\frac {\lambda ^{1/\nu }}{\nu }}{\bigg (}1+{\frac {\nu ^{2}-1}{24\nu ^{2}}}\lambda ^{-2/\nu }+{\frac {\nu ^{2}-1}{12\nu ^{3}}}\lambda ^{-3/\nu }+{\mathcal {O}}(\lambda ^{-4/\nu }){\bigg )},$
$\kappa _{n}={\frac {\lambda ^{1/\nu }}{\nu ^{n-1}}}{\bigg (}1+{\frac {(-1)^{n}(\nu ^{2}-1)}{24\nu ^{2}}}\lambda ^{-2/\nu }+{\frac {(-2)^{n}(\nu ^{2}-1)}{48\nu ^{3}}}\lambda ^{-3/\nu }+{\mathcal {O}}(\lambda ^{-4/\nu }){\bigg )},$
$\gamma _{1}={\frac {\lambda ^{-1/2\nu }}{\sqrt {\nu }}}{\bigg (}1-{\frac {5(\nu ^{2}-1)}{48\nu ^{2}}}\lambda ^{-2/\nu }-{\frac {7(\nu ^{2}-1)}{24\nu ^{3}}}\lambda ^{-3/\nu }+{\mathcal {O}}(\lambda ^{-4/\nu }){\bigg )},$
$\gamma _{2}={\frac {\lambda ^{-1/\nu }}{\nu }}{\bigg (}1-{\frac {(\nu ^{2}-1)}{24\nu ^{2}}}\lambda ^{-2/\nu }+{\frac {(\nu ^{2}-1)}{6\nu ^{3}}}\lambda ^{-3/\nu }+{\mathcal {O}}(\lambda ^{-4/\nu }){\bigg )},$
$\operatorname {E} [X^{n}]=\lambda ^{n/\nu }{\bigg (}1+{\frac {n(n-\nu )}{2\nu }}\lambda ^{-1/\nu }+a_{2}\lambda ^{-2/\nu }+{\mathcal {O}}(\lambda ^{-3/\nu }){\bigg )},$
where
$a_{2}=-{\frac {n(\nu -1)(6n\nu ^{2}-3n\nu -15n+4\nu +10)}{24\nu ^{2}}}+{\frac {1}{\nu ^{2}}}{\bigg \{}{\binom {n}{3}}+3{\binom {n}{4}}{\bigg \}}.$
The asymptotic series for $\kappa _{n}$ holds for all $n\geq 2$, and $\kappa _{1}=\operatorname {E} X$.
Moments for the case of integer $\nu $
When $\nu $ is an integer, explicit formulas for moments can be obtained. The case $\nu =1$ corresponds to the Poisson distribution. Suppose now that $\nu =2$. For $m\in \mathbb {N} $,[7]
$\operatorname {E} [(X)_{m}]={\frac {\lambda ^{m/2}I_{m}(2{\sqrt {\lambda }})}{I_{0}(2{\sqrt {\lambda }})}},$
where $I_{r}(x)$ is the modified Bessel function of the first kind.
Using the connecting formula for moments and factorial moments gives
$\operatorname {E} X^{m}=\sum _{k=1}^{m}\left\{{m \atop k}\right\}{\frac {\lambda ^{k/2}I_{k}(2{\sqrt {\lambda }})}{I_{0}(2{\sqrt {\lambda }})}}.$
In particular, the mean of $X$ is given by
$\operatorname {E} X={\frac {{\sqrt {\lambda }}I_{1}(2{\sqrt {\lambda }})}{I_{0}(2{\sqrt {\lambda }})}}.$
Also, since $\operatorname {E} X^{2}=\lambda $, the variance is given by
$\mathrm {Var} (X)=\lambda \left(1-{\frac {I_{1}(2{\sqrt {\lambda }})^{2}}{I_{0}(2{\sqrt {\lambda }})^{2}}}\right).$
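These Bessel-function expressions can be cross-checked against brute-force summation of the $\nu =2$ probability mass function, as in the following sketch (the choice $\lambda =3$ is arbitrary):

```python
import mpmath as mp

lam = mp.mpf(3)
Z = mp.besseli(0, 2 * mp.sqrt(lam))
ratio = mp.besseli(1, 2 * mp.sqrt(lam)) / Z
mean_bessel = mp.sqrt(lam) * ratio
var_bessel = lam * (1 - ratio**2)

pmf = lambda k: lam**k / mp.factorial(k)**2 / Z
mean_direct = mp.nsum(lambda k: k * pmf(k), [0, mp.inf])
var_direct = mp.nsum(lambda k: k**2 * pmf(k), [0, mp.inf]) - mean_direct**2
print(mean_bessel, mean_direct)  # identical up to numerical error
print(var_bessel, var_direct)
```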
Suppose now that $\nu \geq 1$ is an integer. Then [6]
$\operatorname {E} [(X)_{m}]={\frac {\lambda ^{m}}{(m!)^{\nu -1}}}{\frac {_{0}F_{\nu -1}(;m+1,\ldots ,m+1;\lambda )}{_{0}F_{\nu -1}(;1,\ldots ,1;\lambda )}}.$
In particular,
$\operatorname {E} [X]=\lambda {\frac {_{0}F_{\nu -1}(;2,\ldots ,2;\lambda )}{_{0}F_{\nu -1}(;1,\ldots ,1;\lambda )}},$
and
$\mathrm {Var} (X)={\frac {\lambda ^{2}}{2^{\nu -1}}}{\frac {_{0}F_{\nu -1}(;3,\ldots ,3;\lambda )}{_{0}F_{\nu -1}(;1,\ldots ,1;\lambda )}}+\operatorname {E} [X]-(\operatorname {E} [X])^{2}.$
Median, mode and mean deviation
Let $X\sim \mathrm {CMP} (\lambda ,\nu )$. Then the mode of $X$ is $\lfloor \lambda ^{1/\nu }\rfloor $ if $\lambda ^{1/\nu }$ is not an integer. Otherwise, the modes of $X$ are $\lambda ^{1/\nu }$ and $\lambda ^{1/\nu }-1$.[7]
The mean deviation of $X^{\nu }$ about its mean $\lambda $ is given by [7]
$\operatorname {E} |X^{\nu }-\lambda |=2Z(\lambda ,\nu )^{-1}{\frac {\lambda ^{\lfloor \lambda ^{1/\nu }\rfloor +1}}{\lfloor \lambda ^{1/\nu }\rfloor !}}.$
No explicit formula is known for the median of $X$, but the following asymptotic result is available.[7] Let $m$ be the median of $X\sim {\mbox{CMP}}(\lambda ,\nu )$. Then
$m=\lambda ^{1/\nu }+{\mathcal {O}}\left(\lambda ^{1/2\nu }\right),$
as $\lambda \rightarrow \infty $.
Stein characterisation
Let $X\sim {\mbox{CMP}}(\lambda ,\nu )$, and suppose that $f:\mathbb {Z} ^{+}\mapsto \mathbb {R} $ is such that $\operatorname {E} |f(X+1)|<\infty $ and $\operatorname {E} |X^{\nu }f(X)|<\infty $. Then
$\operatorname {E} [\lambda f(X+1)-X^{\nu }f(X)]=0.$
Conversely, suppose now that $W$ is a real-valued random variable supported on $\mathbb {Z} ^{+}$ such that $\operatorname {E} [\lambda f(W+1)-W^{\nu }f(W)]=0$ for all bounded $f:\mathbb {Z} ^{+}\mapsto \mathbb {R} $. Then $W\sim {\mbox{CMP}}(\lambda ,\nu )$.[7]
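The forward direction of this characterisation is easy to verify numerically for a particular bounded test function; the sketch below uses $f(x)=1/(x+1)$ and arbitrary parameter values, evaluating the expectation by truncated summation:

```python
import mpmath as mp

lam, nu = mp.mpf("1.7"), mp.mpf("0.6")
Z = mp.nsum(lambda k: lam**k / mp.factorial(k)**nu, [0, mp.inf])
pmf = lambda k: lam**k / mp.factorial(k)**nu / Z

f = lambda x: 1 / (x + 1)
stein = mp.nsum(lambda k: (lam * f(k + 1) - k**nu * f(k)) * pmf(k), [0, mp.inf])
print(stein)  # approximately 0
```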
Use as a limiting distribution
Let $Y_{n}$ have the Conway–Maxwell–binomial distribution with parameters $n$, $p=\lambda /n^{\nu }$ and $\nu $. Fix $\lambda >0$ and $\nu >0$. Then, $Y_{n}$ converges in distribution to the $\mathrm {CMP} (\lambda ,\nu )$ distribution as $n\rightarrow \infty $.[7] This result generalises the classical Poisson approximation of the binomial distribution. More generally, the CMP distribution arises as a limiting distribution of the Conway–Maxwell–Poisson binomial distribution.[7] Beyond this COM-binomial limit, Zhang et al. (2018)[9] show that the COM-negative binomial distribution with probability mass function
$\mathrm {P} (X=k)={\frac {{{({\frac {\Gamma (r+k)}{k!\Gamma (r)}})}^{\nu }}{p^{k}}{{(1-p)}^{r}}}{\sum \limits _{i=0}^{\infty }{{({\frac {\Gamma (r+i)}{i!\Gamma (r)}})}^{\nu }}{p^{i}}{{(1-p)}^{r}}}}={{\left({\frac {\Gamma (r+k)}{k!\Gamma (r)}}\right)}^{\nu }}{{p^{k}}{{(1-p)}^{r}}}{\frac {1}{C(r,\nu ,p)}},\quad (k=0,1,2,\ldots ),$
converges to a limiting distribution which is the COM-Poisson, as ${r\to +\infty }$.
Related distributions
• If $X\sim \operatorname {CMP} (\lambda ,1)$, then $X$ follows the Poisson distribution with parameter $\lambda $.
• Suppose $\lambda <1$. Then if $X\sim \mathrm {CMP} (\lambda ,0)$, we have that $X$ follows the geometric distribution with probability mass function $P(X=k)=\lambda ^{k}(1-\lambda )$, $k\geq 0$.
• The sequence of random variables $X_{\nu }\sim \mathrm {CMP} (\lambda ,\nu )$ converges in distribution as $\nu \rightarrow \infty $ to the Bernoulli distribution with mean $\lambda (1+\lambda )^{-1}$.
Parameter estimation
There are a few methods of estimating the parameters of the CMP distribution from the data. Two methods will be discussed: weighted least squares and maximum likelihood. The weighted least squares approach is simple and efficient but lacks precision. Maximum likelihood, on the other hand, is precise, but is more complex and computationally intensive.
Weighted least squares
Weighted least squares provides a simple, efficient method to derive rough estimates of the parameters of the CMP distribution and to determine whether the distribution would be an appropriate model. Following the use of this method, an alternative method should be employed to compute more accurate estimates of the parameters if the model is deemed appropriate.
This method uses the relationship of successive probabilities as discussed above. By taking logarithms of both sides of this equation, the following linear relationship arises
$\log {\frac {p_{x-1}}{p_{x}}}=-\log \lambda +\nu \log x$
where $p_{x}$ denotes $\Pr(X=x)$. When estimating the parameters, the probabilities can be replaced by the relative frequencies of $x$ and $x-1$. To determine if the CMP distribution is an appropriate model, these values should be plotted against $\log x$ for all ratios without zero counts. If the data appear to be linear, then the model is likely to be a good fit.
Once the appropriateness of the model is determined, the parameters can be estimated by fitting a regression of $\log({\hat {p}}_{x-1}/{\hat {p}}_{x})$ on $\log x$. However, the basic assumption of homoscedasticity is violated, so a weighted least squares regression must be used. The inverse weight matrix will have the variances of each ratio on the diagonal with the one-step covariances on the first off-diagonal, both given below.
$\operatorname {var} \left[\log {\frac {{\hat {p}}_{x-1}}{{\hat {p}}_{x}}}\right]\approx {\frac {1}{np_{x}}}+{\frac {1}{np_{x-1}}}$
${\text{cov}}\left(\log {\frac {{\hat {p}}_{x-1}}{{\hat {p}}_{x}}},\log {\frac {{\hat {p}}_{x}}{{\hat {p}}_{x+1}}}\right)\approx -{\frac {1}{np_{x}}}$
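The following sketch illustrates the procedure on simulated data (the parameter values, the sample size, the helper cmp_pmf, and the simplification of using only the diagonal variances while ignoring the off-diagonal covariances above are choices made here for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)

def cmp_pmf(lam, nu, kmax=60):
    k = np.arange(kmax + 1)
    logw = k * np.log(lam) - nu * np.array([np.sum(np.log(np.arange(1, j + 1))) for j in k])
    w = np.exp(logw - logw.max())
    return w / w.sum()

# Simulate from CMP(4, 1.5) by sampling the (truncated) pmf directly.
p = cmp_pmf(4.0, 1.5)
sample = rng.choice(len(p), size=20000, p=p)

# Log-ratios of successive relative frequencies, regressed on log x.
counts = np.bincount(sample, minlength=len(p))
xs = np.array([x for x in range(1, len(counts)) if counts[x] > 0 and counts[x - 1] > 0])
y = np.log(counts[xs - 1] / counts[xs])                # estimates of log(p_{x-1}/p_x)
X = np.column_stack([np.ones(len(xs)), np.log(xs)])
w = 1.0 / (1.0 / counts[xs] + 1.0 / counts[xs - 1])    # inverse-variance weights
beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
print("lambda_hat =", np.exp(-beta[0]), "nu_hat =", beta[1])  # rough estimates of 4 and 1.5
```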
Maximum likelihood
The CMP likelihood function is
${\mathcal {L}}(\lambda ,\nu \mid x_{1},\dots ,x_{n})=\lambda ^{S_{1}}\exp(-\nu S_{2})Z^{-n}(\lambda ,\nu )$
where $S_{1}=\sum _{i=1}^{n}x_{i}$ and $S_{2}=\sum _{i=1}^{n}\log x_{i}!$. Maximizing the likelihood yields the following two equations
$\operatorname {E} [X]={\bar {X}}$
$\operatorname {E} [\log X!]={\overline {\log X!}}$
which do not have an analytic solution.
Instead, the maximum likelihood estimates are approximated numerically by the Newton–Raphson method. In each iteration, the expectations, variances, and covariance of $X$ and $\log X!$ are approximated by using the estimates for $\lambda $ and $\nu $ from the previous iteration in the expression
$\operatorname {E} [f(x)]=\sum _{j=0}^{\infty }f(j){\frac {\lambda ^{j}}{(j!)^{\nu }Z(\lambda ,\nu )}}.$
This is continued until convergence of ${\hat {\lambda }}$ and ${\hat {\nu }}$.
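A sketch of this estimation scheme is given below; for brevity it hands the two moment equations to a generic root finder instead of coding the Newton–Raphson update and the required variances and covariances explicitly, and the commented-out call reuses the sample simulated in the weighted least squares sketch above:

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.special import gammaln

def cmp_expectations(lam, nu, kmax=200):
    """E[X] and E[log X!] under CMP(lam, nu), via a truncated sum."""
    k = np.arange(kmax + 1)
    logw = k * np.log(lam) - nu * gammaln(k + 1)
    p = np.exp(logw - logw.max())
    p /= p.sum()
    return np.sum(k * p), np.sum(gammaln(k + 1) * p)

def cmp_mle(data):
    xbar = np.mean(data)
    logfactbar = np.mean(gammaln(np.asarray(data) + 1.0))
    def equations(theta):
        lam, nu = np.exp(theta[0]), theta[1]   # log-parameterize lambda to keep it positive
        ex, elogfact = cmp_expectations(lam, nu)
        return [ex - xbar, elogfact - logfactbar]
    log_lam, nu = fsolve(equations, x0=[np.log(xbar), 1.0])
    return np.exp(log_lam), nu

# print(cmp_mle(sample))   # 'sample' from the previous sketch; roughly recovers (4, 1.5)
```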
Generalized linear model
The basic CMP distribution discussed above has also been used as the basis for a generalized linear model (GLM) using a Bayesian formulation. A dual-link GLM based on the CMP distribution has been developed,[10] and this model has been used to evaluate traffic accident data.[11][12] The CMP GLM developed by Guikema and Coffelt (2008) is based on a reformulation of the CMP distribution above, replacing $\lambda $ with $\mu =\lambda ^{1/\nu }$. The integral part of $\mu $ is then the mode of the distribution. A full Bayesian estimation approach has been used with MCMC sampling implemented in WinBugs with non-informative priors for the regression parameters.[10][11] This approach is computationally expensive, but it yields the full posterior distributions for the regression parameters and allows expert knowledge to be incorporated through the use of informative priors.
A classical GLM formulation for a CMP regression has been developed which generalizes Poisson regression and logistic regression.[13] This takes advantage of the exponential family properties of the CMP distribution to obtain elegant model estimation (via maximum likelihood), inference, diagnostics, and interpretation. This approach requires substantially less computational time than the Bayesian approach, at the cost of not allowing expert knowledge to be incorporated into the model.[13] In addition it yields standard errors for the regression parameters (via the Fisher Information matrix) compared to the full posterior distributions obtainable via the Bayesian formulation. It also provides a statistical test for the level of dispersion compared to a Poisson model. Code for fitting a CMP regression, testing for dispersion, and evaluating fit is available.[14]
The two GLM frameworks developed for the CMP distribution significantly extend the usefulness of this distribution for data analysis problems.
References
1. "Conway–Maxwell–Poisson Regression". SAS Support. SAS Institute, Inc. Retrieved 2 March 2015.
2. Shmueli G., Minka T., Kadane J.B., Borle S., and Boatwright, P.B. "A useful distribution for fitting discrete data: revival of the Conway–Maxwell–Poisson distribution." Journal of the Royal Statistical Society: Series C (Applied Statistics) 54.1 (2005): 127–142.
3. Conway, R. W.; Maxwell, W. L. (1962), "A queuing model with state dependent service rates", Journal of Industrial Engineering, 12: 132–136
4. Boatwright, P., Borle, S. and Kadane, J.B. "A model of the joint distribution of purchase quantity and timing." Journal of the American Statistical Association 98 (2003): 564–572.
5. Li B., Zhang H., Jiao H. "Some Characterizations and Properties of COM-Poisson Random Variables." Communications in Statistics - Theory and Methods, (2019).
6. Nadarajah, S. "Useful moment and CDF formulations for the COM–Poisson distribution." Statistical Papers 50 (2009): 617–622.
7. Daly, F. and Gaunt, R.E. " The Conway–Maxwell–Poisson distribution: distributional theory and approximation." ALEA Latin American Journal of Probability and Mathematical Statistics 13 (2016): 635–658.
8. Gaunt, R.E., Iyengar, S., Olde Daalhuis, A.B. and Simsek, B. "An asymptotic expansion for the normalizing constant of the Conway–Maxwell–Poisson distribution." To appear in Annals of the Institute of Statistical Mathematics (2017+) DOI 10.1007/s10463-017-0629-6
9. Zhang H., Tan K., Li B. "COM-negative binomial distribution: modeling overdispersion and ultrahigh zero-inflated count data." Frontiers of Mathematics in China, 2018, 13(4): 967–998.
10. Guikema, S.D. and J.P. Coffelt (2008) "A Flexible Count Data Regression Model for Risk Analysis", Risk Analysis, 28 (1), 213–223. doi:10.1111/j.1539-6924.2008.01014.x
11. Lord, D., S.D. Guikema, and S.R. Geedipally (2008) "Application of the Conway–Maxwell–Poisson Generalized Linear Model for Analyzing Motor Vehicle Crashes," Accident Analysis & Prevention, 40 (3), 1123–1134. doi:10.1016/j.aap.2007.12.003
12. Lord, D., S.R. Geedipally, and S.D. Guikema (2010) "Extension of the Application of Conway–Maxwell–Poisson Models: Analyzing Traffic Crash Data Exhibiting Under-Dispersion," Risk Analysis, 30 (8), 1268–1276. doi:10.1111/j.1539-6924.2010.01417.x
13. Sellers, K. S. and Shmueli, G. (2010), "A Flexible Regression Model for Count Data", Annals of Applied Statistics, 4 (2), 943–961
14. Code for COM_Poisson modelling, Georgetown Univ.
External links
• Conway–Maxwell–Poisson distribution package for R (compoisson) by Jeffrey Dunn, part of Comprehensive R Archive Network (CRAN)
• Conway–Maxwell–Poisson distribution package for R (compoisson) by Tom Minka, third party package
| Wikipedia |
Smooth scheme
In algebraic geometry, a smooth scheme over a field is a scheme which is well approximated by affine space near any point. Smoothness is one way of making precise the notion of a scheme with no singular points. A special case is the notion of a smooth variety over a field. Smooth schemes play the role in algebraic geometry of manifolds in topology.
Definition
First, let X be an affine scheme of finite type over a field k. Equivalently, X has a closed immersion into affine space An over k for some natural number n. Then X is the closed subscheme defined by some equations g1 = 0, ..., gr = 0, where each gi is in the polynomial ring k[x1,..., xn]. The affine scheme X is smooth of dimension m over k if X has dimension at least m in a neighborhood of each point, and the matrix of derivatives (∂gi/∂xj) has rank at least n−m everywhere on X.[1] (It follows that X has dimension equal to m in a neighborhood of each point.) Smoothness is independent of the choice of immersion of X into affine space.
The condition on the matrix of derivatives is understood to mean that the closed subset of X where all (n−m) × (n − m) minors of the matrix of derivatives are zero is the empty set. Equivalently, the ideal in the polynomial ring generated by all gi and all those minors is the whole polynomial ring.
In geometric terms, the matrix of derivatives (∂gi/∂xj) at a point p in X gives a linear map Fn → Fr, where F is the residue field of p. The kernel of this map is called the Zariski tangent space of X at p. Smoothness of X means that the dimension of the Zariski tangent space is equal to the dimension of X near each point; at a singular point, the Zariski tangent space would be bigger.
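As a concrete illustration of the criterion (a routine computation, using the cuspidal cubic that reappears in the Examples below): let X ⊂ A2 be defined by the single equation g1 = x2 − y3 = 0 over a field k of characteristic not 2 or 3, so n = 2, r = 1, and X has dimension m = 1. The matrix of derivatives is the 1 × 2 matrix (∂g1/∂x, ∂g1/∂y) = (2x, −3y2), which has rank 1 = n − m at every point of X except the origin, where it vanishes and the Zariski tangent space jumps to dimension 2. Hence X is smooth over k away from the origin but is not smooth at (0,0).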
More generally, a scheme X over a field k is smooth over k if each point of X has an open neighborhood which is a smooth affine scheme of some dimension over k. In particular, a smooth scheme over k is locally of finite type.
There is a more general notion of a smooth morphism of schemes, which is roughly a morphism with smooth fibers. In particular, a scheme X is smooth over a field k if and only if the morphism X → Spec k is smooth.
Properties
A smooth scheme over a field is regular and hence normal. In particular, a smooth scheme over a field is reduced.
Define a variety over a field k to be an integral separated scheme of finite type over k. Then any smooth separated scheme of finite type over k is a finite disjoint union of smooth varieties over k.
For a smooth variety X over the complex numbers, the space X(C) of complex points of X is a complex manifold, using the classical (Euclidean) topology. Likewise, for a smooth variety X over the real numbers, the space X(R) of real points is a real manifold, possibly empty.
For any scheme X that is locally of finite type over a field k, there is a coherent sheaf Ω1 of differentials on X. The scheme X is smooth over k if and only if Ω1 is a vector bundle of rank equal to the dimension of X near each point.[2] In that case, Ω1 is called the cotangent bundle of X. The tangent bundle of a smooth scheme over k can be defined as the dual bundle, TX = (Ω1)*.
Smoothness is a geometric property, meaning that for any field extension E of k, a scheme X is smooth over k if and only if the scheme XE := X ×Spec k Spec E is smooth over E. For a perfect field k, a scheme X is smooth over k if and only if X is locally of finite type over k and X is regular.
Generic smoothness
A scheme X is said to be generically smooth of dimension n over k if X contains an open dense subset that is smooth of dimension n over k. Every variety over a perfect field (in particular an algebraically closed field) is generically smooth.[3]
Examples
• Affine space and projective space are smooth schemes over a field k.
• An example of a smooth hypersurface in projective space Pn over k is the Fermat hypersurface x0d + ... + xnd = 0, for any positive integer d that is invertible in k.
• An example of a singular (non-smooth) scheme over a field k is the closed subscheme x2 = 0 in the affine line A1 over k.
• An example of a singular (non-smooth) variety over k is the cuspidal cubic curve x2 = y3 in the affine plane A2, which is smooth outside the origin (x,y) = (0,0).
• A 0-dimensional variety X over a field k is of the form X = Spec E, where E is a finite extension field of k. The variety X is smooth over k if and only if E is a separable extension of k. Thus, if E is not separable over k, then X is a regular scheme but is not smooth over k. For example, let k be the field of rational functions Fp(t) for a prime number p, and let E = Fp(t1/p); then Spec E is a variety of dimension 0 over k which is a regular scheme, but not smooth over k.
• Schubert varieties are in general not smooth.
Notes
1. The definition of smoothness used in this article is equivalent to Grothendieck's definition of smoothness by Theorems 30.2 and Theorem 30.3 in: Matsumura, Commutative Ring Theory (1989).
2. Theorem 30.3, Matsumura, Commutative Ring Theory (1989).
3. Lemma 1 in section 28 and Corollary to Theorem 30.5, Matsumura, Commutative Ring Theory (1989).
References
• D. Gaitsgory's notes on flatness and smoothness at http://www.math.harvard.edu/~gaitsgde/Schemes_2009/BR/SmoothMaps.pdf
• Hartshorne, Robin (1977), Algebraic Geometry, Graduate Texts in Mathematics, vol. 52, New York: Springer-Verlag, ISBN 978-0-387-90244-9, MR 0463157
• Matsumura, Hideyuki (1989), Commutative Ring Theory, Cambridge Studies in Advanced Mathematics (2nd ed.), Cambridge University Press, ISBN 978-0-521-36764-6, MR 1011461
See also
• Étale morphism
• Dimension of an algebraic variety
• Glossary of scheme theory
• Smooth completion
| Wikipedia |
Triangle $ABC$ has side-lengths $AB = 12, BC = 24,$ and $AC = 18.$ The line through the incenter of $\triangle ABC$ parallel to $\overline{BC}$ intersects $\overline{AB}$ at $M$ and $\overline{AC}$ at $N.$ What is the perimeter of $\triangle AMN?$
$\textbf{(A)}\ 27 \qquad \textbf{(B)}\ 30 \qquad \textbf{(C)}\ 33 \qquad \textbf{(D)}\ 36 \qquad \textbf{(E)}\ 42$
Let $O$ be the incenter of $\triangle{ABC}$. Because $\overline{MO} \parallel \overline{BC}$ and $\overline{BO}$ is the angle bisector of $\angle{ABC}$, we have
\[\angle{MBO} = \angle{CBO} = \angle{MOB} = \frac{1}{2}\angle{MBC}\]
It then follows due to alternate interior angles and base angles of isosceles triangles that $MO = MB$. Similarly, $NO = NC$. The perimeter of $\triangle{AMN}$ then becomes\begin{align*} AM + MN + NA &= AM + MO + NO + NA \\ &= AM + MB + NC + NA \\ &= AB + AC \\ &= \boxed{30} \end{align*} | Math Dataset |
Creative and productive sets
In computability theory, productive sets and creative sets are types of sets of natural numbers that have important applications in mathematical logic. They are a standard topic in mathematical logic textbooks such as Soare (1987) and Rogers (1987).
Definition and example
For the remainder of this article, assume that $\varphi _{i}$ is an admissible numbering of the computable functions and Wi the corresponding numbering of the recursively enumerable sets.
A set A of natural numbers is called productive if there exists a total recursive (computable) function $f$ so that for all $i\in \mathbb {N} $, if $W_{i}\subseteq A$ then $f(i)\in A\setminus W_{i}.$ The function $f$ is called the productive function for $A.$
A set A of natural numbers is called creative if A is recursively enumerable and its complement $\mathbb {N} \setminus A$ is productive. Not every productive set has a recursively enumerable complement, however, as illustrated below.
The archetypal creative set is $K=\{i\mid i\in W_{i}\}$, the set representing the halting problem. Its complement ${\bar {K}}=\{i\mid i\not \in W_{i}\}$ is productive with productive function f(i) = i (the identity function).
To see this, we apply the definition of a productive function and show separately that $i\in {\bar {K}}$ and $i\not \in W_{i}$:
• $i\in {\bar {K}}$: suppose $i\in K$; then $i\in W_{i}$, and since $W_{i}\subseteq {\bar {K}}$ this would give $i\in {\bar {K}}$, a contradiction. So $i\in {\bar {K}}$.
• $i\not \in W_{i}$: indeed, if $i\in W_{i}$ then $i\in K$, contradicting the previous point. So $i\not \in W_{i}$.
Properties
No productive set A can be recursively enumerable, because whenever A contains every number in an r.e. set Wi it contains other numbers, and moreover there is an effective procedure to produce an example of such a number from the index i. Similarly, no creative set can be decidable, because this would imply that its complement, a productive set, is recursively enumerable.
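To illustrate with the archetypal example above: if the productive set ${\bar {K}}$ were recursively enumerable, say ${\bar {K}}=W_{e}$ for some index $e$, then its productive function $f(i)=i$ would give $e=f(e)\in {\bar {K}}\setminus W_{e}=\emptyset $, which is impossible.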
Any productive set has a productive function that is injective and total.
The following theorems, due to Myhill (1955), show that in a sense all creative sets are like $K$ and all productive sets are like ${\bar {K}}$.[1]
Theorem. Let P be a set of natural numbers. The following are equivalent:
• P is productive.
• ${\bar {K}}$ is 1-reducible to P.
• ${\bar {K}}$ is m-reducible to P.
Theorem. Let C be a set of natural numbers. The following are equivalent:
• C is creative.
• C is 1-complete
• C is recursively isomorphic to K, that is, there is a total computable bijection f on the natural numbers such that f(C) = K.
Applications in mathematical logic
The set of all provable sentences in an effective axiomatic system is always a recursively enumerable set. If the system is suitably complex, like first-order arithmetic, then the set T of Gödel numbers of true sentences in the system will be a productive set, which means that whenever W is a recursively enumerable set of true sentences, there is at least one true sentence that is not in W. This can be used to give a rigorous proof of Gödel's first incompleteness theorem, because no recursively enumerable set is productive. The complement of the set T will not be recursively enumerable, and thus T is an example of a productive set whose complement is not creative.
History
The seminal paper of Post (1944) defined the concept he called a Creative set. Reiterating, the set $K$ referenced above and defined as the domain of the function $d(x)=[[x]](x)+1$ that takes the diagonal of all enumerated 1-place computable partial functions and adds 1 to them is an example of a creative set.[2] Post gave a version of Gödel's Incompleteness Theorem using his creative sets, where originally Gödel had in some sense constructed a sentence that could be freely translated as saying "I am unprovable in this axiomatic theory." However, Gödel's proof did not work from the concept of true sentences, and rather used the concept of a consistent theory, which led to the second incompleteness theorem. After Post completed his version of incompleteness he then added the following:
"The conclusion is unescapable that even for such a fixed, well defined body of mathematical propositions, mathematical thinking is, and must remain, essentially creative."[2]
The usual creative set $K$ defined using the diagonal function $d(x)=[[x]](x)+1$ has its own historical development. Alan Turing, in a 1936 article on the Turing machine, showed the existence of a universal computer that computes the $\Phi $ function. The function $\Phi $ is defined such that $\Phi (w,x)=$ (the result of applying the instructions coded by $w$ to the input $x$), and is universal in the sense that any calculable partial function $f$ is given by $f(x)=\Phi (e,x)$ for all $x$, where $e$ codes the instructions for $f$. Using the above notation $\Phi (e,x)=[[e]](x)$, and the diagonal function arises quite naturally as $d(x)=[[x]](x)+1$. Ultimately, these ideas are connected to Church's thesis, which says that the mathematical notion of computable partial functions is the correct formalization of an effectively calculable partial function, and which can neither be proved nor disproved. Church used lambda calculus, Turing an idealized computer, and later Emil Post his own approach; all of these formalizations are equivalent.
Deborah Joseph and Paul Young (1985) formulated an analogous concept, polynomial creativity, in computational complexity theory, and used it to provide potential counterexamples to the Berman–Hartmanis conjecture on isomorphism of NP-complete sets.
Notes
1. Soare (1987); Rogers (1987).
2. Enderton (2010), pp. 79, 80, 120.
References
• Davis, Martin (1958), Computability and unsolvability, Series in Information Processing and Computers, New York: McGraw-Hill, MR 0124208. Reprinted in 1982 by Dover Publications.
• Enderton, Herbert B. (2010), Computability Theory: An Introduction to Recursion Theory, Academic Press, ISBN 978-0-12-384958-8.
• Joseph, Deborah; Young, Paul (1985), "Some remarks on witness functions for nonpolynomial and noncomplete sets in NP", Theoretical Computer Science, 39 (2–3): 225–237, doi:10.1016/0304-3975(85)90140-9, MR 0821203
• Kleene, Stephen Cole (2002), Mathematical logic, Mineola, NY: Dover Publications Inc., ISBN 0-486-42533-9, MR 1950307. Reprint of the 1967 original, Wiley, MR0216930.
• Myhill, John (1955), "Creative sets", Zeitschrift für Mathematische Logik und Grundlagen der Mathematik, 1 (2): 97–108, doi:10.1002/malq.19550010205, MR 0071379.
• Post, Emil L. (1944), "Recursively enumerable sets of positive integers and their decision problems", Bulletin of the American Mathematical Society, 50 (5): 284–316, doi:10.1090/S0002-9904-1944-08111-1, MR 0010514
• Rogers, Hartley, Jr. (1987), Theory of recursive functions and effective computability (2nd ed.), Cambridge, MA: MIT Press, ISBN 0-262-68052-1, MR 0886890.
• Soare, Robert I. (1987), Recursively enumerable sets and degrees: A study of computable functions and computably generated sets, Perspectives in Mathematical Logic, Berlin: Springer-Verlag, ISBN 3-540-15299-7, MR 0882921.
| Wikipedia |
\begin{document}
\numberwithin{equation}{section}
\title{Singular limits of certain Hilbert-Schmidt integral operators} \author{M. Bertola, E. Blackstone, A. Katsevich and A. Tovbis } \thanks{The work of the first author was supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC). The work of the second author was supported in part by the European Research Council, Grant Agreement No. 682537. The work of the third author was supported in part by NSF grants DMS-1615124 and DMS-1906361. The work of the fourth author was supported in part by NSF grant DMS- 2009647.\\ Concordia University, Montreal, Canada (first author), Department of Mathematics, University of Michigan, Ann Arbor, MI 48109-1043 (second author), and Department of Mathematics, University of Central Florida, Orlando, FL 32816-1364 (third and fourth authors)\\ {\it E-mail address:} [email protected], [email protected], [email protected],\\[email protected]} \date{}
\begin{abstract} In this paper we study the small-$\lambda$ spectral asymptotics of an integral operator $\mathscr{K}$ defined on two multi-intervals $J$ and $E$, when the multi-intervals touch each other (but their interiors are disjoint). The operator $\mathscr{K}$ is closely related to the multi-interval Finite Hilbert Transform (FHT). This case can be viewed as a singular limit of self-adjoint Hilbert-Schmidt integral operators with so-called integrable kernels, where the limiting operator is still bounded, but has a continuous spectral component. The regular case when $\text{dist}(J,E)>0$, and $\mathscr{K}$ is of the Hilbert-Schmidt class, was studied in an earlier paper by the authors. The main assumption in this paper is that $U=J\cup E$ is a single interval. We show that the eigenvalues of $\mathscr{K}$, if they exist, do not accumulate at $\lambda=0$. Combined with the results in an earlier paper by the authors, this implies that $H_p$, the subspace of discontinuity (the span of all eigenfunctions) of $\mathscr{K}$, is finite dimensional and consists of functions that are smooth in the interiors of $J$ and $E$. We also obtain an approximation to the kernel of the unitary transformation that diagonalizes $\mathscr{K}$, and obtain a precise estimate of the exponential instability of inverting $\mathscr{K}$. Our work is based on the method of Riemann-Hilbert problem and the nonlinear steepest-descent method of Deift and Zhou. \end{abstract}
\maketitle
\section{Introduction}\label{sec-intro}
Let $J,E$ be multi-intervals, where a multi-interval is defined to be a union of finitely many non-intersecting, closed, possibly unbounded, intervals, each with non empty interior. We assume that $J,E$ are bounded and $\mathring{J}\cap\mathring{E}=\emptyset$, where $\mathring{J}$ denotes the interior of the set $J$. Endpoints of $J$ or $E$ which belong to both $J,E$ are called \textit{double} endpoints. Let \begin{align}\label{kernel K}
K(x,y):=\frac{\chi_J(x)\chi_E(y)-\chi_J(y)\chi_E(x)}{\pi(x-y)}, \end{align} where $\chi_J$ denotes the characteristic function of $J$, and define the operator $\mathscr{K}:L^2(U)\to L^2(U)$, with $U:=J\cup E$, as \begin{align}\label{operator K}
\mathscr{K}[f](x):=\int_{U}K(x,y)f(y)dy. \end{align} The operator $A:L^2(J)\to L^2(E)$, defined as \begin{align}\label{FHT A}
A[f](x)=\frac{1}{\pi}\int_J\frac{f(y)}{y-x}dy, \end{align} is known as the finite Hilbert transform (FHT). Let $A^\dagger$ denote its adjoint. It is simple to verify that $A^\dagger:L^2(E)\to L^2(J)$ is given by \begin{align}
A^\dagger[g](y)=\frac{1}{\pi}\int_E\frac{g(x)}{y-x}dx, \end{align} $\mathscr{K}$ is self-adjoint, $\mathscr{K}=A\oplus A^\dagger$, $\mathscr{K}$ can be represented in matrix form as \begin{equation}\label{K-def}
\mathscr{K}=\begin{bmatrix} 0 & A \\ A^\dagger & 0 \end{bmatrix}:L^2(J)\oplus L^2(E)\to L^2(J)\oplus L^2(E), \end{equation} and \begin{align*}
\mathscr{K}^2=AA^\dagger\oplus A^\dagger A=\begin{bmatrix} AA^\dagger & 0 \\ 0 & A^\dagger A \end{bmatrix}:L^2(J)\oplus L^2(E)\to L^2(J)\oplus L^2(E). \end{align*} The kernel of $\mathscr{K}$ is an `integrable' kernel, and thus it is well-known that the resolvent of $\mathscr{K}$ can be expressed in terms of the solution to a particular Riemann-Hilbert problem (RHP) \cite{IIKS}.
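Indeed, with $f$ and $g$ as in \eqref{vec fg} below, one checks directly that $K(x,y)=\frac{f^t(x)g(y)}{\pi(x-y)}$ and $f^t(z)g(z)\equiv0$, which is the defining property of kernels of IIKS (integrable) type.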
Our main interest is to study spectral properties of $\mathscr{K}$ and the operators $A^\dagger A$ and $A A^\dagger$, which depend in an essential way on the geometry of $J$ and $E$. The case when $U:=J\cup E=\mathbb R$ was considered in \cite{BKT19}, where it was shown that the spectrum of $A^\dagger A$ and $A A^\dagger$ is the segment $[0,1]$, the spectrum is purely absolutely continuous, and its multiplicity equals the number of double endpoints. An explicit diagonalization of these two operators was also discovered, see \cite[Theorem 5.3]{BKT19}. In a subsequent paper \cite{BKT20} the setting is exactly the same as in this paper, i.e. $U=J\cup E \subsetneq \mathbb R$. Using the results of \cite{BKT19}, the following Theorem (\cite[Theorem 2.1]{BKT20}) was established:
\begin{theorem}\label{thm: spec prop} Let $\mathscr K = A \oplus A^\dagger: L^2(U) \to L^2(U)$ be the operator given by \eqref{K-def} and let $\Sp(\mathscr K)$ denote the spectrum of $\mathscr{K}$. Here $U=J\cup E$, and $J,E\subset{\mathbb R}$ are multi-intervals with disjoint interiors. One has: \begin{enumerate} \item $\Sp(\mathscr K)\subseteq [-1,1]$; \item The points $\lambda=\pm 1$ and $\lambda=0$ are not eigenvalues of $\mathscr K$. The eigenvalues of $\mathscr K$, if they exist, are {symmetric with respect to $\lambda=0$ and have finite multiplicities}. Moreover, they can accumulate only at $\lambda=0$; \item If there are $n\ge 1$ ($n\in{\mathbb N}$) double endpoints, then $\Sp_{ac}(\mathscr K) = [-1,1]$, and the multiplicity of $\Sp_{ac}(\mathscr K)$ equals $n$;
\item If there are no double endpoints, then $\mathscr K$ is of trace class. In this case, $\Sp(\mathscr K)$ consists only of eigenvalues and $\lambda=0$, which is the accumulation point of the eigenvalues; \item The singular continuous component is empty, i.e., $\Sp_{sc}(\mathscr K) = \emptyset$. \end{enumerate} \end{theorem} \noindent
A similar Theorem regarding the spectral properties of $A^\dagger A$ and $A A^\dagger$ was also established, see \cite[Theorem 2.4]{BKT20}.
The main goal of this paper is to show that when $U$ consists of a single interval, then the eigenvalues of $\mathscr K$ do not accumulate at zero. Combined with the above Theorem this implies that $\mathscr K$ (and, therefore, both $A^\dagger A$ and $A A^\dagger$) has finitely many eigenvalues. This is achieved by reducing the spectral analysis of $\mathscr K$ to solving a RHP, and finding its approximate solution using the Deift-Zhou steepest descent method. As a byproduct of our analysis, we find an approximation to the kernel of the unitary transformation that diagonalizes $\mathscr{K}$ (and, therefore, $A^\dagger A$, and $A A^\dagger$), and obtain a precise estimate of the exponential instability (or, the degree of ill-posedness) of inverting the operators $\mathscr{K}$, $A^\dagger A$, and $A A^\dagger$.
More precisely, we show that as $\lambda\to0$ ($\varkappa=-\ln|\lambda/2|\to\infty$), to leading order solving the equation $\mathscr{K}f=\phi$ is equivalent to computing integrals of the form \begin{equation}\begin{split}
\int^{\infty}_{*} e^{\varkappa}{\begin{Bmatrix}\cos\\ \sin
\end{Bmatrix}}({\mathfrak g}_{\text{im}}'(x_0)(x-x_0)\varkappa)\tilde G_k(\varkappa)d\varkappa, \end{split}
\end{equation} where $\tilde G_k(\varkappa)$ can be computed from the right-hand side $\phi$ in a stable way (i.e., as a bounded operator between the appropriate $L^2$ spaces). This means that the degree of ill-posedness of inverting $\mathscr{K}$ is $\exp\left(\varkappa/|{\mathfrak g}_{\text{im}}'(x_0)|\right)$. Our result provides a relationship between the instability of inverting $\mathscr{K}$ and the geometry of the problem (location of the intervals $J$ and $E$), which is encapsulated in the function ${\mathfrak g}_{\text{im}}'(x_0)$. According to Proposition \ref{prop: g-function}, the function ${\mathfrak g}_{\text{im}}'$ is positive on $\mathring{E}$, negative on $\mathring{J}$ and has $O((z-b_j)^{-\frac{1}{2}})$ behavior at double points $b_j$. Similar results are valid for inverting the operators $A^\dagger A$ and $A A^\dagger$.
For an interval $I\subset \mathbb R$, let $L^2(I,\mathbb R^n)$ denote the Hilbert space of $n$-dimensional real vector-valued functions defined on $I$: $\vec h(\lambda)=(h_1(\lambda),\dots,h_n(\lambda))$, $\lambda\in I$, with the standard definition of the norm: \begin{equation}\label{hnorm} \Vert \vec h \Vert_{L^2(I,\mathbb R^n)}^2:=\sum_{j=1}^n\int_I h_j^2(\lambda) d\lambda. \end{equation}
Let $H_p, H_{c}, H_{ac}\subset L^2(U)$ denote the subspaces of discontinuity (the span of all eigenfunctions), continuity, and absolute continuity, respectively, with respect to $\mathscr{K}$. By Theorem~\ref{thm: spec prop}, $H_c=H_{ac}$. If $H_p\not=\emptyset$, let $L'$ be the number of distinct positive eigenvalues of $\mathscr{K}$: $0<\lambda_1 < \dots < \lambda_{L'}$. It follows from statement (2) of Theorem~\ref{thm: spec prop} that $-\lambda_j$, $j=1,2,\dots,L'$, are the negative eigenvalues of $\mathscr{K}$. Let $\mathrm {Id}$ denote the identity operator.
\begin{theorem}[main result]\label{thm:main} Assume there are $n\geq1$ double points and no gaps, i.e. $U=J\cup E=[a_1,a_2]$. One has: \begin{enumerate} \item If $H_p\not=\emptyset$, then $2L:=\text{dim}(H_p)<\infty$ and $H_p\subset C^\infty(\mathring{J}\cup\mathring{E})$. \item There exist $n$ bounded operators ${\mathcal Q}_j:\,L^2(U)\to L^2([-1,1],{\mathbb R})$, $1\leq j\leq n$, and a projector $P:L^2(U)\to \mathbb R^{2L}$ such that the operator $T:L^2(U)\to L^2([-1,1],\mathbb{R}^n)\times \mathbb{R}^{2L}$ given by \begin{equation}\label{main-diag-oper} T f=(T_{ac}f, Pf),\ T_{ac}:=({\mathcal Q}_1,\dots,{\mathcal Q}_n):\,L^2(U)\to L^2([-1,1],{\mathbb R}^n), \end{equation} is unitary, and \begin{equation}\begin{split}\label{diag-two-comps}
T \mathscr{K}T^\dagger=&\lambda\mathrm {Id} \text{ on }L^2([-1,1],\mathbb R^n),\\
T \mathscr{K}T^\dagger=&D \text{ on }\mathbb R^{2L}. \end{split} \end{equation} Here $D$ is a diagonal matrix, consisting of the eigenvalues of $\mathscr{K}$ repeated according to their multiplicity. The first line is understood in the sense of operator equality, where $\lambda$ is a multiplication operator $\mathbb R^n\to \mathbb R^n$. The operators ${\mathcal Q}_j$ are not unique. If $H_p=\emptyset$, then $P$ is absent from \eqref{main-diag-oper} (i.e., $T=T_{ac}$), and the second line is absent from \eqref{diag-two-comps}. \item The operators ${\mathcal Q}_j$ can be found so that their kernels $Q_j(x;\lambda)$ satisfy \begin{equation}\label{Qj smoothness} Q_j\in C^{\infty}\left((\mathring{J}\cup\mathring{E})\times \left((-1,1)\setminus\Xi\right)\right), 1\leq j\leq n, \end{equation} where $\Xi$ is a finite set, and, for any closed interval $I\subset \mathring{J}\cup \mathring{E}$, $n\ge 0$, and $\varepsilon>0$, \begin{equation}\label{E phi 3 II}
\sup_{x\in I,\varepsilon<|\lambda|<1}\left|(d/dx)^{n} Q_j(x;\lambda)\right|<\infty. \end{equation}
\item Explicit expressions that approximate $|\lambda|^{1/2}Q_j(x;\lambda)$, $j=1,\ldots,n$, with accuracy $\BigO{\varkappa^{-1}}$, where $\varkappa=-\ln|{\lambda}/{2}|$, are given in Proposition~\ref{Eprime simple} below. \end{enumerate} \end{theorem}
For an interval $I\subset U$, let $\pi_I^\dagger:L^2(I)\to L^2(U)$ denote the extension by zero of a function defined on $I$ to all of $U$. \begin{corollary}\label{cor:main} Assume there are $n\geq1$ double points and no gaps, i.e. $J\cup E=[a_1,a_2]$. The following assertions hold if $I=J$, $B=A^\dagger A$ and if $I=E$, $B=AA^\dagger$.
There exists a projector $P_I:L^2(I)\to \mathbb R^L$ such that the operator $T_I:L^2(I)\to L^2([0,1],\mathbb R^n)\times R^L$ given by \begin{equation}\label{main-diag-K2} T_I f=(T_{ac,I}f, P_I f),\ T_{ac,I}:=T_{ac}\pi_I^\dagger:\,L^2(I)\to L^2([0,1],\mathbb R^n), \end{equation} where $T_{ac}$ is the same as in Theorem~\ref{thm:main}, is unitary, and \begin{equation}\begin{split}\label{diag-two-K2}
T_I BT_I^\dagger=&\lambda^2\mathrm {Id} \text{ on }L^2([0,1],\mathbb R^n),\\
T_I BT_I^\dagger=&D \text{ on }\mathbb R^L. \end{split} \end{equation} Here $D$ is a diagonal matrix, consisting of the squares of positive eigenvalues of $\mathscr{K}$ repeated according to their multiplicity. The first line is understood in the sense of operator equality, where $\lambda^2$ is a multiplication operator $\mathbb R^n\to\mathbb R^n$. If $H_p=\emptyset$, then $P_I$ is absent from \eqref{main-diag-K2} (i.e., $T_I=T_{ac,I}$), and the second line is absent from \eqref{diag-two-K2}. \end{corollary}
The paper is organized as follows. In Section \ref{sec: K + RHP}, we introduce a RHP whose solution is denoted by $\Gamma(z;\lambda)$ and show that the kernel of the resolvent of $\mathscr{K}, AA^\dagger, A^\dagger A$ is expressed in terms of $\Gamma(z;\lambda)$. {We also prove Corollary~\ref{cor:main}, assuming Theorem~\ref{thm:main} holds.}
Section \ref{sec: Gamma} is dedicated to computing the small-$\lambda$ asymptotics of $\Gamma(z;\lambda)$ in various regions of the complex plane. In Section \ref{sec:jump}, we compute the small-$\lambda$ asymptotics of the jump of the kernel of the resolvent $\mathscr{R}$ in the $\lambda$-plane and express said jump as a quadratic form. In Section \ref{sec: exact to approx}, we finish the proof of Theorem~\ref{thm:main}, obtain a small-$\lambda$ approximation to the kernels $|\lambda|^{1/2}Q_j(x;\lambda)$, and compute the degree of ill-posedness of inverting $\mathscr{K}$. Lastly, supplementary material can be found in the Appendices. A beautiful proof that a certain complicated matrix is positive definite, which is based on the calculus of residues, deserves a special mention.
\begin{remark}[Notation] Throughout this text we will encounter several functions which are multi-valued on oriented curves (or union of oriented curves) in the complex plane. Let $\Sigma\subset\mathbb{C}$ be an oriented curve with a $+$ and $-$ side, determined by the orientation, and suppose $f:\mathbb{C}\setminus\Sigma\to\mathbb{C}$ is analytic. For any $w\in\Sigma$, we let $f(w_+)$, $f(w_-)$ denote the limiting value of $f(z)$ as $z\notin\Sigma$ approaches $w\in\Sigma$, non-tangentially, from the $+$, $-$ side of $\Sigma$, respectively. \end{remark}
\section{The resolvent and resolution of the identity of $\mathscr{K}$}\label{sec: K + RHP}
We begin with the RHP for $\Gamma(z;\lambda)$ and show its relation to the operator $\mathscr{K}$. \begin{problem}\label{RHPGamma} Find a $2\times2$ matrix-function $\Gamma(z;\lambda)$ which, for any $\lambda\in\mathbb{C}\setminus[-1,1]$, satisfies: \begin{enumerate}
\item $\Gamma(z;\lambda)$ is analytic for $z\in\overline{\mathbb{C}}\setminus U$,
\item $\Gamma(z;\lambda)$ has the jump condition
\begin{align}\label{gamma jump}
\Gamma(z_+;\lambda)=\Gamma(z_-;\lambda)\left({\bf 1}-\frac{2i}{\lambda}f(z)g^t(z)\right), \qquad z\in \mathring{J}\cup\mathring{E},
\end{align}
where ${\bf 1}$ is the identity matrix, $^t$ denotes transpose, and
\begin{align}\label{vec fg}
f^t(z):=\begin{bmatrix} \chi_E(z) & \chi_J(z) \end{bmatrix}, \qquad g^t(z):=\begin{bmatrix} -\chi_J(z) & \chi_E(z) \end{bmatrix},
\end{align}
\item $\Gamma(\infty;\lambda)={\bf 1}$,
\item $\Gamma(z_\pm;\lambda)\in L^2_{\mathrm{loc}}(U)$. \end{enumerate} \end{problem} Notice that \eqref{gamma jump} is equivalent to \begin{align}
\Gamma(z_+;\lambda)&=\Gamma(z_-;\lambda)\begin{cases}
\begin{bmatrix} 1 & -\frac{2i}{\lambda} \\ 0 & 1 \end{bmatrix}, & z\in \mathring{E}, \\
\begin{bmatrix} 1 & 0 \\ \frac{2i}{\lambda} & 1 \end{bmatrix}, & z\in \mathring{J}.
\end{cases} \end{align} Condition (4) of RHP \ref{RHPGamma} guarantees that if a solution exists (a solution does indeed exist, see below \eqref{res kernel}), it must be unique. For convenience of matrix calculations, throughout the paper we use the Pauli matrices $$ \sigma_1= \begin{bmatrix} 0 & 1\\ 1&0 \end{bmatrix},~~~ \sigma_2= \begin{bmatrix} 0 & -i\\ i&0 \end{bmatrix},~~~ \sigma_3= \begin{bmatrix} 1 & 0\\ 0&-1 \end{bmatrix}. $$ \begin{remark}\label{rem-sym} Due to the uniqueness of the solution to RHP \ref{RHPGamma}, we can easily discover the symmetries \begin{align}
\Gamma(z;\lambda)=\overline{\Gamma(\overline{z};\overline{\lambda})}, \qquad \Gamma(z;\lambda)=\sigma_3\Gamma(z;-\lambda)\sigma_3,\label{sig2 conj} \end{align} because $\Gamma(z;\lambda)$, $\overline{\Gamma(\overline{z};\overline{\lambda})}$, and $\sigma_3\Gamma(z;-\lambda)\sigma_3$ solve RHP \ref{RHPGamma}. Moreover, if we wish to solve the RHP satisfying properties (1), (3), (4) from RHP \ref{RHPGamma} and the jump condition \begin{align*}
\tilde{\Gamma}(z_+;\lambda)=\tilde{\Gamma}(z_-;\lambda)\left({\bf 1}+\frac{2i}{\lambda}g(z)f^t(z)\right), \qquad z\in \mathring{E}\cup\mathring{J}, \end{align*} (i.e. the jump matrix on $\mathring{E},\mathring{J}$ from property (2) of RHP \ref{RHPGamma} have been interchanged) then \begin{align*}
\tilde{\Gamma}(z;\lambda)=\sigma_2\Gamma(z;\lambda)\sigma_2=\Gamma^{-t}(z;\lambda), \end{align*} where $^{-t}$ denote transpose followed by inverse. \end{remark} Let $\mathscr{R}(\lambda):L^2(U)\to L^2(U)$ denote the resolvent of $\mathscr{K}$, defined by \begin{align}\label{res of K}
\mathrm {Id}+\mathscr{R}(\lambda)=\left(\mathrm {Id}-\frac{1}{\lambda}\mathscr{K}\right)^{-1}. \end{align}
\begin{remark} \label{rem-std} The standard form of the resolvent of an operator $T$ is $\mathcal R(\lambda;T)=(\lambda{\mathrm {Id}}-T)^{-1}$ (according to \cite[p. 920, Section X.6]{DS57}). Thus, the `normalized' resolvent of \eqref{res of K} satisfies \begin{align*}
\lambda^{-1}({\mathrm {Id}}+\mathscr{R}(\lambda))=\mathcal R(\lambda;T). \end{align*} \end{remark} It was shown in \cite[Theorem 4.7, Section 4.2]{BKT20} that \begin{enumerate}
\item $\mathscr{R}(\lambda)$ is an integral operator with kernel $R:U\times U\times(\overline{\mathbb{C}}\setminus[-1,1])\to\mathbb{C}$ defined as
\begin{align}\label{res kernel}
R(x,y;\lambda)=\frac{g^t(y)\Gamma^{-1}(y;\lambda)\Gamma(x;\lambda)f(x)}{\pi\lambda(x-y)},
\end{align}
\item RHP \ref{RHPGamma} is solvable if and only if $\mathrm {Id}-\frac{1}{\lambda}\mathscr{K}$ has a bounded inverse { if and only if $\lambda\notin[-1,1]$},
\item The solution of RHP \ref{RHPGamma} $\Gamma(z;\lambda)$ can be continued in $\lambda$ onto the upper, lower shores of $(-1,1)\setminus\{0\}$ and is denoted $\Gamma(z;\lambda_+), \Gamma(z;\lambda_-)$, respectively. \end{enumerate} The resolvent kernel $R(x,y;\lambda)$ has the symmetries $R(x,y;\lambda)=R(y,x;\lambda)$, $\overline{R(x,y;\lambda)}=R(x,y;\overline{\lambda})$ and can be expressed as \begin{multline}\label{full-resol}
R(x,y;\lambda)=\chi_E(x)\chi_E(y)R_{EE}(x,y;\lambda)+\chi_E(x)\chi_J(y)R_{EJ}(x,y;\lambda) \\
+\chi_J(x)\chi_E(y)R_{JE}(x,y;\lambda)+\chi_J(x)\chi_J(y)R_{JJ}(x,y;\lambda), \end{multline} where \begin{align}\label{ker RE RJ}
\begin{split}
R_{EE}(x,y;\lambda)&:=\frac{\begin{vmatrix} \Gamma_{11}(y;\lambda) & \Gamma_{11}(x;\lambda) \\ \Gamma_{21}(y;\lambda) & \Gamma_{21}(x;\lambda) \end{vmatrix}}{\pi\lambda(x-y)}, \qquad R_{EJ}(x,y;\lambda):=\frac{\begin{vmatrix} \Gamma_{12}(y;\lambda) & \Gamma_{11}(x;\lambda) \\ \Gamma_{22}(y;\lambda) & \Gamma_{21}(x;\lambda) \end{vmatrix}}{\pi\lambda(x-y)}, \\
R_{JE}(x,y;\lambda)&:=\frac{\begin{vmatrix} \Gamma_{11}(y;\lambda) & \Gamma_{12}(x;\lambda) \\ \Gamma_{21}(y;\lambda) & \Gamma_{22}(x;\lambda) \end{vmatrix}}{\pi\lambda(x-y)}, \qquad R_{JJ}(x,y;\lambda):=\frac{\begin{vmatrix} \Gamma_{12}(y;\lambda) & \Gamma_{12}(x;\lambda) \\ \Gamma_{22}(y;\lambda) & \Gamma_{22}(x;\lambda) \end{vmatrix}}{\pi\lambda(x-y)}.
\end{split} \end{align} The expressions \eqref{full-resol}, \eqref{ker RE RJ} are simply obtained by expanding \eqref{res kernel}, and the symmetry properties are obtained via Remark \ref{rem-sym}. Let ${\Gamma}_j(z;\lambda)$ denote the $j$th column of the matrix $\Gamma(z;\lambda)$, $j=1,2$. \begin{remark} \label{rem-anal-columns} Note that $\Gamma_2(z_{+};\lambda)=\Gamma_2(z_{-};\lambda)$ on $J$. So, $\Gamma_2(z;\lambda)$ is analytic on and around $J$, whereas $\Gamma_1(z;\lambda)$ is analytic on and around $E$. \end{remark}
Since the second column $\Gamma_2(z;\lambda)$ is analytic on $J$, and the first column $\Gamma_1(z;\lambda)$ is analytic on $E$, the symmetry condition from Remark \ref{rem-sym} yields \begin{equation}\label{GammaLambda symmetry1} {\Gamma}_2(y;\lambda)=\overline{{\Gamma}_2(y;\bar\lambda)}, \qquad {\Gamma}_1(x;\lambda)=\overline{{\Gamma}_1(x;\bar\lambda)} \end{equation} when $y\in J, x\in E$, so that the jump $\Delta_\lambda {\Gamma}_j(z;\lambda)$ of ${\Gamma}_j(z;\lambda)$ ($j=1,2$) over the segment of continuous spectrum on ${\mathbb R}$ when $y\in J, x\in E$ becomes \begin{equation}\label{jump-Gam_2}
\Delta_\lambda{\Gamma}_2(y;\lambda)=-2i\Im{\Gamma}_2(y;\lambda_-), \qquad \Delta_\lambda{\Gamma}_1(x;\lambda)=-2i\Im{\Gamma}_1(x;\lambda_-), \end{equation} where $\Delta_xf(x):=f_+(x)-f_-(x)$. Also notice that \eqref{GammaLambda symmetry1} implies that \begin{align}\label{GammaLambda symmetry2}
\begin{split}
\Re[\Gamma_2(y;\lambda_+)]&=\Re[\Gamma_2(y;\lambda_-)], \qquad \Im[\Gamma_2(y;\lambda_+)]=-\Im[\Gamma_2(y;\lambda_-)], ~~~ y\in J, \\
\Re[\Gamma_1(x;\lambda_+)]&=\Re[\Gamma_1(x;\lambda_-)], \qquad \Im[\Gamma_1(x;\lambda_+)]=-\Im[\Gamma_1(x;\lambda_-)], ~~ x\in E,
\end{split} \end{align} for $\lambda\in(-1,1)\setminus\Sigma$, where \begin{equation} \Sigma:=\{0,\pm\lambda_1,\dots,\pm\lambda_{L'}\}. \end{equation} Thus, $\Sigma$ is the set consisting of all the eigenvalues of $\mathscr{K}$ and $0$. Applying the same considerations to the four kernels of \eqref{ker RE RJ}, we obtain \begin{align}\label{jump-kerr RE RJ} \begin{split}
\Delta_{\lambda}R_{EE}(x,y;\lambda)=\frac{2i\Im\begin{vmatrix} \Gamma_{11}(y;\lambda_-) & \Gamma_{11}(x;\lambda_-) \\ \Gamma_{21}(y;\lambda_-) & \Gamma_{21}(x;\lambda_-) \end{vmatrix}}{-\pi\lambda(x-y)}, \quad \Delta_{\lambda}R_{EJ}(x,y;\lambda)&=\frac{2i\Im\begin{vmatrix} \Gamma_{12}(y;\lambda_-) & \Gamma_{11}(x;\lambda_-) \\ \Gamma_{22}(y;\lambda_-) & \Gamma_{21}(x;\lambda_-) \end{vmatrix}}{-\pi\lambda(x-y)}, \\
\Delta_{\lambda}R_{JE}(x,y;\lambda)=\frac{2i\Im\begin{vmatrix} \Gamma_{11}(y;\lambda_-) & \Gamma_{12}(x;\lambda_-) \\ \Gamma_{21}(y;\lambda_-) & \Gamma_{22}(x;\lambda_-) \end{vmatrix}}{-\pi\lambda(x-y)}, \quad \Delta_{\lambda}R_{JJ}(x,y;\lambda)&=\frac{2i\Im\begin{vmatrix} \Gamma_{12}(y;\lambda_-) & \Gamma_{12}(x;\lambda_-) \\ \Gamma_{22}(y;\lambda_-) & \Gamma_{22}(x;\lambda_-) \end{vmatrix}}{-\pi\lambda(x-y)}, \end{split} \end{align} and thus we have the relation \begin{multline}\label{jump-kerr}
\Delta_\lambda R(x,y;\lambda)=\chi_E(x)\chi_E(y)\Delta_\lambda R_{EE}(x,y;\lambda)+\chi_E(x)\chi_J(y)\Delta_\lambda R_{EJ}(x,y;\lambda) \\
+\chi_J(x)\chi_E(y)\Delta_\lambda R_{JE}(x,y;\lambda)+\chi_J(x)\chi_J(y)\Delta_\lambda R_{JJ}(x,y;\lambda). \end{multline} For future reference, notice that the numerators in \eqref{jump-kerr RE RJ} can be represented as \begin{align}
&2i\left(\det\left[\Re{\Gamma}_i(y;\lambda),\Im{\Gamma}_j(x;\lambda_-)\right]-\det\left[\Re{\Gamma}_j(x;\lambda),\Im{\Gamma}_i(y;\lambda_-)\right]\right), \label{jump-num} \end{align} where $(i,j)=(1,1), (2,1), (1,2), (2,2)$ corresponds to $\Delta_{\lambda}R_{EE}(x,y;\lambda)$, $\Delta_{\lambda}R_{EJ}(x,y;\lambda)$, $\Delta_{\lambda}R_{JE}(x,y;\lambda)$, and $\Delta_{\lambda}R_{JJ}(x,y;\lambda)$, respectively. This follows immediately by writing ${\Gamma}_j=\Re{\Gamma}_j+i\Im{\Gamma}_j$, $j=1,2$, and simplifying.
\begin{proposition}\label{prop: DeltaAbsLambdaRJRE ids} For $\lambda\in(-1,1)\setminus\Sigma$, we have the identities \begin{align*}
\Delta_\lambda R_{EE}(x,y;\lambda)&=\frac{-2i}{\pi\lambda(x-y)}\Im\begin{vmatrix} \Gamma_{11}(y;|\lambda|_-) & \Gamma_{11}(x;|\lambda|_-) \\ \Gamma_{21}(y;|\lambda|_-) & \Gamma_{21}(x;|\lambda|_-) \end{vmatrix}, & & x\in E, ~y\in E, \\
\Delta_\lambda R_{EJ}(x,y;\lambda)&=\frac{-2i}{\pi|\lambda|(x-y)}\Im\begin{vmatrix} \Gamma_{12}(y;|\lambda|_-) & \Gamma_{11}(x;|\lambda|_-) \\ \Gamma_{22}(y;|\lambda|_-) & \Gamma_{21}(x;|\lambda|_-) \end{vmatrix}, & & x\in E, ~y\in J, \\
\Delta_\lambda R_{JE}(x,y;\lambda)&=\frac{-2i}{\pi|\lambda|(x-y)}\Im\begin{vmatrix} \Gamma_{11}(y;|\lambda|_-) & \Gamma_{12}(x;|\lambda|_-) \\ \Gamma_{21}(y;|\lambda|_-) & \Gamma_{22}(x;|\lambda|_-) \end{vmatrix}, & & x\in J, ~y\in E, \\
\Delta_\lambda R_{JJ}(x,y;\lambda)&=\frac{-2i}{\pi\lambda(x-y)}\Im\begin{vmatrix} \Gamma_{12}(y;|\lambda|_-) & \Gamma_{12}(x;|\lambda|_-) \\ \Gamma_{22}(y;|\lambda|_-) & \Gamma_{22}(x;|\lambda|_-) \end{vmatrix}, & & x\in J, ~y\in J, \end{align*}
where $|\lambda|_-:=(-\lambda)_-$ when $\lambda<0$. In particular, $\Delta_{\lambda}R_{EE}(x,y;\lambda)$ and $\Delta_{\lambda}R_{JJ}(x,y;\lambda)$ are odd in $\lambda$, while $\Delta_{\lambda}R_{EJ}(x,y;\lambda)$ and $\Delta_{\lambda}R_{JE}(x,y;\lambda)$ are even in $\lambda$. \end{proposition}
\begin{proof}
For $\lambda>0$ there is nothing to prove, so we only need to consider $\lambda<0$. As is easily seen, if $A$ and $B$ are two functions defined in a neighborhood of some $\lambda_0\in(-1,0)$ and $-\lambda_0$, respectively, and {$A(\tilde\lambda)\equiv B(-\tilde\lambda)$ for any $\tilde\lambda\in\mathbb C\setminus [-1,1]$ close to $\lambda_0$}, then $\Delta_\lambda A(\lambda)|_{\lambda=\lambda_0}=-\Delta_\lambda B(\lambda)|_{\lambda=|\lambda_0|}$. Also, \eqref{sig2 conj} implies that $\Gamma_{ij}(x;-\lambda)=\Gamma_{ij}(x;\lambda)$ if $i=j$, and $\Gamma_{ij}(x;-\lambda)=-\Gamma_{ij}(x;\lambda)$ if $i\not=j$. Thus, replacing $\lambda$ with $-\lambda$ changes the sign of the numerators in the formulas for $R_{EE}$ and $R_{JJ}$, while the numerators in the formulas for $R_{EJ}$ and $R_{JE}$ do not change sign. Combining these two facts with \eqref{jump-kerr RE RJ} we prove the Proposition. \end{proof}
Knowledge of the jump of the resolvent gives access to the \textit{resolution of the identity} (also known as the spectral family) $\mathcal{E}(\lambda):L^2(U)\to L^2(U)$ for $\mathscr{K}$, which satisfies, for any interval $\Delta=(a,b)$, \begin{align}\label{complete res of id} (\mathcal{E}(\Delta)f)(x):=\frac{-1}{2\pi i}\lim_{\delta\to0+}\lim_{\varepsilon\to0+} \int_{a+\delta}^{b-\delta}\left(\frac{\mathscr{R}[f](x;t+i\varepsilon)}{t+i\varepsilon}-\frac{\mathscr{R}[f](x;t-i\varepsilon)}{t-i\varepsilon}\right)dt. \end{align} Here we used \cite[p. 921]{DS57} for resolvents in standard form (see Remark \ref{rem-std}). Let $\mathcal E_{ac}(\lambda)$ be the family of projectors obtained from $\mathcal E(\lambda)$ by replacing the projection onto $H_p$ with the zero projection. In other words, $\mathcal E(\Delta)=\mathcal E_{ac}(\Delta)$ for any interval that contains no eigenvalues of $\mathscr{K}$, and $\mathcal E_{ac}((\lambda_j-\varepsilon,\lambda_j+\varepsilon))\to {\bf 0}$ as $\varepsilon\to 0+$ for each eigenvalue $\lambda_j$. For any interval $\Delta$, our definition implies that $\mathcal E_{ac}(\Delta)f=\mathcal E(\Delta)f$ for any $f\in H_{ac}$, and $\mathcal E_{ac}(\Delta)f=0$ for any $f\in H_p$. We use the subscript ``$ac$'' because $\mathcal E_{ac}$ is closely connected with $\mathscr{K}_{ac}$, the absolutely continuous part of $\mathscr{K}$. Let $E(x,y;\lambda)$ and $E_{ac}(x,y;\lambda)$ denote the kernels of $\mathcal E(\lambda)$ and $\mathcal E_{ac}(\lambda)$, respectively. Throughout the paper we use the notation $E'(x,y;\lambda)=\partial E(x,y;\lambda)/\partial\lambda$, and similarly for $E_{ac}$.
\begin{proposition}\label{prop: smoothness of res of id} \noindent \begin{enumerate} \item One has \begin{equation}\label{Epr decomp} E'(x,y;\lambda)=E_{ac}'(x,y;\lambda)+\sum_j P_j(x,y)\delta(\lambda-\lambda_j), \end{equation} where $P_j\in C^\infty\bigl((\mathring{J}\cup \mathring{E})\times (\mathring{J}\cup \mathring{E})\bigr)$ is the kernel of the projector onto the eigenspace corresponding to $\lambda_j$, and the sum is over all the distinct eigenvalues $\lambda_j$ of $\mathscr{K}$; \item $E_{ac}'(x,y;\lambda)$ is analytic in $(\mathring{J}\cup \mathring{E})\times (\mathring{J}\cup \mathring{E})\times ((-1,1)\setminus \{0\})$;
\item For any closed interval $I\subset \mathring{J}\cup \mathring{E}$, $n_1,n_2\ge 0$, and $\varepsilon>0$, one has \begin{equation}\label{resol-prop bdd}
\sup_{x,y\in I,\varepsilon<|\lambda|<1}\left|(d/dx)^{n_1}(d/dy)^{n_2}E_{ac}'(x,y;\lambda)\right|<\infty. \end{equation} \end{enumerate} \end{proposition}
\begin{proof} Let $\lambda_0$ be a pole of $R(x,y;\lambda)$ (recall that $R$ is defined in \eqref{res kernel}). It follows from \cite[Section 5.1]{BKT20} that $R(x,y;\lambda_\pm)=R_{\lambda_0}^\pm(x,y)/(\lambda-\lambda_0)+O(1)$ near $\lambda_0$. By the symmetry with respect to complex conjugation (see \cite[eq. (3.1)]{Howland1969} and \cite[Remark 4.8]{BKT20}), $\overline{R_{\lambda_0}^+(x,y)}=R_{\lambda_0}^-(x,y)$. Next we use the same argument as in \cite[Proof of Theorem 2]{Howland1969} to show that $R_{\lambda_0}^\pm(x,y)$ are real-valued and equal each other. We have \begin{equation} E'(x,y;\lambda)=-\frac1{2\pi i\lambda}\Delta_\lambda R(x,y;\lambda)=-\frac1{\pi\lambda}\frac{\Im R_{\lambda_0}^+}{\lambda-\lambda_0}+O(1),\ \lambda\to\lambda_0. \end{equation} Since $(\mathcal E'(\lambda)f,f)\ge0$ for any $f\in L^2(U)$ on each side of $\lambda_0$, this implies that $\Im R_{\lambda_0}^\pm(x,y)\equiv 0$. The residue of $R$ at the pole gives the corresponding projector (see \cite[Proof of Theorem 5.5]{BKT20}). Applying the same argument at all poles and using that $E(x,y;\lambda)$ is the kernel of the spectral family of $\mathscr{K}$ proves \eqref{Epr decomp}.
The fact that $E_{ac}'$ is analytic for any $\lambda\in(-1,1)\setminus\{0\}$ follows from \cite[eq. (4.20), proof of Lemma~4.14]{BKT20}. The latter asserts that $R(x,y;\lambda)$ is a meromorphic function of $\lambda$, $\Delta_\lambda R(x,y;\lambda)=R(x,y;\lambda_+)-R(x,y;\lambda_-)$, and the kernels $R(x,y;\lambda_+)$ and $R(x,y;\lambda_-)$ admit analytic continuation in $\lambda$ into the lower and upper half-planes, respectively. The analyticity in $x$ and $y$ follows from the identities of Proposition \ref{prop: DeltaAbsLambdaRJRE ids} and Remark~\ref{rem-anal-columns}.
To prove the Proposition it remains only to establish that the derivatives of $E_{ac}'(x,y;\lambda)$ with respect to $x$ and $y$ remain bounded as $\lambda\to\pm1$. To show this fact, introduce the function \begin{equation} \label{rhosurf} \rho(\lambda) =-\frac 1 2 + \frac 1{ i\pi} \ln \left(\frac {1 - \sqrt{1-\lambda^2}}\lambda \right), \qquad \lambda(\rho)=-\frac{1}{\sin(\pi\rho)}, \end{equation}
that provides a conformal map between the slit $\lambda$-plane $\overline{{\mathbb C}}\setminus [-1,1]$ and the vertical strip $|\Re \rho|<\frac{1}{2}$; $\lambda(\rho)$ is the inverse function. The other strips $|\Re \rho-k|<\frac 1 2$, $k\in {\mathbb Z}\setminus\{0\}$, in the $\rho$-plane are mapped by $\lambda(\rho)$ onto the same slit $\lambda$-plane and represent the various sheets of the branched map $\lambda(\rho)$. Then the desired result is a consequence of the following facts, established in \cite{BKT20}: (1) $R(x,y;\lambda(\rho))$ is meromorphic in $\rho$ on the whole complex plane; and (2) $\lambda=\pm1$ (i.e., $\rho=\mp1/2$) are not poles, see Figure 3, Theorem 4.7, and Lemma 4.14 in \cite{BKT20}. \end{proof}
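Indeed, expanding \eqref{rhosurf} near $\lambda=1$ gives \begin{equation*} \rho(\lambda)+\frac{1}{2}=\frac{1}{i\pi}\ln\left(\frac{1-\sqrt{1-\lambda^2}}{\lambda}\right)=\frac{i}{\pi}\sqrt{1-\lambda^2}+\BigO{1-\lambda^2}, \end{equation*} and similarly near $\lambda=-1$. That is, the change of variable $\lambda\mapsto\rho$ opens up the square-root branch points at $\lambda=\pm1$, which is the mechanism by which analyticity of $R(x,y;\lambda(\rho))$ at $\rho=\mp1/2$ yields the boundedness \eqref{resol-prop bdd} as $\lambda\to\pm1$.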
\begin{corollary}\label{prop: deriv res of id} The kernel $E_{ac}'(x,y;\lambda)$, where $\lambda\in [-1,1]\setminus\Sigma$, is given by \begin{multline}\label{deriv res of id kernel}
E_{ac}'(x,y;\lambda)=\frac{-1}{2\pi i \lambda}\left(\chi_E(x)\chi_E(y)\Delta_\lambda R_{EE}(x,y;\lambda)+\chi_E(x)\chi_J(y)\Delta_\lambda R_{EJ}(x,y;\lambda) \right. \\
\left. +\chi_J(x)\chi_E(y)\Delta_\lambda R_{JE}(x,y;\lambda)+\chi_J(x)\chi_J(y)\Delta_{\lambda}R_{JJ}(x,y;\lambda)\right). \end{multline}
\end{corollary}
\begin{proof} By Proposition~\ref{prop: smoothness of res of id}, all we need to do to find $E_{ac}'(x,y;\lambda)$, $\lambda\in[-1,1]\setminus\Sigma$, is to compute the jump $\Delta_\lambda R(x,y;\lambda)$, and \eqref{jump-kerr}
completes the proof. \end{proof}
Let $\mathscr{R}_E(\lambda^2)$, $\mathscr{R}_{J}(\lambda^2)$ denote the resolvents of $AA^\dagger$, $A^\dagger A$, in the sense of \eqref{res of K}, respectively.
\begin{proposition}\label{prop:Ksq} We have the relations \begin{align}
\mathrm {Id}+\mathscr{R}_{E}(\lambda^2)&=\left(\mathrm {Id}-\frac{1}{\lambda^2}AA^\dagger\right)^{-1}=\mathrm {Id}+\pi_E\mathscr{R}(\lambda)\pi_E, \label{Res_E} \\
\mathrm {Id}+\mathscr{R}_{J}(\lambda^2)&=\left(\mathrm {Id}-\frac{1}{\lambda^2}A^{\dagger}A\right)^{-1}=\mathrm {Id}+\pi_J\mathscr{R}(\lambda)\pi_J, \label{Res_J} \end{align} where $\mathscr{R}$ is given by \eqref{res of K}, and $\pi_E:L^2(U)\to L^2(E)$, $\pi_J:L^2(U)\to L^2(J)$ are the natural projections. Moreover, the kernels of $\mathscr{R}_E$, $\mathscr{R}_J$, are given by $R_{EE}$, $R_{JJ}$, respectively, see \eqref{ker RE RJ}. \end{proposition}
\begin{proof} From \eqref{res of K} we have \begin{align}\label{res sum}
\mathscr{R}(\lambda)=\sum_{j=1}^\infty\left(\frac{\mathscr{K}}{\lambda}\right)^j. \end{align} Using the matrix form of $\mathscr{K}$ \eqref{K-def}, it is clear that the odd powers in \eqref{res sum} are block off-diagonal and the even powers in \eqref{res sum} are block diagonal. The results \eqref{Res_E}, \eqref{Res_J} follow by comparing the expansions of $(\mathrm {Id}-\lambda^{-2}AA^{\dagger})^{-1}$, $\pi_E\mathscr{R}(\lambda)\pi_E$ and $(\mathrm {Id}-\lambda^{-2}A^{\dagger}A)^{-1}$, $\pi_J\mathscr{R}(\lambda)\pi_J$. \end{proof}
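Concretely, writing \eqref{K-def} schematically with respect to the splitting $L^2(U)=L^2(E)\oplus L^2(J)$ as $\mathscr{K}=\begin{bmatrix} 0 & A \\ A^\dagger & 0 \end{bmatrix}$, one has \begin{equation*} \mathscr{K}^{2m}=\begin{bmatrix} (AA^\dagger)^m & 0 \\ 0 & (A^\dagger A)^m \end{bmatrix}, \qquad m\geq1, \end{equation*} so that $\pi_E\mathscr{R}(\lambda)\pi_E=\sum_{m=1}^\infty\lambda^{-2m}(AA^\dagger)^m=\left(\mathrm {Id}-\lambda^{-2}AA^\dagger\right)^{-1}-\mathrm {Id}$, and similarly for the $J$-block, which yields \eqref{Res_E} and \eqref{Res_J}.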
Combining Theorem~\ref{thm:main} and Proposition~\ref{prop:Ksq} proves Corollary~\ref{cor:main}.
\section{Small $\lambda$ asymptotics of $\Gamma(z;\lambda)$}\label{sec: Gamma}
\begin{figure}
\caption{An example of the sets $J$ and $E$ with $n=3$ double points and $2g+2=6$ endpoints.}
\label{fig: J and E}
\end{figure}
We follow the steepest descent method of Deift-Zhou, which was first introduced in \cite{DZ93}. First, we remind the reader of the notation. The set $U=E\cup J$ consists of $g+1$ disjoint intervals. Endpoints of $J$ or $E$ which: \begin{itemize}
\item belong to both $J,E$ are called \textit{double} endpoints or just double points,
\item belong to only $J$ or only $E$ are called \textit{simple} endpoints or just endpoints. \end{itemize} We let $a_1,a_2,\ldots,a_{2g+2}\in\mathbb{R}$, $g\in\{0\}\cup\mathbb{N}$, satisfying $a_1<a_2<\cdots<a_{2g+2}$ denote the simple endpoints of $J$ and $E$; they are the endpoints of $g+1$ disjoint intervals. We let $b_1,b_2,\ldots,b_n\in\mathbb{R}$, $n\in\mathbb{N}$, satisfying $b_1<b_2<\cdots<b_n$, denote the double endpoints of $J$ and $E$; they lie in the interior of the above mentioned $g+1$ disjoint intervals. See Figure \ref{fig: J and E} for an example.
\subsection{Transformations $\Gamma\to Y\to Z$}
To begin, we seek the ${\mathfrak g}$-function ${\mathfrak g}(z)$ which has the following properties: \begin{itemize} \item The function ${\mathfrak g}(z)$ satisfies the jump conditions \begin{align}
{\mathfrak g}(z_+)+{\mathfrak g}(z_-)&=-1, & & z\in \mathring{J}, \label{gJJump} \\
{\mathfrak g}(z_+)+{\mathfrak g}(z_-)&=1, & & z\in \mathring{E}, \label{gEJump} \\
{\mathfrak g}(z_+)-{\mathfrak g}(z_-)&=i\Omega_j, & & z\in(a_{2j},a_{2j+1}), ~ j=1,\dots,g, \label{gGapJump} \end{align} where the constants $\Omega_j$ are to be determined. \item ${\mathfrak g}(z)$ is analytic on $\overline{\mathbb{C}}\setminus U$, \item ${\mathfrak g}(z)\in L^2_{loc}(U)$. \end{itemize} We introduce the characteristic function $\chi$ and radical $R:\mathbb{C}\setminus(J\cup E)\to\mathbb{C}$ \begin{equation} \chi(z)= \left\{ \begin{array}{cc} 1, & \text{if $z\in \mathring{E}$} \\ -1, &\text{if $z\in \mathring{J}$} \end{array} \right. , \qquad R(\zeta)=\prod_{j=1}^{2g+2}(\zeta-a_j)^{\frac{1}{2}}, \label{radical} \end{equation} with the branch of $R$ satisfying $R(z)\sim z^{g+1}$ as $z\rightarrow \infty$. \begin{proposition}\label{prop: g-function} The ${\mathfrak g}$-function is given by \begin{equation}\label{gg} {\mathfrak g}(z)=\frac{R(z)}{2\pi i}\left(\int_{U}\frac{\chi(\zeta) d\zeta}{(\zeta-z)R_+(\zeta)}+ \sum_{j=1}^{g}\int_{a_{2j}}^{a_{2j+1}}\frac{ {i\Omega_j} d\zeta}{(\zeta-z)R(\zeta)}\right), \end{equation} where the constants $\Omega_j\in\mathbb{R}$ are (uniquely) chosen so that ${\mathfrak g}(z)$ is analytic at $z=\infty$. Moreover, \begin{enumerate}
\item ${\mathfrak g}(z)$ is Schwarz symmetric (i.e. ${\mathfrak g}(z)=\overline{{\mathfrak g}(\overline{z})}$),
\item $|\Re[{\mathfrak g}(z)]|<\frac{1}{2}$ for $z\in\overline{\mathbb{C}}\setminus U$,
\item $\Re[{\mathfrak g}(z_\pm)]=\frac{1}{2}$ and $\Im[{\mathfrak g}'(z_{+})]>0$ for $z\in \mathring{E}$,
\item $\Re[{\mathfrak g}(z_\pm)]=-\frac{1}{2}$ and $\Im[{\mathfrak g}'(z_{+})]<0$ for $z\in \mathring{J}$. \end{enumerate} \end{proposition}
\begin{proof} The formula \eqref{gg} follows from the Sokhotski-Plemelj formula. The Schwarz symmetry can now easily be verified via \eqref{gg} because $R(z)$ is Schwarz symmetric. Using the Schwarz symmetry, we have \begin{align*}
{\mathfrak g}(z_+)=\overline{{\mathfrak g}(\overline{z_+})}=\overline{{\mathfrak g}(z_-)}, \qquad z\in\mathring{J}\cup\mathring{E} \end{align*} which immediately implies that \begin{align*}
\Re[{\mathfrak g}(z_+)]=\Re[{\mathfrak g}(z_-)], \qquad \Im[{\mathfrak g}(z_+)]=-\Im[{\mathfrak g}(z_-)], \qquad z\in\mathring{J}\cup\mathring{E}. \end{align*} It now follows from \eqref{gJJump}, \eqref{gEJump} that $\Re[{\mathfrak g}(z_\pm)]=-\frac{1}{2}$ for $z\in \mathring{J}$ and $\Re[{\mathfrak g}(z_\pm)]=\frac{1}{2}$ for $z\in \mathring{E}$. Since $\Re[{\mathfrak g}(z)]$ is harmonic on $\overline{\mathbb{C}}\setminus(J\cup E)$, its maximum and minimum value can only be attained on $J\cup E$. Lastly, consider $z=x+iy\in\mathring{E}$: since $\Re[{\mathfrak g}(z_\pm)]=\frac{1}{2}$ is maximized, we have $\frac{\partial}{\partial y}\Re[{\mathfrak g}(z_\pm)]<0$. Thus, according to the Cauchy-Riemann equations, $0<\frac{\partial}{\partial x}\Im[{\mathfrak g}(z_+)]=\Im[{\mathfrak g}'(z_{+})]$, as desired. The same idea can be applied when $z\in\mathring{J}$ to obtain $\Im[{\mathfrak g}'(z_{+})]<0$. \end{proof}
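For instance, in the one-interval case $g=0$ (the case considered in Sections \ref{sec:jump}-\ref{sec: exact to approx}), there are no gaps $(a_{2j},a_{2j+1})$ and no constants $\Omega_j$, and \eqref{gg} reduces to \begin{equation*} {\mathfrak g}(z)=\frac{R(z)}{2\pi i}\int_{U}\frac{\chi(\zeta)\, d\zeta}{(\zeta-z)R_+(\zeta)}, \qquad R(z)=(z-a_1)^{\frac{1}{2}}(z-a_2)^{\frac{1}{2}}. \end{equation*}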
\begin{figure}\caption{The image of the ball $|\lambda|\leq1$ under the map $\varkappa=\varkappa(\lambda)$.}\label{fig: kappa}
\end{figure}
We introduce the important change of spectral variable given by \begin{align}\label{def: kappa}
\varkappa=\varkappa(\lambda):=-\ln\frac{\lambda}{2}, \end{align}
where the standard branch of the logarithm is taken (i.e. $|\Im\varkappa|\leq\pi$). See Figure \ref{fig: kappa} for the image of a ball under this map. Using this new spectral variable and the ${\mathfrak g}$-function, we introduce the transformation \begin{equation}\label{y-def} Y(z;\varkappa)=e^{-\varkappa {\mathfrak g}(\infty)\sigma_3}\Gamma(z;2e^{-\varkappa}) e^{\varkappa {\mathfrak g}(z)\sigma_3}, \end{equation} which reduces RHP \ref{RHPGamma} to the following RHP.
\begin{problem}\label{RHPY} Find a $2\times 2$ matrix-function $Y(z;\varkappa)$ which, for any {fixed} $\lambda\in\mathbb{C}\setminus[-1,1]$, satisfies: \begin{enumerate} \item $Y(z;\varkappa)$ is analytic on $\overline{{\mathbb C}}\setminus[a_1,a_{2g+2}]$, \item $Y(z;\varkappa)$ satisfies the jump conditions \begin{eqnarray}\label{jumpY} \begin{split} Y(z_+;\varkappa)& =Y(z_-;\varkappa) \begin{bmatrix}
e^{\varkappa ({\mathfrak g}(z_+) -{\mathfrak g}(z_-)) } & 0 \\ {i} e^{\varkappa( {\mathfrak g}(z_+) +{\mathfrak g}(z_-) +1) } & e^{-\varkappa( {\mathfrak g}(z_+) -{\mathfrak g}(z_-)) } \end{bmatrix}, & & z\in \mathring{J},
\\ Y(z_+;\varkappa)&=Y(z_-;\varkappa)\begin{bmatrix} e^{\varkappa ({\mathfrak g}(z_+) -{\mathfrak g}(z_-)) } &
-i e^{-\varkappa( {\mathfrak g}(z_+) + {\mathfrak g}(z_-) -1) } \\ 0 & e^{-\varkappa({\mathfrak g}(z_+) -{\mathfrak g}(z_-)) }\end{bmatrix}, & & z\in \mathring{E},
\\
Y(z_+;\varkappa)&=Y(z_-;\varkappa)e^{i\varkappa\Omega_j \sigma_3}, & & z\in(a_{2j},a_{2j+1}),~ j =1,\dots, g, \end{split} \end{eqnarray} \item non-tangential boundary values of $Y(z;\varkappa)$ from the upper/lower half-planes belong to $L^2_{loc}(U)$, \item $Y(z;\varkappa)={\bf 1}+\BigO{z^{-1}}$ as $z\rightarrow\infty$. \end{enumerate} \end{problem} Notice that the jumps of $Y(z;\varkappa)$ on $\mathring{J}, \mathring{E}$ can be factorized as \begin{align} \begin{split}
\begin{bmatrix}
e^{\varkappa ({\mathfrak g}(z_+) -{\mathfrak g}(z_-)) } & 0 \\ {i} e^{\varkappa( {\mathfrak g}(z_+) +{\mathfrak g}(z_-) +1) } & e^{-\varkappa( {\mathfrak g}(z_+) -{\mathfrak g}(z_-)) } \end{bmatrix}&=\begin{bmatrix} 1 & -ie^{-\varkappa(2{\mathfrak g}(z_-)+1)} \\ 0 & 1 \end{bmatrix}(i\sigma_1)\begin{bmatrix} 1 & -ie^{-\varkappa(2{\mathfrak g}(z_+)+1)} \\ 0 & 1 \end{bmatrix}, \\
\begin{bmatrix} e^{\varkappa ({\mathfrak g}(z_+) -{\mathfrak g}(z_-)) } &
- i e^{-\varkappa( {\mathfrak g}(z_+) + {\mathfrak g}(z_-) -1) } \\ 0 & e^{-\varkappa({\mathfrak g}(z_+) -{\mathfrak g}(z_-)) }\end{bmatrix}&=\begin{bmatrix} 1 & 0 \\ ie^{\varkappa(2{\mathfrak g}(z_-)-1)} & 1 \end{bmatrix}(-i\sigma_1)\begin{bmatrix} 1 & 0 \\ ie^{\varkappa(2{\mathfrak g}(z_+)-1)} & 1 \end{bmatrix}, \end{split} \end{align} respectively, see \eqref{gJJump}, \eqref{gEJump}. We now follow the standard procedure of opening lenses around each subinterval of $J,E$. We call these sets ${\mathcal L_{E,J}^{(\pm)}}$, which are the regions inside the lenses around intervals $E,J$ and in the upper or lower half planes, respectively, see Figure \ref{fig: lenses}. Now introduce the new unknown matrix \begin{eqnarray} Z(z;\varkappa) = \left\{\begin{array}{ll}
Y(z;\varkappa), & \text{ outside the lenses,} \\ \displaystyle Y(z;\varkappa) \left[\begin{matrix} 1 & 0\\ \mp {i}{ {\rm e}^{ \varkappa (2{\mathfrak g}(z) -1)}} & 1 \end{matrix}\right], & z\in {\mathcal L_E^{(\pm)}},\\ \displaystyle Y(z;\varkappa) \left[\begin{matrix} 1 & {\pm i} { {\rm e}^{ -\varkappa (2{\mathfrak g}(z) +1)}} \\ 0& 1 \end{matrix}\right], & z\in \mathcal L_J^{(\pm)}.
\end{array}
\right.
\label{mat-Z} \end{eqnarray} We verify that $Z(z;\varkappa)$ is the solution to the following RHP:
\begin{figure}
\caption{The lenses $\mathcal{L}_{J,E}^{(\pm)}$ with $g=1$ and $n=2$ double points.}
\label{fig: lenses}
\end{figure}
\begin{problem}\label{RHPZ} Find a $2\times2$ matrix-function $Z(z;\varkappa)$ which, for any {fixed} $\lambda\in\mathbb{C}\setminus[-1,1]$, satisfies: \begin{enumerate} \item $Z(z;\varkappa)$ is analytic in $\overline{\mathbb{C}}\setminus\left([a_1,a_{2g+2}]\cup\partial\mathcal{L}_J^{(\pm)}\cup\partial\mathcal{L}_E^{(\pm)}\right)$, \item $Z(z;\varkappa)$ satisfies the jump conditions \begin{eqnarray}\label{Z jump} \begin{split}
Z(z_+;\varkappa)&=Z(z_-;\varkappa)(i\sigma_1), & & z\in \mathring{J}, \\
Z(z_+;\varkappa)&=Z(z_-;\varkappa)(-i\sigma_1), & & z\in \mathring{E}, \\
Z(z_+;\varkappa)&=Z(z_-;\varkappa)e^{i\varkappa\Omega_j\sigma_3}, & & z\in(a_{2j},a_{2j+1}), ~ j=1,\ldots,g, \\
Z(z_+;\varkappa)&=Z(z_-;\varkappa)\begin{bmatrix} 1 & 0 \\ ie^{\varkappa(2{\mathfrak g}(z)-1)} & 1 \end{bmatrix}, & & z\in \partial\mathcal{L}_E^{(\pm)}, \\
Z(z_+;\varkappa)&=Z(z_-;\varkappa)\begin{bmatrix} 1 & -ie^{-\varkappa(2{\mathfrak g}(z)+1)} \\ 0 & 1 \end{bmatrix}, & & z\in \partial\mathcal{L}_J^{(\pm)}, \end{split} \end{eqnarray} \item non-tangential boundary values of $Z(z;\varkappa)$ from the upper/lower half-planes belong to $L^2_{loc}(U)$, \item $Z(z;\varkappa)={\bf 1}+\BigO{z^{-1}}$ as $z\rightarrow\infty$. \end{enumerate} \end{problem} Notice that the jumps of $Z(z;\varkappa)$ on $\partial\mathcal{L}_{E,J}^{(\pm)}$ are exponentially small provided that $z$ is a fixed distance from any endpoint of $J$ and $E$. The large $\varkappa$ approximation of $Z(z;\varkappa)$ outside small discs around all endpoints (i.e. ignoring the jumps on $\partial\mathcal{L}_{E,J}^{(\pm)}$) is given by the outer parametrix (solution of the model RHP), which we denote $\Psi(z;\varkappa)$ and is constructed in the next Subsection. The large $\varkappa$ approximation of $Z(z;\varkappa)$ near the endpoints is described by local parametrices which are constructed in Subsection \ref{subsec: local parametrices}.
\subsection{The solution of the model problem $\Psi$}\label{ssect-sol_mod_gen}
We obtain the model RHP by ignoring the jumps of RHP \ref{RHPZ} for $z\in\partial\mathcal{L}_{E,J}^{(\pm)}$ and by prescribing certain endpoint behavior when $z=a_j$, $j=1,\ldots,2g+2$, and $z=b_k$, $k=1,\ldots,n$.
\begin{problem}\label{modelRHP} Find a $2\times2$ matrix-function $\Psi(z;\varkappa)$ which satisfies: \begin{enumerate}
\item $\Psi(z;\varkappa)$ is analytic on $\overline{\mathbb{C}}\setminus[a_1,a_{2g+2}]$,
\item $\Psi(z;\varkappa)$ has the jump conditions
\begin{eqnarray}\label{jumpPsi}
\begin{split}
& \Psi(z_+;\varkappa)=\Psi(z_-;\varkappa) (-i\sigma_1), & & z\in \mathring{E}, \\
& \Psi(z_+;\varkappa)=\Psi(z_-;\varkappa)(i\sigma_1), & & z\in \mathring{J}, \\
& \Psi(z_+;\varkappa)=\Psi(z_-;\varkappa)e^{i\varkappa\Omega_j\sigma_3}, & & z\in (a_{2j},a_{2j+1}), ~ j=1,\ldots,g,
\end{split}
\end{eqnarray}
\item $\Psi(z;\varkappa)={\bf 1}+\BigO{z^{-1}}$ as $z\to\infty$,
\item $\Psi(z;\varkappa)=\BigO{|z-a_j|^{-\frac{1}{4}}}$ as $z\to a_j$, $j=1,\ldots,2g+2$,
\item $\Psi(z;\varkappa)=\BigO{|z-b_k|^{-\frac{1}{2}}}$ as $z\to b_k$, $k=1,\ldots,n$,
\item $\Psi(z_\pm;\varkappa)$ has $L^2$ behavior for $z\in U\setminus\{b_1,\ldots,b_n\}$. \end{enumerate} \end{problem}
\begin{figure}
\caption{The sets $\tilde{J}$ and $\tilde{E}$ corresponding to the setup in Figure \ref{fig: J and E}.}
\label{fig: tilde J and tilde E}
\end{figure}
Note that the solution of RHP \ref{modelRHP} is not unique because $\Psi(z)\not\in L_{loc}^2$ near any double point $b_k$. To properly fix a solution to RHP \ref{modelRHP}, we introduce the auxiliary $\tilde{\mathfrak g}$-function and another model RHP. First, we fix $\epsilon>0$ such that \begin{align}
\epsilon<\min_{\substack{j=1,\ldots,g+1 \\ k=1,\ldots,n}}|a_{2j-1}-b_k|, \end{align} define the sets \begin{align}
K_J=\left\{j\in\{1,\ldots,g+1\}:[a_{2j-1},a_{2j-1}+\epsilon]\subset J\right\}, \\
K_E=\left\{j\in\{1,\ldots,g+1\}:[a_{2j-1},a_{2j-1}+\epsilon]\subset E\right\}, \end{align} and define the intervals \begin{align}
\tilde{J}=\bigcup_{j\in K_J}[a_{2j-1},a_{2j}], \qquad \tilde{E}=\bigcup_{j\in K_E}[a_{2j-1},a_{2j}]. \end{align} See Figure \ref{fig: tilde J and tilde E} for an example of the sets $\tilde{J}$, $\tilde{E}$ corresponding to the sets $J$, $E$ as described in Figure \ref{fig: J and E}. Notice that $\tilde{J}\cup\tilde{E}=J\cup E$ but $\tilde{J},\tilde{E}$ have no double points, i.e. $\tilde{J}\cap\tilde{E}=\emptyset$. If $n=0$, then $J,E$ have no double points so $\tilde{J}=J$ and $\tilde{E}=E$. If $g=0$, then $\tilde{J}=[a_1,a_2]$ and $\tilde{E}=\emptyset$ if $[a_1,b_1]\subset J$ or $\tilde{E}=[a_1,a_2]$ and $\tilde{J}=\emptyset$ if $[a_1,b_1]\subset E$. We now define the $\tilde{\mathfrak g}$-function to be the solution of the following scalar RHP: \begin{itemize}\label{RHPtgg} \item $\tilde{\mathfrak g}(z;\varkappa)$ satisfies the following jump conditions: \begin{align}
\tilde{\mathfrak g}(z_+;\varkappa)+\tilde{\mathfrak g}(z_-;\varkappa)&=\begin{cases}
0, & z\in (\mathring{E}\cap\mathring{\tilde{E}})\cup(\mathring{J}\cap\mathring{\tilde{J}}), \\
i\pi\cdot\text{sgn}\Im\varkappa, & z\in \mathring{E}\cap\mathring{\tilde{J}}, \\
-i\pi\cdot\text{sgn}\Im\varkappa, & z\in \mathring{J}\cap\mathring{\tilde{E}},
\end{cases} \label{tggJumps1} \\
\tilde{\mathfrak g}(z_+;\varkappa)-\tilde{\mathfrak g}(z_-;\varkappa)&=W_j, ~~~ z\in(a_{2j},a_{2j+1}), ~ j=1,\dots,g, \label{tggJumps2} \end{align} where the constants $W_j$ are to be determined. \item $\tilde{\mathfrak g}(z;\varkappa)$ is analytic on $\overline{{\mathbb C}}\setminus{[a_1,a_{2g+2}]}$ and is an $L^2_{loc}$ function on the jump contour. \end{itemize} If $\varkappa\in\mathbb{R}$, then $\mathrm{sgn}\Im\varkappa$ is to be understood in the sense of continuation from the upper or lower half plane. It can be verified that \begin{equation}\label{tgg} \tilde{\mathfrak g}(z;\varkappa)=\frac{R(z)}{2\pi i}\left(\int_{J\cap\tilde{E}}\frac{ -i\pi\cdot\text{sgn}\Im\varkappa}{(\zeta-z)R_+(\zeta)}d\zeta+\int_{\tilde{J}\cap E}\frac{i\pi\cdot\text{sgn}\Im\varkappa }{(\zeta-z)R_+(\zeta)}d\zeta+ \sum_{j=1}^{g}\int_{a_{2j}}^{a_{2j+1}}\frac{ {W_j} d\zeta}{(\zeta-z)R(\zeta)}\right), \end{equation} where $R(z)$ was defined in \eqref{radical} and the constants $W_j\in{\mathbb R}$ are (uniquely) chosen (in the standard way), so that $\tilde{\mathfrak g}(z;\varkappa)$ is analytic at $z=\infty$. According to \eqref{tgg}, \begin{equation}\label{tgg-loc} \tilde{\mathfrak g}(z;\varkappa) = \pm \frac{1}{2}\text{sgn}(\Im z)\text{sgn}(\Im\varkappa)\ln(z-b_k)+\BigO{1}, \qquad z\to b_k, \end{equation} where the sign is `$-$' if $J$ lies to the left of $b_k$ and $E$ to the right, and the sign is `$+$' in the opposite case.
We define \begin{equation}\label{Psi_tilde} \tilde \Psi(z;\varkappa)= e^{-\tilde{\mathfrak g}(\infty;\varkappa)\sigma_3} \Psi(z;\varkappa) e^{\tilde{\mathfrak g}(z;\varkappa)\sigma_3}, \end{equation} which is the solution to the following RHP:
\begin{problem} \label{modelRHPtilde} Find a $2\times2$ matrix-function $\tilde\Psi(z;\varkappa)$ which satisfies: \begin{enumerate}
\item $\tilde{\Psi}(z;\varkappa)$ is analytic on $\overline{\mathbb{C}}\setminus[a_1,a_{2g+2}]$,
\item $\tilde{\Psi}(z;\varkappa)$ has the jump conditions
\begin{eqnarray}
\begin{split}
&\tilde{\Psi}(z_+;\varkappa)=\tilde{\Psi}(z_-;\varkappa)(-i\sigma_1), & & z\in\mathring{\tilde{E}}, \\
&\tilde{\Psi}(z_+;\varkappa)=\tilde{\Psi}(z_-;\varkappa)(i\sigma_1), & & z\in\mathring{\tilde{J}}, \\
&\tilde{\Psi}(z_+;\varkappa)=\tilde{\Psi}(z_-;\varkappa)e^{(i\varkappa\Omega_j+W_j)\sigma_3}, & & z\in(a_{2j},a_{2j+1}), ~ j=1,\ldots,g,
\end{split}
\end{eqnarray}
\item $\tilde{\Psi}(z;\varkappa)={\bf 1}+\BigO{z^{-1}}$ as $z\to\infty$,
\item $\tilde{\Psi}(z;\varkappa)=\BigO{|z-a_j|^{-\frac{1}{4}}}$ as $z\to a_j$, $j=1,2,\ldots,2g+2$. \end{enumerate} \end{problem}
\begin{remark}\label{rem-poles-in-kappa} As is well known \cite{DIZ97}, a solution of RHP \ref{modelRHPtilde}, if one exists, is unique and can be expressed in terms of Riemann $\Theta$-functions. However, there can be (depending on the configuration of $\tilde{E}, \tilde{J}$) a countable, nowhere dense set $\{\varkappa_{j}+i\pi k\}_{j,k\in\mathbb{Z}}$, $\varkappa_j\in\mathbb{R}$, of values of $\varkappa$, for which RHP \ref{modelRHPtilde} has no solution ($\tilde{\Psi}(z;\varkappa)$ has poles at $\varkappa=\varkappa_{j}+i\pi k$). Let us discuss a few scenarios. Suppose \begin{itemize}
\item $\tilde{E}=[a_1,a_2]\cup[a_{2g+1},a_{2g+2}]$ and $\tilde{J}=[a_3,a_4]\cup\cdots\cup[a_{2g-1},a_{2g}]$, $g\geq2$; the asymptotics of $\varkappa_j$, together with the asymptotics of the corresponding eigenvectors of the associated operator $\mathscr{K}$ from \eqref{operator K}, was established in \cite[eq. (7.19), (7.30)]{BKT16}.
\item $\tilde{E}=[a_1,a_2]$ and $\tilde{J}=[a_3,a_4]$ (i.e. $g=1$); it can be shown that
\begin{align*}
\varkappa_j=\frac{i\pi}{\tau}\left(j-\frac{1}{2}\right), \qquad \mathrm{where} ~ \tau=i\frac{\int_{a_3}^{a_4}\frac{ds}{|R(s)|}}{\int_{a_2}^{a_3}\frac{ds}{|R(s)|}}\in i\mathbb{R_+}.
\end{align*}
This explicit formula is possible because the zeros of the Jacobi $\Theta$-function (i.e. the Riemann $\Theta$-function for $g=1$) are known explicitly \cite[eq. 8.189.4]{GRtable}.
\item $\tilde{E}=[a_1,a_2]$ and $\tilde{J}=\emptyset$ (this is the scenario of Section \ref{sec:jump}); there is no set of `bad' $\varkappa_j$ because RHP \ref{modelRHPtilde} is independent of $\varkappa$ and its solution is given by $\tilde \Psi(z)=\left(\frac{z-a_1}{z-a_2}\right)^{\frac{\sigma_1}{4}}$ (a direct verification is given right after this remark). \end{itemize} \end{remark}
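For the last scenario the claim can be checked directly: choosing the branch of $\left(\frac{z-a_1}{z-a_2}\right)^{\frac{1}{4}}$ which is positive for $z>a_2$ (so that $\tilde\Psi(z)\to{\bf 1}$ as $z\to\infty$), the boundary values on $(a_1,a_2)$ satisfy $\left(\frac{z-a_1}{z-a_2}\right)_+^{\frac{1}{4}}=e^{-\frac{i\pi}{2}}\left(\frac{z-a_1}{z-a_2}\right)_-^{\frac{1}{4}}$, and therefore \begin{equation*} \tilde \Psi(z_+)=\left(\frac{z-a_1}{z-a_2}\right)_-^{\frac{\sigma_1}{4}}e^{-\frac{i\pi}{2}\sigma_1}=\tilde \Psi(z_-)(-i\sigma_1), \qquad z\in(a_1,a_2), \end{equation*} which is the only jump required by RHP \ref{modelRHPtilde} when $\tilde{E}=[a_1,a_2]$, $\tilde{J}=\emptyset$ (here $g=0$); the behavior $\BigO{|z-a_j|^{-\frac{1}{4}}}$ at the endpoints and the normalization ${\bf 1}+\BigO{z^{-1}}$ at $z=\infty$ are immediate.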
\begin{lemma} \label{lem-mod-sol} There exists a solution $\Psi(z;\varkappa)$ to RHP \ref{modelRHP} such that the matrix function $\tilde\Psi(z;\varkappa)$ given by \eqref{Psi_tilde} solves RHP \ref{modelRHPtilde}. \end{lemma}
\begin{proof} Analyticity of $ \tilde\Psi(z;\varkappa)$ and its asymptotics at $z=\infty$ are clear. We only need to check the jump conditions, which are easily verified via \eqref{jumpPsi} and the jump conditions for $\tilde{\mathfrak g}(z)$. \end{proof} We fix the solution to the model RHP \ref{modelRHP} as \begin{equation}\label{Psi_tilde_1} \Psi(z;\varkappa)= e^{\tilde{\mathfrak g}(\infty;\varkappa)\sigma_3} \tilde \Psi(z;\varkappa) e^{-\tilde{\mathfrak g}(z;\varkappa)\sigma_3}. \end{equation}
\begin{remark}\label{rem-anal-param} Note that $\det \Psi\equiv 1$ and $\tilde \Psi(z;\varkappa)$ is analytic (and invertible) near $z=b_k$ on any shore of the cut $E\cup J$. Therefore, the RHP \ref{modelRHPtilde} has a unique solution $\tilde \Psi(z;\varkappa)$, that, in general, can be constructed in terms of Riemann $\Theta$-functions. \end{remark}
\subsection{Local parametrices}\label{subsec: local parametrices}
Construction of the local parametrices at the endpoints $a_j$, $j=1,\dots,2g+2$ is essentially the same as in \cite[Section 4.3]{BKT16}. Here we consider a parametrix at $b_k\in(a_{2j-1}, a_{2j})$. Let us define the approximate solution to RHP \ref{RHPZ} as \begin{align}
\widehat{Z}(z;\varkappa)=\begin{cases}
\tilde{\mathcal{P}}_j(z;\varkappa), & z\in\mathcal{D}_{a_j}, ~~~ j=1,\ldots,2g+2, \\
\mathcal{P}_k(z;\varkappa), & z\in\mathcal{D}_{b_k}, ~~~ k=1,\ldots,n, \\
\Psi(z;\varkappa), & \mbox{elsewhere},
\end{cases}\label{hatZ explicit} \end{align} where $\Psi(z;\varkappa)$ is the solution of RHP \ref{modelRHP} given by \eqref{Psi_tilde_1}, and $\tilde{\mathcal{P}}_j, \mathcal{P}_k$ are the parametrices. The parametrices $\tilde{\mathcal{P}}_j$ are given by the standard Bessel-type parametrix (see \cite[eq. 4.29]{BKT16}), so we omit the details. However, the construction of the parametrices $\mathcal{P}_k$, in terms of hypergeometric functions, is new. First, we define the local coordinate $\zeta_k(z)$ by relating the ${\mathfrak g}$-function and the 4-point ${\mathfrak g}_4$-function, see \eqref{g4}. Explicitly, \begin{align}\label{g_4=gg}
{\mathfrak g}_4(\zeta_k(z))=\begin{cases}
{\mathfrak g}(z), & \text{for}~(b_k-\delta,b_k)\subset E, \\
-{\mathfrak g}(z), & \text{for}~(b_k-\delta,b_k)\subset J,
\end{cases} \end{align} where $\delta>0$ is sufficiently small. The following Proposition guarantees that such a $\zeta_k(z)$ exists. \begin{proposition}\label{prop: zeta} Let functions $\widehat \phi(z), \widehat \psi(z)$ be analytic in a disc ${\mathcal{D}}$ centered at the origin and let $\phi(z)=\frac{\ln z}{i\pi}+\widehat \phi(z)$, $\psi(z)=\frac{\ln z}{i\pi}+\widehat \psi(z)$. Then, there exists a function $\zeta(z)=cz(1+y(z))$ analytic in $\tilde{\mathcal{D}}$, a disk centered at $z=0$ which is a subset of $\mathcal{D}$, where $c\neq0$ and $y(0)=0$, such that \begin{equation}\label{near1} \phi(\zeta(z))=\psi(z). \end{equation} \end{proposition}
\begin{proof} Substituting $\zeta(z)$ in \eqref{near1} and taking $c=e^{i\pi(\widehat \psi(0)-\widehat\phi(0))}$, we obtain the equation \begin{equation} F(z,y)=\frac{1}{i\pi} \ln(1+y)+\widehat \phi(cz(1+y))-\widehat \psi(z) +\widehat \psi(0)-\widehat\phi(0)=0, \end{equation} which is true for $(z,y)=(0,0)$. Since \begin{equation} \frac{\partial F}{\partial y}=\frac{1}{i\pi(1+y)}+cz\widehat \phi'(cz(1+y))\neq 0 \end{equation} at $(z,y)=(0,0)$, the conclusion follows from the Implicit Function Theorem. \end{proof} We denote by $\Gamma_4(z;\lambda)$ the solution of RHP \ref{RHPGamma} in the scenario with one double point $b_1=0$, $E=[-a,0]$ and $J=[0,a]$, where $a>0$. It was shown in \cite[eq. 17]{BBKT19} that $\Gamma_4(z;\lambda)$ can be explicitly expressed in terms of hypergeometric functions. We also let $\Psi_4(z;\varkappa)$ and $\tilde{\mathfrak g}_4(\zeta_k)$ (see \eqref{Psi_4}, \eqref{tg4}, respectively) denote the solution of RHP \ref{modelRHP} and the corresponding $\tilde{\mathfrak g}$-function in the same scenario. When $(b_k-\delta,b_k)\subset E$, the parametrices $\mathcal{P}_{k}$ are given by
\begin{figure}
\caption{Two possible configurations of the set $\mathcal{D}_{b_k}$ and the regions $I_{k\pm},\ldots,VI_{k\pm}$.}
\label{fig: para}
\end{figure}
\begin{equation}\label{ParamIeLeft} \mathcal {P}_k(z;\varkappa)= \begin{cases} \Psi(z;\varkappa)\Psi^{-1}_4(\zeta_k;\varkappa)\Gamma_4(\zeta_k;2e^{-\varkappa})e^{\varkappa {\mathfrak g}_4(\zeta_k)\sigma_3}\begin{bmatrix} 1 & \pm ie^{-\varkappa(2{\mathfrak g}_4(\zeta_k)+1)} \\ 0 & 1 \end{bmatrix}, &\text{for $z\in I_{k\pm}$}, \\ \Psi(z;\varkappa)\Psi^{-1}_4(\zeta_k;\varkappa)\Gamma_4(\zeta_k;2e^{-\varkappa})e^{\varkappa{\mathfrak g}_4(\zeta_k)\sigma_3}, &\text{for $z\in II_{k\pm}$}, \\ \Psi(z;\varkappa)\Psi^{-1}_4(\zeta_k;\varkappa)\Gamma_4(\zeta_k;2e^{-\varkappa})e^{\varkappa{\mathfrak g}_4(\zeta_k)\sigma_3}\begin{bmatrix} 1 & 0 \\ \mp ie^{\varkappa(2{\mathfrak g}_4(\zeta_k)-1)} & 1 \end{bmatrix}, &\text{for $z\in III_{k\pm}$}, \end{cases} \end{equation} where the regions $I_{k\pm},II_{k\pm},III_{k\pm}$ are described in Figure \ref{fig: para}, right panel. When $(b_k-\delta,b_k)\subset J$, the parametrices $\mathcal{P}_{k}$ are given by \begin{equation}\label{ParamIeRight} \mathcal {P}_k(z;\varkappa)= \begin{cases} \Psi(z;\varkappa)\sigma_2\Psi^{-1}_4(\zeta_k;\varkappa)\Gamma_4(\zeta_k;2e^{-\varkappa})e^{\varkappa{\mathfrak g}_4(\zeta_k)\sigma_3}\begin{bmatrix} 1 & 0 \\ \mp ie^{\varkappa(2{\mathfrak g}_4(\zeta_k)-1)} & 1 \end{bmatrix}\sigma_2, &\text{for $z\in VI_{k\pm}$}, \\ \Psi(z;\varkappa)\sigma_2\Psi^{-1}_4(\zeta_k;\varkappa)\Gamma_4(\zeta_k;2e^{-\varkappa})e^{\varkappa{\mathfrak g}_4(\zeta_k)\sigma_3}\sigma_2, &\text{for $z\in V_{k\pm}$}, \\ \Psi(z;\varkappa)\sigma_2\Psi^{-1}_4(\zeta_k;\varkappa)\Gamma_4(\zeta_k;2e^{-\varkappa})e^{\varkappa{\mathfrak g}_4(\zeta_k)\sigma_3}\begin{bmatrix} 1 & \pm ie^{-\varkappa(2{\mathfrak g}_4(\zeta_k)+1)} \\ 0 & 1 \end{bmatrix}\sigma_2, &\text{for $z\in IV_{k\pm}$}, \end{cases} \end{equation} where the regions $IV_{k\pm},V_{k\pm},VI_{k\pm}$ are described in Figure \ref{fig: para}, left panel.
\begin{remark}\label{rem: S_e} For a given $\epsilon>0$ we define $\mathcal{S}_\epsilon\subset {\mathbb C}$ as \begin{align}
\mathcal{S}_\epsilon:=\left\{\varkappa\in\mathbb{C}:|\Im\varkappa|\leq\pi, \Re\varkappa\geq\ln(2), \text{ and } ||\Psi(z;\varkappa)^{\pm1}||<\frac{N(z)}{\epsilon}\right\}, \end{align}
where $\Psi(z;\varkappa)$ is the solution of RHP \ref{modelRHP}, given by \eqref{Psi_tilde_1}, $\displaystyle ||\Psi||:=\max_{i,j\in\{1,2\}}|\Psi_{i,j}|$ and $N(z)$ is independent of $\varkappa$. The image of $\mathcal{S}_\epsilon$ under the map $\lambda=2e^{-\varkappa}$ is a subset of the unit ball $|\lambda|\leq1$, see Figure \ref{fig: kappa}. In the case where $\tilde{E}=[a_1,a_2]\cup[a_{2g+1},a_{2g+2}], \tilde{J}=[a_3,a_4]\cup\cdots\cup[a_{2g-1},a_{2g}]$, $g\geq2$, studied in \cite{BKT16}, see also Remark \ref{rem-poles-in-kappa}, the set $\mathcal{S}_\epsilon$ with a sufficiently small $\epsilon$ is the horizontal strip $|\Im\varkappa|\leq\pi$ with small closed neighborhoods of each pole $\varkappa_j, \varkappa_j\pm i\pi$ of the solution $\tilde\Psi(z;\varkappa)$ of RHP \ref{modelRHPtilde} removed. A precise description of the set $\mathcal{S}_\epsilon$ for a general configuration of the intervals $\tilde{E}, \tilde{J}$ would require further study. This study is not carried out in this paper since in Sections \ref{sec:jump}-\ref{sec: exact to approx} below we are interested only in the one-interval case, for which $\mathcal{S}_\epsilon=\{\varkappa\in\mathbb{C}:|\Im\varkappa|\leq\pi, \Re\varkappa\geq\ln(2)\}$, see Remark \ref{rem-poles-in-kappa}. For these reasons, we assume throughout the remainder of this Section that $\mathcal{S}_\epsilon$ is non-empty. \end{remark}
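The following elementary correspondences between $\lambda$ and $\varkappa$, which follow directly from \eqref{def: kappa}, are useful here and below: since $\lambda=2e^{-\varkappa}$, one has $|\lambda|=2e^{-\Re\varkappa}$ and $\Im\lambda=-2e^{-\Re\varkappa}\sin(\Im\varkappa)$, so that $|\lambda|\leq1$ is equivalent to $\Re\varkappa\geq\ln(2)$; the ray $\{\Im\varkappa=0,~\Re\varkappa\geq\ln(2)\}$ corresponds to $\lambda\in(0,1]$, the rays $\{\Im\varkappa=\pm\pi,~\Re\varkappa\geq\ln(2)\}$ correspond to $\lambda\in[-1,0)$ approached from the lower/upper half-plane, respectively, $\lambda\to0$ corresponds to $\Re\varkappa\to+\infty$, and $\Im\varkappa\geq0$ corresponds to $\Im\lambda\leq0$.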
We conclude this subsection by demonstrating the key properties of the parametrices.
\begin{lemma}\label{lem-param} Let $\varkappa\in\mathcal{S}_\epsilon$. The parametrix $\mathcal{P}_{k}(z;\varkappa)$ satisfies the following properties: \begin{enumerate}
\item $Z(z;\varkappa)\mathcal {P}_k^{-1}(z;\varkappa)$ is single-valued for $z\in\mathcal{D}_{b_k}$,
\item $\mathcal {P}_k(z;\varkappa)\in L^2_{loc}$ on the jump contours $(\partial\mathcal{L}_J^{(\pm)}\cup\partial\mathcal{L}_E^{(\pm)}\cup E\cup J)\cap\mathcal{D}_{b_k}$ {provided that $\Re\varkappa\notin[\ln(2),\infty)$ (if and only if $\lambda\notin[-1,1]$)},
\item for $z\in\partial\mathcal{D}_{b_k}$, we have the uniform approximation $\Psi(z;\varkappa)\mathcal {P}_k^{-1}(z;\varkappa)={\bf 1}+\BigO{\varkappa^{-1}}$ as $\varkappa\rightarrow\infty$. \end{enumerate} \end{lemma}
\begin{proof} We prove this statement assuming that $(b_k-\delta,b_k)\subset E$, i.e. using only \eqref{ParamIeLeft}. The proof for \eqref{ParamIeRight} will be similar with the use of Remark \ref{rem-sym}. From the jump properties \eqref{gamma jump}, \eqref{gJJump}, \eqref{gEJump}, \eqref{jumpPsi}, it is easily verified that $\mathcal{P}_{k}(z;\varkappa)$ has the exact same jumps as $Z(z;\varkappa)$ for $z\in\mathcal{D}_{b_k}$. \\
To prove the second requirement, we notice that since $\Gamma_4(z;\lambda)$ is an $L^2_{loc}$ matrix-valued function provided that $\lambda\notin[-1,1]$, $\mathcal{P}_{k}(z;\varkappa)$ will also be $L^2_{loc}$ provided that $\Psi(z;\varkappa)\Psi^{-1}_4(\zeta_k;\varkappa)$ is bounded on ${\mathcal{D}}_{b_k}$. Note that, according to \eqref{tgg-loc}, $\tilde{\mathfrak g}(z;\varkappa)-\tilde{\mathfrak g}_4(\zeta_k;\varkappa)$ does not have a logarithmic singularity at $z=b_k$, so that $e^{\tilde{\mathfrak g}(z;\varkappa)-\tilde{\mathfrak g}_4(\zeta_k;\varkappa)}$ is bounded in a neighborhood of $b_k$ (recall that the $\tilde{\mathfrak g}_4$ function is the $\tilde{\mathfrak g}$ function with one double point $b_1=0$ and $[-a,0]=E$, $[0,a]=J$, see \eqref{g4}). Taking into account the fact that $\tilde\Psi^{\pm 1}$ from \eqref{Psi_tilde_1} is bounded in ${\mathcal{D}}_{b_k}$, we obtain \begin{equation}
\Psi(z;\varkappa)\Psi^{-1}_4(\zeta_k;\varkappa)= e^{\tilde{\mathfrak g}(\infty;\varkappa)\sigma_3} \tilde \Psi(z;\varkappa) e^{-(\tilde{\mathfrak g}(z;\varkappa)-\tilde{\mathfrak g}_4(\zeta_k;\varkappa))\sigma_3}\left(\frac{z+a}{z-a}\right)^{-\frac{\sigma_1}{4}}e^{-\tilde{\mathfrak g}_4(\infty;\varkappa)\sigma_3}, \end{equation} which is bounded in ${\mathcal{D}}_{b_k}$. \\
We prove the third requirement in regions $II_{k\pm}$ only, as the proof is similar for the other regions. According to Theorem \ref{GammaAsmptotics}, \begin{equation}
\Psi^{-1}_4(\zeta_k;\varkappa){\Gamma}_4(\zeta_k;2e^{-\varkappa})e^{\varkappa{\mathfrak g}_4(\zeta_k)\sigma_3}={\bf 1} + \BigO{\varkappa^{-1}} \quad {\rm as}~~\varkappa\rightarrow\infty, \end{equation} and thus \begin{equation}\label{P Psi inv asymp}
\mathcal {P}_k(z;\varkappa)\Psi^{-1}(z;\varkappa)=\Psi(z;\varkappa)({\bf 1} + \BigO{\varkappa^{-1}})\Psi^{-1}(z;\varkappa)={\bf 1} + \BigO{\varkappa^{-1}}, \end{equation} since $\Psi^{\pm 1}(z;\varkappa)$ are bounded on $\partial{\mathcal{D}}_{b_k}$ when $\varkappa\in\mathcal{S}_\epsilon$ (see Remark \ref{rem: S_e}). The proof is now complete. \end{proof} We note that the parametrices $\tilde{\mathcal{P}}_j(z;\varkappa)$, $j=1,\ldots,2g+2$, also satisfy properties (1)-(3) of Lemma \ref{lem-param}, see \cite[Prop. 4.14]{BKT16} for more details.
\subsection{Small-norm problem}
\begin{figure}
\caption{The contour $\Sigma_T$ with $g=1$ and $n=2$ double points.}
\label{fig: T jumps}
\end{figure}
The error matrix $T:\mathbb{C}\setminus\Sigma_T\to\mathbb{C}^{2\times 2}$ is defined as \begin{align}\label{err def}
T(z;\varkappa)=Z(z;\varkappa)\widehat Z^{-1}(z;\varkappa), \end{align} where $\Sigma_T:=\partial\mathcal{D}\cup\left((\partial\mathcal{L}_J^{(\pm)}\cup\partial\mathcal{L}_E^{(\pm)})\setminus\mathcal{D}\right)$ and $\mathcal{D}:=\left(\cup_{j=1}^{2g+2}\mathcal{D}_{a_j}\right)\cup\left(\cup_{k=1}^n\mathcal{D}_{b_k}\right)$, see Figure \ref{fig: T jumps} for an example. {We emphasize that $T(z;\varkappa)$ for $\lambda\in[-1,1]$ (see Figure \ref{fig: kappa} for corresponding $\varkappa$ values) is to be understood in terms of continuation from the upper or lower half plane.} Using the definition of $\hat{Z}$ \eqref{hatZ explicit} and the jump properties of $Z$ \eqref{Z jump}, we see that \begin{align}\label{T jumps}
T(z_+;\varkappa)=T(z_-;\varkappa)\begin{cases}
\Psi(z;\varkappa)\tilde{\mathcal{P}}^{-1}_j(z), & z\in\partial\mathcal{D}_{a_j}, ~ j=1,\ldots,2g+2, \\
\Psi(z;\varkappa)\mathcal{P}^{-1}_k(z), & z\in\partial\mathcal{D}_{b_k}, ~ k=1,\ldots,n, \\
\Psi(z;\varkappa)\begin{bmatrix} 1 & -ie^{-\varkappa(2{\mathfrak g}(z)+1)} \\ 0 & 1 \end{bmatrix}\Psi^{-1}(z;\varkappa), & z\in\partial\mathcal{L}^{(\pm)}_E\setminus\mathcal{D}, \\
\Psi(z;\varkappa)\begin{bmatrix} 1 & 0 \\ ie^{\varkappa(2{\mathfrak g}(z)-1)} & 1 \end{bmatrix}\Psi^{-1}(z;\varkappa), & z\in\partial\mathcal{L}^{(\pm)}_J\setminus\mathcal{D}, \\
{\bf 1}, & z\in \mathring{J}\cup \mathring{E}.
\end{cases} \end{align} While proving Lemma \ref{lem-param}, we obtained the approximation \begin{align*}
\Psi(z;\varkappa)\tilde{\mathcal{P}}^{-1}_j(z)&={\bf 1}+\BigO{\varkappa^{-1}}, \qquad \mbox{uniformly for } z\in\partial\mathcal{D}_{a_j}, \\
\Psi(z;\varkappa)\mathcal{P}^{-1}_k(z)&={\bf 1}+\BigO{\varkappa^{-1}}, \qquad \mbox{uniformly for } z\in\partial\mathcal{D}_{b_k}, \end{align*} as $\varkappa\to\infty$ with $\varkappa\in\mathcal{S}_\epsilon$ for $j=1,\ldots,2g+2$, $k=1,\ldots,n$, see \eqref{P Psi inv asymp}. Also note that $\Psi(z;\varkappa)$ is uniformly bounded for $z\in\partial\mathcal{L}^{(\pm)}_{J,E}\setminus\mathcal{D}$, $\varkappa\in\mathcal{S}_\epsilon$. Thus, as $\varkappa\to\infty$ with $\varkappa\in\mathcal{S}_\epsilon$, we can write \eqref{T jumps} as \begin{align}\label{T jumps asymp}
T(z_+;\varkappa)=T(z_-;\varkappa)\begin{cases}
{\bf 1}, & z\in J\cup E, \\
{\bf 1}+\BigO{\varkappa^{-1}}, & z\in\partial\mathcal{D}, \\
{\bf 1}+\BigO{e^{-c\varkappa}}, & z\in\left(\partial\mathcal{L}_J^{(\pm)}\cup\partial\mathcal{L}_E^{(\pm)}\right)\setminus\mathcal{D},
\end{cases} \end{align} where $c>0$ is sufficiently small. In other words, $T(z;\varkappa)$ solves a small-norm problem, and therefore it is well known (see, for example, \cite[p. 1529--1532]{DKMVZ99}) that \begin{align}\label{E uni err}
T(z;\varkappa)={\bf 1}+\BigO{\varkappa^{-1}} \qquad \mbox{as} ~ \varkappa\to\infty \end{align} uniformly for $z$ in compact subsets of $\mathbb{C}\setminus\Sigma_T$, provided $\varkappa\in\mathcal{S}_\epsilon$. We conclude this section with a statement for the small $\lambda$ asymptotics of $\Gamma(z;\lambda)$.
\begin{theorem}\label{thm: gamma asymptotics} Let $\varkappa\to\infty$ with $\varkappa\in\mathcal{S}_\epsilon$. \begin{enumerate}
\item If $z$ is in a compact subset of $\mathbb{C}\setminus(E\cup J)$, then we have the uniform approximation \begin{align}
\Gamma(z;2e^{-\varkappa})=e^{\varkappa {\mathfrak g}(\infty)\sigma_3}\left({\bf 1}+\BigO{\varkappa^{-1}}\right)\Psi(z;\varkappa)e^{-\varkappa {\mathfrak g}(z)\sigma_3}. \end{align} \item If $z_\pm$ is in a compact subset of $\mathring{E}\cup\mathring{J}$, then we have the uniform approximation \begin{multline}
\Gamma(z_\pm;2e^{-\varkappa})=e^{\varkappa {\mathfrak g}(\infty)\sigma_3}\left({\bf 1}+\BigO{\varkappa^{-1}}\right)\Psi(z_\pm;\varkappa) \\
\times\begin{bmatrix} 1 & \mp ie^{-\varkappa(2{\mathfrak g}(z_\pm)+1)}\chi_{J}(z) \\ \pm ie^{\varkappa(2{\mathfrak g}(z_\pm)-1)}\chi_{E}(z) & 1 \end{bmatrix}e^{-\varkappa{\mathfrak g}(z_\pm)\sigma_3}. \end{multline}
\item If $z_\pm\in(b_k-\delta,b_k)\cup(b_k,b_k+\delta)$ and $\delta>0$ is sufficiently small, then we have the uniform approximation \begin{align*}
\Gamma(z_\pm;2e^{-\varkappa})=e^{\varkappa {\mathfrak g}(\infty)\sigma_3}\left({\bf 1}+\BigO{\varkappa^{-1}}\right)\Psi(z_\pm;\varkappa)\begin{cases}
\sigma_2\Psi_4^{-1}(\zeta_{k\pm};\varkappa)\Gamma_4(\zeta_{k\pm};2e^{-\varkappa})\sigma_2, &(b_k-\delta,b_k)\subset J, \\
\Psi_4^{-1}(\zeta_{k\pm};\varkappa)\Gamma_4(\zeta_{k\pm};2e^{-\varkappa}), &(b_k-\delta,b_k)\subset E.
\end{cases} \end{align*}
\end{enumerate} \end{theorem}
\begin{proof} We begin by reversing the transformations of this section \eqref{y-def}, \eqref{mat-Z}, \eqref{err def} to obtain \begin{align}
\Gamma(z;\lambda)=e^{\varkappa{\mathfrak g}(\infty)\sigma_3}T(z;\varkappa)\widehat{Z}(z;\varkappa)\begin{cases}
e^{-\varkappa{\mathfrak g}(z)\sigma_3}, & z ~ \mbox{outside lenses}, \\
\begin{bmatrix} 1 & \mp {i}{ e^{-\varkappa (2{\mathfrak g}(z) +1)}} \\ 0 & 1 \end{bmatrix}e^{-\varkappa{\mathfrak g}(z)\sigma_3}, & z\in\mathcal{L}_J^{(\pm)}, \\
\begin{bmatrix} 1 & 0 \\ \pm {i}{ e^{ \varkappa (2{\mathfrak g}(z) -1)}} & 1 \end{bmatrix}e^{-\varkappa{\mathfrak g}(z)\sigma_3}, & z\in\mathcal{L}_E^{(\pm)}.
\end{cases} \end{align} Suppose $z$ is in a compact subset of $\mathbb{C}\setminus(E\cup J)$. We then construct the disks $\mathcal{D}_{a_j}$, $\mathcal{D}_{b_k}$ and lenses $\mathcal{L}_J^{(\pm)}$, $\mathcal{L}_E^{(\pm)}$ so that $z$ is outside of the disks and lenses. Using the definition of $\widehat{Z}(z;\varkappa)$ \eqref{hatZ explicit} and applying \eqref{E uni err}, we obtain the first result. Now consider $z$ in a compact subset of $\mathring{E}\cup\mathring{J}$. The disks $\mathcal{D}_{a_j}$, $\mathcal{D}_{b_k}$ are chosen so that $z$ lies outside the disks. Again we turn to \eqref{hatZ explicit}, \eqref{E uni err} to obtain the second result. Lastly, let $\delta>0$ be such that $(b_k,b_k+\delta)\subset\mathcal{D}_{b_k}$ for $k=1,\ldots,n$. After applying \eqref{hatZ explicit}, \eqref{ParamIeLeft}, \eqref{E uni err}, we obtain the final result. \end{proof}
We now consider the special scenario where $g=0$ and there are $n\geq1$ double points, i.e. $E\cup J=[a_1,a_2]$. In this setting, the matrix $\Psi(z;\varkappa)$ can be expressed in terms of elementary functions. Explicitly, \begin{align}\label{Psi simple}
\Psi(z;\varkappa)= e^{\tilde{\mathfrak g}(\infty;\varkappa)\sigma_3}\left(\frac{z-a_1}{z-a_2}\right)^{\frac{\sigma_1}4} e^{-\tilde{\mathfrak g}(z;\varkappa)\sigma_3}. \end{align} The following result will be useful later.
\begin{lemma}\label{lemma: gamma no pole}
Let $J\cup E=[a_1,a_2]$ with $n\geq1$ double points, $z\in\mathring{J}\cup\mathring{E}$ and $|\tilde{\varkappa}|$ be sufficiently large so that the results of Theorem \ref{thm: gamma asymptotics} apply. Then, for any $\lambda\in(-1,1)\setminus\{0\}$ such that $|\varkappa|>|\tilde\varkappa|$, $\Gamma(z;\lambda)$ is pole-free (in $\lambda$). \end{lemma}
\begin{proof}
Assume there exists $\lambda_0=2e^{-\varkappa_0}$ ($\lambda_0>0$) with $\varkappa_0>|\tilde{\varkappa}|$ such that $\Gamma(z;\lambda_0)$ has a pole. Then, according to \cite[eq. 5.1]{BKT20}, $\Gamma(z;\lambda)$ has the Laurent expansion around $\lambda=\lambda_0$ given by \begin{align}\label{gamma pole}
\Gamma(z;\lambda)=\frac{\Gamma^{0}(z)}{\lambda-\lambda_0}+\Gamma^1(z)+\BigO{\lambda-\lambda_0}, \end{align}
see \cite[Prop. 5.2]{BKT20} for more details on $\Gamma^0(z)$. According to \eqref{gamma pole}, $\Gamma(z;\lambda)$ becomes unbounded as $\lambda\to\lambda_0$. Since $\varkappa_0>|\tilde{\varkappa}|$, Theorem \ref{thm: gamma asymptotics} applies. But after noticing that ${\mathfrak g}(z), \Psi(z;\varkappa), \tilde{\mathfrak g}(z;\varkappa)$ are bounded (see \eqref{gg}, \eqref{tgg}, \eqref{Psi simple}), Theorem \ref{thm: gamma asymptotics} implies that $\Gamma(z;\lambda)$ is bounded as $\lambda\to\lambda_0$, a contradiction. Thus, $\Gamma(z;\lambda)$ is pole-free for $\varkappa>|\tilde{\varkappa}|$ when $\lambda>0$. For $\lambda<0$, we use the symmetry $\Gamma(z;\lambda)=\sigma_3\Gamma(z;-\lambda)\sigma_3$, see Remark \ref{rem-sym}, to obtain the result. \end{proof}
\section{Asymptotics of the jump of the kernel of $\mathscr{R}$}\label{sec:jump}
We consider the case of just one interval $[a_1,a_2]=J\cup E$ with $n$ double points $b_1<b_2<\dots<b_n$ in $(a_1,a_2)$ and $[a_1,b_1]\subset E$. Explicitly, \begin{align*}
E&=\begin{cases}
[a_1,b_1]\cup[b_2,b_3]\cup\cdots\cup[b_n,a_2], &\text{$n$ even}, \\
[a_1,b_1]\cup[b_2,b_3]\cup\cdots\cup[b_{n-1},b_n], &\text{$n$ odd},
\end{cases} \end{align*} and $J=[a_1,a_2]\setminus(E\setminus\{b_1,\ldots,b_n\})$. The objective of this Section is to obtain the large $\varkappa$ asymptotics of the kernels $Q_j$, see Theorem \ref{thm:main} items (3) and (4). This goal is achieved in Proposition \ref{Eprime simple}. The first step in proving this Proposition is to obtain the leading order asymptotics of the jump of the kernel of the resolvent, which was computed in terms of the matrix $\Gamma(z;\lambda)$ in Proposition \ref{prop: DeltaAbsLambdaRJRE ids}. Recall, first, that the leading order asymptotics of $\Gamma(z;\lambda)$ was obtained in Theorem \ref{thm: gamma asymptotics} and, second, that since we are using Proposition \ref{prop: DeltaAbsLambdaRJRE ids}, we only need to consider $\Im\varkappa\geq0$ (equivalently, $\Im\lambda\leq0$). Thus we assume throughout this Section that $\Im\varkappa\geq0$, and $\Im\varkappa=0$ is to be understood as the continuation onto the real axis from the upper half-plane. An important piece of the leading order asymptotics of $\Gamma(z;\lambda)$ in Theorem \ref{thm: gamma asymptotics} is the solution $\Psi(z;\varkappa)$ of the model RHP \ref{modelRHP}. Due to the simplicity of our intervals $J, E$, we can express $\Psi(z;\varkappa)$ in terms of elementary functions. We begin by verifying that the solution of RHP \ref{modelRHPtilde} is $\tilde \Psi(z;\varkappa)=\tilde{\Psi}(z)=\left(\frac{z-a_1}{z-a_2}\right)^{\frac{\sigma_1}{4}}$. On the other hand, similarly to Section \ref{sect-modsol2}, we can write \begin{equation}\label{Psi_S}
\Psi(z;\varkappa)=\Psi(z)= e^{\tilde{\mathfrak g}(\infty)\sigma_3}\left(\frac{z-a_1}{z-a_2}\right)^{\frac{\sigma_1}4} e^{-\tilde{\mathfrak g}(z)\sigma_3}=S(z)\left(\frac{z-a_1}{(z-a_2)^{(-1)^n}}\right)^{\frac{\sigma_1}{4}}\tilde{B}_n(z)^{\sigma_1}, \end{equation} where \begin{align}\label{tilde Bn}
\tilde{B}_n:\begin{cases}
\mathbb{C}\setminus \left((-\infty,a_1)\cup E\right)\to\mathbb{C}, & n ~ \text{odd}, \\
\mathbb{C}\setminus J\to\mathbb{C}, & n ~ \text{even},
\end{cases} \hspace{0.75cm} \tilde{B}_n(z):=\prod_{j=1}^n\left(z-b_j\right)^\frac{(-1)^j}{2}, \end{align} the branch cuts for the roots in $\tilde{B}_n(z)$ and in $\left(\frac{z-a_1}{(z-a_2)^{(-1)^n}}\right)^{\frac{1}{4}}$ are $(-\infty,c_j)$, with positive orientation, for the corresponding $c_j\in\{a_1,a_2,b_1,\ldots,b_n\}$, and \begin{equation}\label{S(z)} S(z)={\bf 1}+\sum_{j=1}^n \frac{B_j}{z-b_j}= {\bf 1} +\sum_{j=1}^n (M_j+iN_j)\frac{{\bf 1} +(-1)^j\sigma_1}{z-b_j}. \end{equation}
Here
$M_j, N_j$ are real diagonal matrices and the latter equation follows from the requirement that $\Psi(z)=\BigO{(z-b_j)^{-\frac{1}{2}}}$ near any $b_j$. We can now see that $\Psi(z)$ can be written as the sum of a Schwarz symmetric function and an anti-Schwarz symmetric function as \begin{align}\label{Psi Sch + aSch}
\Psi(z)=\widehat{\Psi}(z)+i\breve{\Psi}(z), \end{align} where $\widehat{\Psi}(z)$, $\breve{\Psi}(z)$ are the Schwarz symmetric functions \begin{align}
\widehat{\Psi}(z)&:=\left({\bf 1}+\sum_{j=1}^n\frac{M_j}{z-b_j}({\bf 1}+(-1)^j\sigma_1)\right)\left(\frac{z-a_1}{(z-a_2)^{(-1)^n}}\right)^{\frac{\sigma_1}{4}}\tilde{B}_n(z)^{\sigma_1}, \label{wh Psi} \\
\breve{\Psi}(z)&:=\sum_{j=1}^n\frac{N_j}{z-b_j}({\bf 1}+(-1)^j\sigma_1)\left(\frac{z-a_1}{(z-a_2)^{(-1)^n}}\right)^{\frac{\sigma_1}{4}}\tilde{B}_n(z)^{\sigma_1}. \label{breve Psi} \end{align} We are now prepared to extract the leading order asymptotics of $\Delta_{\lambda}R_{EE}$, $\Delta_{\lambda}R_{EJ}$, $\Delta_{\lambda}R_{JE}$, and $\Delta_{\lambda}R_{JJ}$.
\begin{proposition}\label{prop: Delta R asymp}
Let $\lambda\in [-1,1]\setminus\Sigma$, let $M_j$ denote the $j$th column of a matrix $M$, and, with a slight abuse of notation, set $\varkappa=\varkappa(|\lambda|)$. As $\lambda\to0$, \begin{enumerate}
\item if $x,y$ are in a compact subset of $\mathring{E}$, we have the uniform approximation
\begin{align}\label{Delta REE asymp}
\Delta_{\lambda}R_{EE}(x,y;\lambda)=\tilde{R}_{EE}(x,y;\lambda)+\BigO{\varkappa^{-1}},
\end{align}
where
\begin{multline}\label{tREE}
\tilde{R}_{EE}(x,y;\lambda)= \\
\frac{\begin{vmatrix} \Re[e^{-i\varkappa\Im{\mathfrak g}(y)}\widehat{\Psi}_{1}(y)] & \Re[e^{-i\varkappa\Im{\mathfrak g}(x)}\breve{\Psi}_{1}(x)] \end{vmatrix}_+-\begin{vmatrix} \Re[e^{-i\varkappa\Im{\mathfrak g}(x)}\widehat{\Psi}_{1}(x)] & \Re[e^{-i\varkappa\Im{\mathfrak g}(y)}\breve{\Psi}_{1}(y)] \end{vmatrix}_+}{\mathrm{sgn}(\lambda)\frac{i\pi}{4}(x-y)},
\end{multline}
and we understand that $x,y$ are taken on the upper shore of $\mathring{E}$.
\item if $x,y$ are in a compact subset of $\mathring{E}, \mathring{J}$, respectively, we have the uniform approximation
\begin{align}\label{Delta REJ asymp}
\Delta_{\lambda}R_{EJ}(x,y;\lambda)=\tilde{R}_{EJ}(x,y;\lambda)+\BigO{\varkappa^{-1}},
\end{align}
where
\begin{multline}\label{tREJ}
\tilde{R}_{EJ}(x,y;\lambda)= \\
\frac{\begin{vmatrix} \Re[e^{i\varkappa\Im{\mathfrak g}(y)}\widehat{\Psi}_{2}(y)] & \Re[e^{-i\varkappa\Im{\mathfrak g}(x)}\breve{\Psi}_{1}(x)] \end{vmatrix}_+-\begin{vmatrix} \Re[e^{-i\varkappa\Im{\mathfrak g}(x)}\widehat{\Psi}_{1}(x)] & \Re[e^{i\varkappa\Im{\mathfrak g}(y)}\breve{\Psi}_{2}(y)] \end{vmatrix}_+}{\frac{i\pi}{4}(x-y)},
\end{multline}
and we understand that $x,y$ are taken on the upper shore of $\mathring{E}, \mathring{J}$, respectively.
\item if $x,y$ are in a compact subset of $\mathring{J}, \mathring{E}$, respectively, we have the uniform approximation
\begin{align}\label{Delta RJE asymp}
\Delta_{\lambda}R_{JE}(x,y;\lambda)=\tilde{R}_{JE}(x,y;\lambda)+\BigO{\varkappa^{-1}},
\end{align}
where
\begin{multline}\label{tRJE}
\tilde{R}_{JE}(x,y;\lambda)= \\
\frac{\begin{vmatrix} \Re[e^{-i\varkappa\Im{\mathfrak g}(y)}\widehat{\Psi}_{1}(y)] & \Re[e^{i\varkappa\Im{\mathfrak g}(x)}\breve{\Psi}_{2}(x)] \end{vmatrix}_+-\begin{vmatrix} \Re[e^{i\varkappa\Im{\mathfrak g}(x)}\widehat{\Psi}_{2}(x)] & \Re[e^{-i\varkappa\Im{\mathfrak g}(y)}\breve{\Psi}_{1}(y)] \end{vmatrix}_+}{\frac{i\pi}{4}(x-y)},
\end{multline}
and we understand that $x,y$ are taken on the upper shore of $\mathring{J}, \mathring{E}$, respectively.
\item if $x,y$ are in a compact subset of $\mathring{J}$, we have the uniform approximation
\begin{align}\label{Delta RJJ asymp}
\Delta_{\lambda}R_{JJ}(x,y;\lambda)=\tilde{R}_{JJ}(x,y;\lambda)+\BigO{\varkappa^{-1}},
\end{align}
where
\begin{multline}\label{tRJJ}
\tilde{R}_{JJ}(x,y;\lambda)= \\
\frac{\begin{vmatrix} \Re[e^{i\varkappa\Im{\mathfrak g}(y)}\widehat{\Psi}_{2}(y)] & \Re[e^{i\varkappa\Im{\mathfrak g}(x)}\breve{\Psi}_{2}(x)] \end{vmatrix}_+-\begin{vmatrix} \Re[e^{i\varkappa\Im{\mathfrak g}(x)}\widehat{\Psi}_{2}(x)] & \Re[e^{i\varkappa\Im{\mathfrak g}(y)}\breve{\Psi}_{2}(y)] \end{vmatrix}_+}{\mathrm{sgn}(\lambda)\frac{i\pi}{4}(x-y)},
\end{multline}
and we understand that $x,y$ are taken on the upper shore of $\mathring{J}$. \end{enumerate} \end{proposition}
\begin{proof} We will prove \eqref{Delta REJ asymp}, \eqref{tREJ}, since the proofs of the other identities are similar. Using \eqref{jump-num} and Proposition \ref{prop: DeltaAbsLambdaRJRE ids}, we have \begin{align}\label{in proof: REJ}
\Delta_{\lambda}R_{EJ}(x,y;\lambda)=\frac{2i\left(\det\left[\Re\Gamma_2(y;|\lambda|), \Im\Gamma_1(x;|\lambda|_-)\right]-\det\left[\Re\Gamma_1(x;|\lambda|), \Im\Gamma_2(y;|\lambda|_-)\right]\right)}{-\pi|\lambda|(x-y)}. \end{align}
From Theorem \ref{thm: gamma asymptotics} and the jump properties of ${\mathfrak g}(z)$, $\Psi(z)$, we have (taking $\varkappa=\varkappa(|\lambda|)$) \begin{align}\label{G2G1 asymp} \begin{split}
\Gamma_2(z;|\lambda|_-)&=e^{\varkappa{\mathfrak g}(\infty)\sigma_3}\left({\bf 1}+\BigO{\varkappa^{-1}}\right)\begin{bmatrix} e^{\varkappa{\mathfrak g}(z_+)}\Psi_{2+}(z)+e^{\varkappa{\mathfrak g}(z_-)}\Psi_{2-}(z) \end{bmatrix}, \qquad z\in\mathring{J}, \\
\Gamma_1(z;|\lambda|_-)&=e^{\varkappa{\mathfrak g}(\infty)\sigma_3}\left({\bf 1}+\BigO{\varkappa^{-1}}\right)\begin{bmatrix} e^{-\varkappa{\mathfrak g}(z_+)}\Psi_{1+}(z) + e^{-\varkappa{\mathfrak g}(z_-)}\Psi_{1-}(z) \end{bmatrix}, \qquad z\in\mathring{E}. \end{split} \end{align} Recall that if $f(z)$ is a Schwarz symmetric function with a jump on an interval $I\subset\mathbb{R}$, then $f_+(z)+f_-(z)=2\Re[f_+(z)]=2\Re[f_-(z)]$ for $z\in I$. Using \eqref{Psi Sch + aSch}, we have \begin{align}\label{ReIm parts}
e^{(-1)^j\varkappa{\mathfrak g}(z_+)}\Psi_{j+}(z)+e^{(-1)^j\varkappa{\mathfrak g}(z_-)}\Psi_{j-}(z)=2\Re\left[e^{(-1)^j\varkappa{\mathfrak g}(z_+)}\widehat{\Psi}_{j+}(z)\right]+2i\Re\left[e^{(-1)^j\varkappa{\mathfrak g}(z_+)}\breve{\Psi}_{j+}(z)\right]. \end{align} The results now follow by using \eqref{G2G1 asymp}, \eqref{ReIm parts} in \eqref{in proof: REJ} and using the fact that $\Re[{\mathfrak g}(z_\pm)]=\frac{1}{2}$ for $z\in\mathring{E}$ and $\Re[{\mathfrak g}(z_\pm)]=-\frac{1}{2}$ for $z\in \mathring{J}$, see Proposition \ref{prop: g-function}. \end{proof}
To simplify the leading order kernels obtained in Proposition \ref{prop: Delta R asymp}, we need to compute the matrices $M_j, N_j$ appearing in \eqref{S(z)}. This is achieved by understanding the behavior of $\tilde{\mathfrak g}(z)$ as $z\to b_j$, $j=1,\ldots,n$ and as $z\to\infty$. Recall from \eqref{tgg} that $\tilde{\mathfrak g}(z)$ (for $\Im\varkappa\geq0$) has the integral representation \begin{align*}
\tilde{\mathfrak g}(z)=\frac{R(z)}{2\pi i}\int_{J}\frac{ -i\pi}{(\zeta-z)R_+(\zeta)}d\zeta. \end{align*} We can evaluate this integral explicitly. Define the function $s:\left(\mathbb{C}\setminus(-\infty,a_2]\right)\times(a_1,a_2)\to\mathbb{C}$ as \begin{align}\label{szb}
s(z,b):=i\sqrt{(z-a_1)(a_2-b)}+\sqrt{(z-a_2)(b-a_1)}, \end{align} where $\arg(z-a_j)=0$ for $z-a_j>0$. For $z\in(-\infty,a_2)$, \begin{align*}
s(z_+,b)=\begin{cases}
\frac{(b-z)(a_2-a_1)}{s(z_-,b)}, & z\in(a_1,a_2), \\
-s(z_-,b), & z\in(-\infty,a_1).
\end{cases} \end{align*} We also verify that $\tilde{B}_n(z)$, defined in \eqref{tilde Bn}, has the jumps \begin{align*}
\tilde{B}_{n}(z_+)=-\tilde{B}_{n}(z_-), \qquad z\in\begin{cases}
(-\infty,a_1)\cup \mathring{E}, & n ~ \text{odd}, \\
\mathring{J}, & n ~ \text{even}.
\end{cases} \end{align*} From the jump conditions of $s(z,b)$ and $\tilde{B}_n(z)$, we have the identity \begin{align}\label{tgg explicit}
\tilde{{\mathfrak g}}(z)=\ln\left((a_2-a_1)^{\frac{(-1)^n-1}{4}}\tilde{B}_n(z)\prod_{k=1}^{n} s(z,b_k)^{(-1)^{k+1}}\right), \end{align} where the logarithm $\ln(\cdot)$ has the branch $(-\infty,0)$ with positive orientation, because the right hand side of \eqref{tgg explicit} solves the RHP for $\tilde{\mathfrak g}(z)$, see \eqref{tggJumps1} and the surrounding text. Alternatively, \eqref{tgg explicit} can be obtained by applying \cite[eq. 2.266]{GRtable} to \eqref{tgg}. It is convenient to introduce the angles $\nu_k$, $k=1,\ldots,n$, defined as the acute angle adjacent to the side $\sqrt{b_k-a_1}$ in the right triangle with hypotenuse $\sqrt{a_2-a_1}$ and sides $\sqrt{b_k-a_1}, \sqrt{a_2-b_k}$, see Figure \ref{fig: nu angle}. \begin{figure}
\caption{The angle $\nu_k$.}
\label{fig: nu angle}
\end{figure} With this notation, we can see that for $z\in(a_1,a_2)$, \begin{align*}
s(z_\pm,b_k)=i(a_2-a_1)\sin(\nu_k\pm\nu(z)), \end{align*} where $\nu(z)=\sin^{-1}\left(\frac{\sqrt{a_2-z}}{\sqrt{a_2-a_1}}\right)$ and $\nu(b_k)=\nu_k$. As $z\to b_j$ on the upper shore and interior of $J$, \begin{align}
&\tilde{B}_n(z)=i^{\frac{(-1)^n+1}{2}}\hat{k}_j|z-b_j|^{\frac{(-1)^j}{2}}+\BigO{(z-b_j)^{\frac{(-1)^j}{2}+1}}, \qquad \hat{k}_j:=\prod_{\substack{s=1 \\ s\neq j}}^n|b_j-b_s|^{\frac{(-1)^s}{2}}, \label{hat kj} \\
&\prod_{k=1}^{n} s(z,b_k)^{(-1)^{k+1}}=(i(a_2-a_1))^\frac{1-(-1)^n}{2}\hat{s}_j+o(1), \qquad \hat{s}_j:=\prod_{k=1}^{n} \sin(\nu_j+\nu_k)^{(-1)^{k+1}}, \label{hat sj} \end{align} and thus \begin{align}\label{etgg}
e^{\tilde{{\mathfrak g}}(z)}=i(a_2-a_1)^\frac{1-(-1)^n}{4}\hat{s}_j\hat{k}_j|z-b_j|^{\frac{(-1)^j}{2}}+\BigO{(z-b_j)^{\frac{(-1)^j}{2}+1}}. \end{align} To understand the behavior of $\tilde{\mathfrak g}(z)$ as $z\to\infty$, we first notice that as $z\to+\infty$, \begin{align*}
\frac{s(z,b_k)}{\sqrt{z-b_k}}=\sqrt{a_2-a_1}e^{i\nu_k}\left(1+\frac{i(a_2-a_1)\sin(2\nu_k)}{4z}+\BigO{z^{-2}}\right), \end{align*} for $k=1,\ldots,n$. We see that \begin{align}\label{tgg inf}
\tilde{\mathfrak g}(\infty)=i\sum_{k=1}^n(-1)^{k+1}\nu_k, \end{align} and as $z\to\infty$, \begin{align}
e^{\tilde{{\mathfrak g}}(z)}=e^{\tilde{{\mathfrak g}}(\infty)}\left(1+\frac{i(a_2-a_1)}{4z}\sum_{k=1}^n(-1)^{k+1}\sin(2\nu_k)+\BigO{z^{-2}}\right). \end{align} We now state a preparatory Lemma.
\begin{lemma}\label{lem-s1} For any $M,N\in {\mathbb C}\setminus\{0\}$ and $a\in{\mathbb C}$ we have \begin{equation} M^{\sigma_1}e^{a\sigma_3}N^{\sigma_1}=(MN)^{\sigma_1}\cosh a +\sigma_3\left(\frac NM\right)^{\sigma_1}\sinh a. \end{equation} \end{lemma}
\begin{proof} Using the identities \begin{align*}
\beta^{\sigma_1}=\frac{1}{2}\begin{bmatrix} \beta+\beta^{-1} & \beta-\beta^{-1} \\ \beta-\beta^{-1} & \beta+\beta^{-1} \end{bmatrix}=\cosh(\ln\beta){\bf 1}+\sinh(\ln\beta)\sigma_1, \qquad \beta\in\mathbb{C}\setminus\{0\} \end{align*} and \begin{align*}
2\cosh(a)\cosh(b)&=\cosh(a+b)+\cosh(a-b), \\
2\sinh(a)\sinh(b)&=\cosh(a+b)-\cosh(a-b), \\
2\cosh(a)\sinh(b)&=\sinh(a+b)-\sinh(a-b), \end{align*} the Lemma follows from a straightforward calculation.
\end{proof}
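The identity of Lemma \ref{lem-s1} is easy to verify numerically. The short Python sketch below does so for arbitrary (here illustrative) nonzero complex $M,N$ and complex $a$, using only the representation $\beta^{\sigma_1}=\cosh(\ln\beta){\bf 1}+\sinh(\ln\beta)\sigma_1$ from the proof.
\begin{verbatim}
import numpy as np

I2 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)   # sigma_1
s3 = np.array([[1, 0], [0, -1]], dtype=complex)  # sigma_3

def pow_sigma1(beta):
    # beta^{sigma_1} = cosh(ln beta) 1 + sinh(ln beta) sigma_1
    lb = np.log(beta)
    return np.cosh(lb) * I2 + np.sinh(lb) * s1

def exp_sigma3(a):
    # e^{a sigma_3} = cosh(a) 1 + sinh(a) sigma_3
    return np.cosh(a) * I2 + np.sinh(a) * s3

M, N, a = 1.3 + 0.7j, -0.4 + 2.1j, 0.9 - 0.5j    # illustrative values
lhs = pow_sigma1(M) @ exp_sigma3(a) @ pow_sigma1(N)
rhs = pow_sigma1(M * N) * np.cosh(a) + s3 @ pow_sigma1(N / M) * np.sinh(a)
print(np.max(np.abs(lhs - rhs)))                 # of the order of machine precision
\end{verbatim}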
\begin{lemma}\label{lem-resM} The matrices $M_j, N_j$ of \eqref{S(z)} have the form $M_{j}=m_j{\bf 1}$ and $N_{j}=n_j\sigma_3$ for $j=1,\ldots,n$, where \begin{equation}\label{m_n} m_j=k_j\begin{cases} -\sin(\nu_j-\alpha), & j ~ \text{odd}, \\ \cos(\nu_j+\alpha), & j ~ \text{even}, \end{cases} \hspace{1cm} n_j=k_j\begin{cases} -\cos(\nu_j-\alpha), & j ~ \text{odd}, \\ \sin(\nu_j+\alpha), & j ~ \text{even}, \end{cases} \end{equation} $\alpha=\Im\tilde{\mathfrak g}(\infty)$ and \begin{align}\label{k_j}
k_j=\begin{cases}
\frac{1}{2}(a_2-a_1)\cos(\nu_j)(\sin\nu_j)^{\frac{1-(-1)^n}{2}}\prod_{\substack{l=1 \\ l\neq j}}^n\left(\sin|\nu_l-\nu_j|\right)^{(-1)^{l}}, & j ~ \text{odd}, \\
\frac{1}{2}(a_2-a_1)(\sin\nu_j)^{\frac{1+(-1)^n}{2}}\prod_{\substack{l=1 \\ l\neq j}}^n\left(\sin|\nu_l-\nu_j|\right)^{(-1)^{l+1}}, & j ~ \text{even}.
\end{cases} \end{align} \end{lemma}
\begin{proof} According to \eqref{Psi_S}, \begin{align}\label{M0} S(z)= e^{\tilde{\mathfrak g}(\infty)\sigma_3}A^{\frac{\sigma_1}{4}}e^{-\tilde{\mathfrak g}(z)\sigma_3}B^{-\frac{\sigma_1}{4}}, \end{align} where (recall $\tilde{B}_n(z)$ was defined in \eqref{tilde Bn}) \begin{align*}
A=A(z)=\frac{z-a_1}{z-a_2}, ~~~ B=B_n(z)=\frac{(z-a_1)}{(z-a_2)^{(-1)^n}}\tilde{B}_n(z)^4. \end{align*} Then, using Lemma \ref{lem-s1}, we have \begin{equation}\label{M1} S(z)= e^{\tilde{\mathfrak g}(\infty)\sigma_3}\left[\left(\frac{A}{B}\right)^{\frac{\sigma_1}{4}}\cosh \tilde{\mathfrak g}(z)-\sigma_3\left(\frac{1}{AB}\right)^{\frac{\sigma_1}{4}}\sinh \tilde{\mathfrak g}(z)\right]. \end{equation} Using \eqref{etgg} we obtain \begin{equation}\label{chsh} \begin{split}
\cosh \tilde{\mathfrak g}(z_+)= \frac{i}{2}|z-b_j|^{-\frac{1}{2}}(-1)^{j+1}\left(\hat{k}_j\hat{s}_j(a_2-a_1)^{\frac{1-(-1)^n}{4}}\right)^{(-1)^{j+1}}+\BigO{1},\cr
\sinh \tilde{\mathfrak g}(z_+)=\frac{i}{2}|z-b_j|^{-\frac{1}{2}}\left(\hat{k}_j\hat{s}_j(a_2-a_1)^{\frac{1-(-1)^n}{4}}\right)^{(-1)^{j+1}}+\BigO{1},
\end{split} \end{equation} as $z\rightarrow b_{j}$ on the upper shore and interior of $J$, $j=1,\dots,n$. It is straightforward to verify that $\arg \left(\frac AB\right)^\frac 14 =-\frac \pi 2$ and $\arg \left(\frac 1{AB}\right)^\frac 14 =0$ on the upper shore and interior of $J$ so that \begin{equation}\label{A_B}
\left(\frac AB\right)^\frac 14 =-i\left|\frac AB\right|^\frac 14,~~~~
\left(\frac 1{AB}\right)^\frac 14=\left|\frac 1{AB}\right|^\frac 14. \end{equation} Moreover, according to \eqref{M0}, \eqref{A_B}, \begin{align}\label{A_B1} \begin{split}
&2\left(\frac AB\right)^{\frac{\sigma_1}{4}} =(-1)^{j+1}i\left(\hat{k}_j\right)^{(-1)^{j+1}}\tilde{A}_{j,1}({\bf 1}+(-1)^j\sigma_1)|z-b_{j}|^{-\frac{1}{2}}+\BigO{1},\cr
&2\left(\frac 1{AB}\right)^{\frac{\sigma_1}{4}}=\left(\hat{k}_j\right)^{(-1)^{j+1}}\tilde{A}_{j,2}({\bf 1}+(-1)^j\sigma_1)|z-b_{j}|^{-\frac{1}{2}}+\BigO{1}, \end{split}
\end{align} where the constants $\hat{k}_j>0$ are defined in \eqref{hat kj} and \begin{align}
\tilde{A}_{j,1}&=\begin{cases}
1, & n ~ \text{even}, \\
(a_2-b_j)^\frac{(-1)^{j+1}}{2}, & n ~ \text{odd},
\end{cases} \\
\tilde{A}_{j,2}&=\begin{cases}
\left(\frac{b_j-a_1}{a_2-b_j}\right)^\frac{(-1)^j}{2}, & n ~ \text{even}, \\
(b_j-a_1)^\frac{(-1)^{j+1}}{2}, & n ~ \text{odd}.
\end{cases} \end{align}
Substituting \eqref{chsh}-\eqref{A_B1} into \eqref{M1}, letting $\alpha=\Im[\tilde{\mathfrak g}(\infty)]$, and comparing the $\BigO{(z-b_j)^{-1}}$ term (note that the leading order term obtained by substituting \eqref{chsh}-\eqref{A_B1} into \eqref{M1} is of the form $\frac{c}{|z-b_j|}$, so the absolute value must be taken into account in the comparison) with that of \eqref{S(z)}, we obtain \begin{align*} M_j+iN_j&=\frac{(-1)^j}{4}(\hat{k}_j^2\hat{s}_j(a_2-a_1)^\frac{1-(-1)^n}{4})^{(-1)^{j+1}}e^{\tilde{\mathfrak g}(\infty)\sigma_3}\left({\bf 1}\tilde{A}_{j,1} + i\sigma_3\tilde{A}_{j,2}\right) \\ &=\frac{(-1)^j}{4}(\hat{k}_j^2\hat{s}_j(a_2-a_1)^\frac{1-(-1)^n}{4})^{(-1)^{j+1}}\left[\left(\tilde{A}_{j,1}\cos\alpha-\tilde{A}_{j,2}\sin\alpha\right){\bf 1}+\left(\tilde{A}_{j,2}\cos\alpha +\tilde{A}_{j,1}\sin\alpha\right)i\sigma_3\right]. \end{align*} Using the fact that $\sqrt{b_j-a_1}=\sqrt{a_2-a_1}\cos(\nu_j)$ and $\sqrt{a_2-b_j}=\sqrt{a_2-a_1}\sin(\nu_j)$, we have \begin{align}
M_j+iN_j=k_j\begin{cases}
-\sin(\nu_j-\alpha){\bf 1}-\cos(\nu_j-\alpha)i\sigma_3, & j ~ \text{odd}, \\
\cos(\nu_j+\alpha){\bf 1}+\sin(\nu_j+\alpha)i\sigma_3, & j ~ \text{even},
\end{cases} \end{align}
where \begin{equation} k_j=\begin{cases}
\frac{\hat{k}_j^2\hat{s}_j}{4}(a_2-a_1)^\frac{1-(-1)^n}{2}(\sin\nu_j)^{-\frac{1+(-1)^n}{2}}, & j ~ \text{odd}, \\
\frac{\sec\nu_j}{4\hat{k}_j^2\hat{s}_j}(a_2-a_1)^\frac{(-1)^n-1}{2}(\sin\nu_j)^\frac{(-1)^n-1}{2}, & j ~ \text{even}. \end{cases}
\end{equation} These equations imply \eqref{m_n}. The relation $b_j-b_l=(a_2-a_1)\sin(\nu_l+\nu_j)\sin(\nu_l-\nu_j)$, which follows from $b_k=a_2-(a_2-a_1)\sin^2\nu_k$, is used to complete the proof. \end{proof}
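The relation just used follows from $b_k=a_2-(a_2-a_1)\sin^2\nu_k$ and elementary trigonometry; the following short Python check (with illustrative values of $a_1,a_2$ and of the angles, not taken from the text) confirms it numerically.
\begin{verbatim}
import numpy as np

a1, a2 = 0.0, 1.0                            # illustrative endpoints
nu = np.array([1.1, 0.8, 0.5, 0.3])          # illustrative angles (decreasing <=> b increasing)
b = a2 - (a2 - a1) * np.sin(nu) ** 2         # since sin^2(nu_k) = (a2 - b_k)/(a2 - a1)

j, l = 0, 2                                  # any pair of (0-based) indices
lhs = b[j] - b[l]
rhs = (a2 - a1) * np.sin(nu[l] + nu[j]) * np.sin(nu[l] - nu[j])
print(lhs, rhs)                              # the two numbers agree to machine precision
\end{verbatim}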
We now show that $\tilde{R}_{JJ}, \tilde{R}_{JE}, \tilde{R}_{EJ}, \tilde{R}_{EE}$ can be expressed as quadratic forms. Let $N\in\mathbb{N}$ be such that $n=2N$ if $n$ is even and $n=2N+1$ if $n$ is odd, and set \begin{align*}
\tilde{N}:=\begin{cases}
N, & n ~ \text{even}, \\
N+1, & n ~ \text{odd}.
\end{cases} \end{align*} We define the $n\times n$ block diagonal matrix \begin{align}\label{M-def}
\mathbb{M}:=\begin{bmatrix} \mathbb{M}_1 & 0 \\ 0 & \mathbb{M}_2 \end{bmatrix}, \end{align} where $\mathbb{M}_1, \mathbb{M}_2$ are symmetric blocks of size $\tilde{N}\times\tilde{N}$, $N\times N$, respectively, with diagonal entries \begin{align}
\left[\mathbb{M}_1\right]_{ll}&=k_{2l-1}\cos(\nu_{2l-1}-\alpha)-2k_{2l-1}\sum_{\substack{j=1 \\ j\neq l}}^{\tilde{N}}\frac{k_{2j-1}\sin(\nu_{2l-1}-\nu_{2j-1})}{b_{2j-1}-b_{2l-1}}, \label{M1 diag} \\
\left[\mathbb{M}_2\right]_{ll}&=k_{2l}\sin(\nu_{2l}+\alpha)-2k_{2l}\sum_{\substack{j=1 \\ j\neq l}}^{N}\frac{k_{2j}\sin(\nu_{2l}-\nu_{2j})}{b_{2j}-b_{2l}}, \label{M2 diag} \end{align} and off-diagonal entries \begin{align}\label{offdiag-ent}
\left[\mathbb{M}_1\right]_{jl}=\frac{2k_{2j-1}k_{2l-1}\sin(\nu_{2j-1}-\nu_{2l-1})}{b_{2l-1}-b_{2j-1}}, \qquad \left[\mathbb{M}_2\right]_{jl}=\frac{2k_{2j}k_{2l}\sin(\nu_{2j}-\nu_{2l})}{b_{2l}-b_{2j}}, \end{align} where $l\neq j$. We show that $\mathbb{M}$ is positive definite in Proposition \ref{prop: M pos def}.
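To make the definition of $\mathbb{M}$ concrete, the following Python sketch assembles $\mathbb{M}$ from \eqref{tgg inf}, \eqref{k_j} and \eqref{M1 diag}-\eqref{offdiag-ent} for illustrative values of $a_1,a_2$ and $b_1,\ldots,b_n$ (not taken from the text) and prints its eigenvalues; they are expected to be positive in view of Proposition \ref{prop: M pos def}.
\begin{verbatim}
import numpy as np

a1, a2 = 0.0, 1.0
b = np.array([0.2, 0.45, 0.7, 0.9])                 # illustrative double points, n = 4
n = len(b)
nu = np.arcsin(np.sqrt((a2 - b) / (a2 - a1)))       # the angles nu_k
alpha = sum((-1) ** (m + 1) * nu[m - 1] for m in range(1, n + 1))   # alpha = Im g~(infinity)

def k_coeff(j):                                     # eq. (k_j), 1-based index j
    others = [l for l in range(1, n + 1) if l != j]
    if j % 2 == 1:
        prod = np.prod([np.sin(abs(nu[l - 1] - nu[j - 1])) ** ((-1) ** l) for l in others])
        return 0.5 * (a2 - a1) * np.cos(nu[j - 1]) * np.sin(nu[j - 1]) ** ((1 - (-1) ** n) // 2) * prod
    prod = np.prod([np.sin(abs(nu[l - 1] - nu[j - 1])) ** ((-1) ** (l + 1)) for l in others])
    return 0.5 * (a2 - a1) * np.sin(nu[j - 1]) ** ((1 + (-1) ** n) // 2) * prod

k = np.array([k_coeff(j) for j in range(1, n + 1)])
odd, even = list(range(1, n + 1, 2)), list(range(2, n + 1, 2))

def block(idx, diag_term):                          # builds M_1 (idx = odd) or M_2 (idx = even)
    B = np.zeros((len(idx), len(idx)))
    for p, jp in enumerate(idx):
        for q, jq in enumerate(idx):
            if p == q:
                corr = sum(k[jr - 1] * np.sin(nu[jp - 1] - nu[jr - 1]) / (b[jr - 1] - b[jp - 1])
                           for jr in idx if jr != jp)
                B[p, p] = k[jp - 1] * diag_term(nu[jp - 1]) - 2 * k[jp - 1] * corr
            else:
                B[p, q] = 2 * k[jp - 1] * k[jq - 1] * np.sin(nu[jp - 1] - nu[jq - 1]) / (b[jq - 1] - b[jp - 1])
    return B

M1 = block(odd, lambda t: np.cos(t - alpha))
M2 = block(even, lambda t: np.sin(t + alpha))
Z = np.zeros((len(odd), len(even)))
M = np.block([[M1, Z], [Z.T, M2]])
print(np.linalg.eigvalsh(M))     # compare with the positive definiteness asserted in the text
\end{verbatim}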
\begin{theorem}\label{theo-dim_n}
Let $\lambda\in[-1,1]\setminus\{0\}$ and, with abuse of notation, $\varkappa=\varkappa(|\lambda|)$. We define $\vec{f}, \vec{h}:(\mathring{J}\cup\mathring{E})\times\left([-1,1]\setminus\{0\}\right)\to\mathbb{R}^n$ as \begin{align}
\vec{f}^{\hspace{0.4mm}t}(x;\varkappa)&=\begin{bmatrix} \frac{\phi_-(x;\varkappa)}{x-b_1} & \cdots & \frac{\phi_-(x;\varkappa)}{x-b_{2\tilde{N}-1}} & \frac{\phi_+(x;\varkappa)}{x-b_2} & \cdots & \frac{\phi_+(x;\varkappa)}{x-b_{2N}} \end{bmatrix}, \\
\vec{h}^{\hspace{0.4mm}t}(x;\varkappa)&=\begin{bmatrix} \frac{\tilde{\phi}_-(x;\varkappa)}{x-b_1} & \cdots & \frac{\tilde{\phi}_-(x;\varkappa)}{x-b_{2\tilde{N}-1}} & \frac{\tilde{\phi}_+(x;\varkappa)}{x-b_2} & \cdots & \frac{\tilde{\phi}_+(x;\varkappa)}{x-b_{2N}} \end{bmatrix}, \end{align} where \begin{align}\label{phi pm}
\phi_{\pm}(x;\varkappa)=\Re\left(e^{i\varkappa \Im{\mathfrak g}(x_+)}B_{n}^{\pm\frac 14}(x_+)\right), ~~~ \tilde{\phi}_\pm(x;\varkappa)=\pm\Re\left(e^{-i\varkappa\Im{\mathfrak g}(x_+)}B_{n}^{\pm\frac{1}{4}}(x_+)\right). \end{align} We mention that the subscript $\pm$ in $\phi_\pm$, $\tilde{\phi}_\pm$ is merely convenient notation and should not be interpreted as a boundary value. \begin{enumerate}
\item For $x,y\in \mathring{E}$,
\begin{align}\label{REE quad-form}
\tilde{R}_{EE}(x,y;\lambda)=\mathrm{sgn}(\lambda)\frac{4}{i\pi}\vec{h}^{\hspace{0.4mm}t}(y;\varkappa)\mathbb{M}\vec{h}(x;\varkappa).
\end{align}
\item For $x\in\mathring{E}$ and $y\in\mathring{J}$,
\begin{align}\label{REJ quad-form}
\tilde{R}_{EJ}(x,y;\lambda)=\frac{4}{i\pi}\vec{h}^{\hspace{0.4mm}t}(y;\varkappa)\mathbb{M}\vec{f}(x;\varkappa).
\end{align}
\item For $x\in\mathring{J}$ and $y\in\mathring{E}$,
\begin{align}\label{RJE quad-form}
\tilde{R}_{JE}(x,y;\lambda)=\frac{4}{i\pi}\vec{f}^{\hspace{0.4mm}t}(y;\varkappa)\mathbb{M}\vec{h}(x;\varkappa).
\end{align}
\item For $x,y\in \mathring{J}$,
\begin{equation}\label{RJJ quad-form}
\tilde{R}_{JJ}(x,y;\lambda) = \mathrm{sgn}(\lambda)\frac{4}{i\pi}\vec{f}^{\hspace{0.4mm}t}(y;\varkappa)\mathbb{M}\vec{f}(x;\varkappa).
\end{equation} \end{enumerate} \end{theorem}
\begin{proof} We assume all multi-valued functions are evaluated on the upper shore of $\mathring{J}\cup \mathring{E}$. We prove this Theorem by evaluating the determinants in \eqref{tREE}, \eqref{tREJ}, \eqref{tRJE}, \eqref{tRJJ}. Taking $x,y\in \mathring{J}$ and using \eqref{wh Psi}, \eqref{breve Psi} we have \begin{align}
\Re[e^{i\varkappa\Im{\mathfrak g}(y)}\widehat{\Psi}_{2}(y)]&=\frac{1}{2}\phi_+(y;\varkappa)\begin{bmatrix} 1 \\ 1 \end{bmatrix}+\frac{1}{2}\phi_-(y;\varkappa)\begin{bmatrix} -1 \\ 1 \end{bmatrix}+\sum_{j=1}^n\frac{m_j}{y-b_j}\phi_{(-1)^j}(y;\varkappa)\begin{bmatrix} (-1)^j \\ 1 \end{bmatrix} \label{Re hat Psi2} \\
\Re[e^{i\varkappa\Im{\mathfrak g}(x)}\breve{\Psi}_{2}(x)]&=\sum_{j=1}^n\frac{n_j}{x-b_j}\phi_{(-1)^j}(x;\varkappa)\begin{bmatrix} (-1)^j \\ -1 \end{bmatrix}, \label{Re tilde Psi2} \end{align}
where $\phi_{(-1)^j}=\phi_+$ for $j$ even and $\phi_{(-1)^j}=\phi_-$ for $j$ odd. Now evaluating the first determinant in \eqref{tRJJ}, we find \begin{multline} \label{1st_det} \begin{vmatrix} \Re[e^{i\varkappa\Im{\mathfrak g}(y)}\widehat{\Psi}_{2}(y)] & \Re[e^{i\varkappa\Im{\mathfrak g}(x)}\breve{\Psi}_{2}(x)] \end{vmatrix}=\sum_{j=1}^n\frac{n_j\phi_{(-1)^j}(y;\varkappa)\phi_{(-1)^j}(x;\varkappa)}{(-1)^{j+1}(x-b_j)} \\ +2\sum_{\substack{j,l=1 \\ j=l\mod2}}^{n}\frac{m_jn_l\phi_{(-1)^j}(y;\varkappa)\phi_{(-1)^j}(x;\varkappa)}{(-1)^{j+1}(y-b_j)(x-b_l)}. \end{multline} Using \eqref{1st_det}, the numerator of \eqref{tRJJ} is \begin{multline}\label{det-double} \begin{vmatrix} \Re[e^{i\varkappa\Im{\mathfrak g}(y)}\widehat{\Psi}_{2}(y)] & \Re[e^{i\varkappa\Im{\mathfrak g}(x)}\breve{\Psi}_{2}(x)] \end{vmatrix}-\begin{vmatrix} \Re[e^{i\varkappa\Im{\mathfrak g}(x)}\widehat{\Psi}_{2}(x)] & \Re[e^{i\varkappa\Im{\mathfrak g}(y)}\breve{\Psi}_{2}(y)] \end{vmatrix}= \\ (x-y)\Bigg(\sum_{j=1}^n\frac{n_j\phi_{(-1)^j}(y;\varkappa)\phi_{(-1)^j}(x;\varkappa)}{(-1)^{j}(y-b_j)(x-b_j)}-2\sum_{\substack{j,l=1,j<l \\ j=l {\,\hbox{mod}\, 2}}}^n k_jk_l\frac{\sin(\nu_j-\nu_l)(b_l-b_j)\phi_{(-1)^j}(y;\varkappa)\phi_{(-1)^j}(x;\varkappa)}{(y-b_j)(y-b_l)(x-b_j)(x-b_l)}\Bigg), \end{multline} where we have used $n_{l}m_j-n_jm_{l}=(-1)^{j+1}k_jk_l\sin(\nu_j-\nu_l)$, where $l>j$. Now using \begin{align*}
\frac{b_l-b_j}{(y-b_j)(x-b_l)(y-b_l)(x-b_j)}=\frac{1}{b_l-b_j}\left(\frac{1}{y-b_j}-\frac{1}{y-b_l}\right)\left(\frac{1}{x-b_j}-\frac{1}{x-b_l}\right), \end{align*} we see that \eqref{det-double} is equivalent to \begin{multline}\label{det-single}
(x-y)\Bigg[\sum_{j=1}^n\frac{\phi_{(-1)^j}(y;\varkappa)\phi_{(-1)^j}(x;\varkappa)}{(y-b_j)(x-b_j)}\Bigg((-1)^{j}n_j-2k_j\sum_{\substack{l=1,j\neq l \\ l=j\mod2}}^{n}\frac{k_l\sin(\nu_j-\nu_l)}{b_l-b_j}\Bigg) \\
+2\sum_{\substack{j,l=1, j<l \\ j=l\,\hbox{mod}\, 2}}^n k_jk_l\frac{\sin(\nu_j-\nu_l)\phi_{(-1)^j}(y;\varkappa)\phi_{(-1)^j}(x;\varkappa)}{b_l-b_j}\left(\frac{1}{(y-b_j)(x-b_l)}+\frac{1}{(y-b_l)(x-b_j)}\right)\Bigg]. \end{multline} The statement \eqref{RJJ quad-form} follows from \eqref{m_n}, \eqref{tRJJ}, \eqref{det-single}. Now taking $x,y\in\mathring{E}$ and evaluating the determinant in \eqref{tREE} using \eqref{wh Psi}, \eqref{breve Psi}, we see that \begin{align}
\Re[e^{-i\varkappa\Im{\mathfrak g}(y)}\hat{\Psi}_1(y)]&=\frac{1}{2}\tilde{\phi}_+(y;\varkappa)\begin{bmatrix} 1 \\ 1 \end{bmatrix}+\frac{1}{2}\tilde{\phi}_-(y;\varkappa)\begin{bmatrix} -1 \\ 1 \end{bmatrix}+\sum_{j=1}^n\frac{m_j\tilde{\phi}_{(-1)^j}(y;\varkappa)}{y-b_j}\begin{bmatrix} (-1)^j \\ 1 \end{bmatrix}, \label{Re hat Psi1} \\
\Re[e^{-i\varkappa\Im{\mathfrak g}(x)}\breve{\Psi}_1(x)]&=\sum_{j=1}^n\frac{n_j\tilde{\phi}_{(-1)^j}(x;\varkappa)}{x-b_j}\begin{bmatrix} (-1)^j \\ -1 \end{bmatrix}, \label{Re tilde Psi1} \end{align} where $\tilde{\phi}_{(-1)^j}=\tilde{\phi}_+$ for $j$ even and $\tilde{\phi}_{(-1)^j}=\tilde{\phi}_-$ for $j$ odd. It is clear that \eqref{Re hat Psi1}, \eqref{Re tilde Psi1} are of the same form as \eqref{Re hat Psi2}, \eqref{Re tilde Psi2}, and thus the result \eqref{REE quad-form} follows immediately. The results for $\tilde{R}_{EJ}$ and $\tilde{R}_{JE}$ are obtained similarly. \end{proof}
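The identity $n_lm_j-n_jm_l=(-1)^{j+1}k_jk_l\sin(\nu_j-\nu_l)$ invoked in the proof is an immediate consequence of \eqref{m_n}; a short numerical check (illustrative values, both indices odd, so that $(-1)^{j+1}=1$) reads:
\begin{verbatim}
import numpy as np

k = np.array([0.7, 1.3]); nu = np.array([0.9, 0.4]); alpha = 0.25   # illustrative values
m = -k * np.sin(nu - alpha)                    # m_j for odd j, cf. (m_n)
n_ = -k * np.cos(nu - alpha)                   # n_j for odd j, cf. (m_n)
print(n_[1] * m[0] - n_[0] * m[1], k[0] * k[1] * np.sin(nu[0] - nu[1]))   # equal
\end{verbatim}
The case of two even indices is analogous, with $m_j=k_j\cos(\nu_j+\alpha)$ and $n_j=k_j\sin(\nu_j+\alpha)$.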
\begin{remark}\label{rem: phi tphi} To compute $\phi_\pm,\tilde{\phi}_\pm$ more explicitly, we first verify \begin{align}\label{arg Bn}
\mathrm{arg}(B_{n}^{\pm\frac{1}{4}}(x_+))=\pm\begin{cases}
\frac{\pi}{4}, & x\in \mathring{J}, \\
-\frac{\pi}{4}, & x\in \mathring{E}.
\end{cases} \end{align} Using that fact, we have \begin{align*}
\phi_{\pm}(x;\varkappa)&=\Re\left(e^{i\varkappa \Im{\mathfrak g}(x_+)}B_{n}^{\pm\frac 14}(x_+)\right)=|B_{n}^{\pm\frac{1}{4}}(x_+)|\cos\left(\varkappa\Im{\mathfrak g}(x_+)\pm\frac{\pi}{4}\right), & & x\in \mathring{J}, \\
\tilde{\phi}_\pm(x;\varkappa)&=\pm\Re\left(e^{-i\varkappa\Im{\mathfrak g}(x_+)}B_{n}^{\pm\frac{1}{4}}(x_+)\right)=\pm|B_{n}^{\pm\frac{1}{4}}(x_+)|\cos\left(\varkappa\Im{\mathfrak g}(x_+)\pm\frac{\pi}{4}\right), & & x\in\mathring{E}. \end{align*} \end{remark}
Notice that $\vec{f}, \vec{h}$ can be decomposed as \begin{align}\label{fh decomp} \begin{split}
\vec{f}(x;\varkappa)&=\phi_-(x;\varkappa)\begin{bmatrix} \vec{v}_-(x) \\ \vec{0} \end{bmatrix}+\phi_+(x;\varkappa)\begin{bmatrix} \vec{0} \\ \vec{v}_+(x) \end{bmatrix}, \\
\vec{h}(x;\varkappa)&=\tilde{\phi}_-(x;\varkappa)\begin{bmatrix} \vec{v}_-(x) \\ \vec{0} \end{bmatrix}+\tilde{\phi}_+(x;\varkappa)\begin{bmatrix} \vec{0} \\ \vec{v}_+(x) \end{bmatrix}, \end{split} \end{align} where $\vec{0}$ denotes the (appropriately sized) vector of all $0$ and \begin{align}
\vec{v}_-(x):=\begin{bmatrix} \frac{1}{x-b_1} \\ \vdots \\ \frac{1}{x-b_{2\tilde{N}-1}} \end{bmatrix}, ~~~ \vec{v}_+(x):=\begin{bmatrix} \frac{1}{x-b_2} \\ \vdots \\ \frac{1}{x-b_{2N}} \end{bmatrix}. \end{align} Recall from \eqref{M-def} and Proposition \ref{prop: M pos def} that $\mathbb{M}$ is a block diagonal matrix with positive definite blocks $\mathbb{M}_1$, $\mathbb{M}_2$. Let $C_-$, $C_+$, and $C$ denote the upper triangular Cholesky factor {with a positive diagonal} of $\mathbb{M}_1$, $\mathbb{M}_2$, and $\mathbb{M}$, respectively. Then, \begin{align}\label{M Choleksy}
\mathbb{M}=\begin{bmatrix} C_-^tC_- & 0 \\ 0 & C_+^tC_+ \end{bmatrix}=\begin{bmatrix} C_- & 0 \\ 0 & C_+ \end{bmatrix}^t\begin{bmatrix} C_- & 0 \\ 0 & C_+ \end{bmatrix}=C^tC, \end{align} that is, $C=\mathrm{diag}(C_-,C_+)$. Indeed, $\mathrm{diag}(C_-,C_+)$ is upper triangular with a positive diagonal, and the upper triangular Cholesky factor with a positive diagonal of a positive definite matrix is unique \cite[Lemma~12.1.6]{MSch02}. We conclude this Section with Proposition \ref{Eprime simple} below.
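Before stating it, we record a minimal numerical illustration of the block structure in \eqref{M Choleksy} (random, purely illustrative positive definite blocks, unrelated to $\mathbb{M}_1,\mathbb{M}_2$): the upper triangular Cholesky factor of a block diagonal positive definite matrix is the block diagonal matrix formed by the factors of its blocks.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
A1 = rng.standard_normal((3, 3)); P1 = A1 @ A1.T + 3 * np.eye(3)   # positive definite block
A2 = rng.standard_normal((2, 2)); P2 = A2 @ A2.T + 3 * np.eye(2)   # positive definite block
Z = np.zeros((3, 2))
P = np.block([[P1, Z], [Z.T, P2]])

C1, C2, C = (np.linalg.cholesky(X).T for X in (P1, P2, P))         # upper factors, positive diagonal
C_blk = np.block([[C1, Z], [Z.T, C2]])
print(np.max(np.abs(C - C_blk)))       # close to machine precision: the factor inherits the block structure
\end{verbatim}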
\begin{proposition}\label{Eprime simple}
Suppose $\lambda\in(-1,1)\setminus\{0\}$, $x,y$ are in a compact subset of $\mathring{J}\cup\mathring{E}$ and, with abuse of notation, $\varkappa=\varkappa(|\lambda|)$. Then, as $\lambda\to0$, we have the identity \begin{align}\label{Eprime approx}
|\lambda|E_{ac}'(x,y;\lambda)=\sum_{j=1}^{n}G_j(x;\lambda)G_j(y;\lambda)+\BigO{\varkappa^{-1}}, \end{align} where $E_{ac}'(x,y;\lambda)$ is the kernel of $\frac{d}{d\lambda}\mathcal{E}_{ac}[f](x;\lambda)$, see \eqref{complete res of id} and the text that follows it, \begin{align}
G_j(x;\lambda):=\chi_E(x)\hat{h}_j(x;\lambda)+\chi_J(x)\hat{f}_j(x;\lambda), \end{align} and \begin{align}
\hat{h}_j(x;\lambda)&=A_j(x)\cos\left(\varkappa\Im{\mathfrak g}(x_+)+s(j)\frac{\pi}{4}\right)s(j), \label{h prop} \\
\hat{f}_j(x;\lambda)&=A_j(x)\cos\left(\varkappa\Im{\mathfrak g}(x_+)+s(j)\frac{\pi}{4}\right)\mathrm{sgn}(\lambda), \label{f prop} \end{align} with $s(j)=-1$ for $j=1,\ldots,\tilde{N}$, $s(j)=1$ for $j=\tilde{N}+1,\ldots,n$, \begin{align}\label{A prop}
A_j(x)=\frac{\sqrt{2}}{\pi}\begin{cases}
|B_{n}^{-\frac{1}{4}}(x_+)|[C_-\vec{v}_-(x)]_j, & j=1,\ldots,\tilde{N}, \\
|B_{n}^{\frac{1}{4}}(x_+)|[C_+\vec{v}_+(x)]_{j-\tilde{N}}, & j=\tilde{N}+1,\ldots,n.
\end{cases} \end{align} \end{proposition}
\begin{proof} From \eqref{deriv res of id kernel}, Proposition \ref{prop: Delta R asymp} and Theorem \ref{theo-dim_n} we find that \begin{multline}
|\lambda|E_{ac}'(x,y;\lambda)=\frac{2}{\pi^2}\left(\vec{h}^{\hspace{0.4mm}t}(y;\varkappa)\mathbb{M}\vec{h}(x;\varkappa)\chi_E(x)\chi_E(y)+\mathrm{sgn}(\lambda)\vec{h}^{\hspace{0.4mm}t}(y;\varkappa)\mathbb{M}\vec{f}(x;\varkappa)\chi_E(y)\chi_J(x) \right. \\
\left.+\mathrm{sgn}(\lambda)\vec{f}^{\hspace{0.4mm}t}(y;\varkappa)\mathbb{M}\vec{h}(x;\varkappa)\chi_J(y)\chi_E(x)+\vec{f}^{\hspace{0.4mm}t}(y;\varkappa)\mathbb{M}\vec{f}(x;\varkappa)\chi_J(x)\chi_J(y)\right)+\BigO{\varkappa^{-1}}. \end{multline} Using the Cholesky decomposition $\mathbb{M}=C^tC$ and letting \begin{align}\label{hf def}
\hat{h}_j(x;\lambda):=\frac{\sqrt{2}}{\pi}[C\vec{h}(x;\varkappa)]_j, \qquad \hat{f}_j(x;\lambda):=\mathrm{sgn}(\lambda)\frac{\sqrt{2}}{\pi}[C\vec{f}(x;\varkappa)]_j, \end{align} we have \begin{align}\label{Eac h f}
|\lambda|E_{ac}'(x,y;\lambda)=\sum_{j=1}^n(\chi_E(x)\hat{h}_j(x;\lambda)+\chi_J(x)\hat{f}_j(x;\lambda))(\chi_E(y)\hat{h}_j(y;\lambda)+\chi_J(y)\hat{f}_j(y;\lambda))+\BigO{\varkappa^{-1}}. \end{align} Remark \ref{rem: phi tphi}, \eqref{fh decomp}, and \eqref{M Choleksy} are now used to show that $\hat{h}_j(x;\lambda)$ and $\hat{f}_j(x;\lambda)$ defined in \eqref{hf def} satisfy \eqref{h prop}, \eqref{f prop}, and \eqref{A prop}, thereby completing the proof. \end{proof}
\section{Relating the exact resolution of the identity and its approximation}\label{sec: exact to approx}
\subsection{Diagonalization of $\mathscr{K}$} In our case only one interval $U=J\cup E=[a_1,a_2]$ is present. Lemma \ref{lemma: gamma no pole} implies that the eigenvalues of $\mathscr{K}$ do not accumulate at zero, so there are at most finitely many of them. By the results in the proof of Theorem~5.5 in \cite{BKT20} (see eq. (5.24) and the text following it), all the eigenfunctions of $\mathscr{K}$ are smooth. By statement (2) of Theorem~\ref{thm: spec prop}, all the eigenspaces are finite-dimensional, and statement (1) of Theorem~\ref{thm:main} is proven.
Let $\mathscr{K}_{ac}$ denote the absolutely continuous part of $\mathscr{K}$, i.e. the part of $\mathscr{K}$ in the subspace $H_{ac}\subset L^2(U)$ of absolute continuity with respect to $\mathscr{K}$. Let $g_1,\dots,g_n\in H_{ac}$ be a generating basis for $\mathscr{K}_{ac}$. Consider the matrix distribution function \begin{equation}\label{S-fn} S(\lambda)=(\mathcal E(\lambda) g_j,g_k)_{j,k=1}^n. \end{equation} In \cite{BKT20} the authors construct a self-adjoint operator $B:L^2([-1,1],\mathbb R^n)\to L^2([-1,1],\mathbb R^n)$, which is unitarily equivalent to $\mathscr{K}_{ac}$, i.e. $B=W^\dagger\mathscr{K}_{ac}W$ for a unitary $W:L^2([-1,1],\mathbb R^n)\to H_{ac}$, see \cite[eq. (6.20)]{BKT20}. Moreover, $B\tilde h=\lambda\tilde h$, where $\tilde h$ is a $\mathbb R^n$-valued function of $\lambda$. Clearly, the spectral family of $B$ satisfies ${\mathcal V}(\lambda) \tilde h=\chi_\lambda \tilde h$, where $\chi_\lambda$ is the characteristic function of $(-\infty,\lambda]$ (see, e.g. \cite[Theorem 7.18]{weidm80}). Thus, we can find a convenient generating basis for $B$. Set $\tilde h_j(\lambda):=\vec e_j$, $\lambda\in[-1,1]$, where $\vec e_j$ is the $j$-th standard basis vector in $\mathbb R^n$. Then the collection $\tilde h_1,\dots,\tilde h_n\in L^2([-1,1],\mathbb R^n)$ is a generating basis with the property \begin{equation}\label{gen_set}
\frac{d}{d\lambda}({\mathcal V}(\lambda) \tilde h_j,\tilde h_k)=\delta_{jk},\ 1\leq j,k\leq n,\ \lambda\in [-1,1]. \end{equation} As is easily seen, ${\mathcal V}(\lambda)=W^\dagger\mathcal E(\lambda)W$; hence the collection $g_j=W\tilde h_j$, $1\leq j\leq n$, is a generating basis for $\mathscr{K}_{ac}$, and \begin{equation}\label{S-id}
S'_{jk}(\lambda)=\frac{d}{d\lambda}(\mathcal E(\lambda) g_j,g_k)=\delta_{jk},\ 1\leq j,k\leq n,\ \lambda\in [-1,1]. \end{equation}
By \cite[Theorem p. 291]{ag80}, there is a unitary $T_{ac}:\,H_{ac}\to L^2([-1,1],\mathbb R^n)$, $T_{ac}\phi=\tilde\phi$, so that \begin{equation} \begin{split}\label{isometry}
&\phi=\sum_{j=1}^n \int_{-1}^1 \tilde \phi_j(\lambda) d\mathcal E(\lambda) g_j, ~~~ \phi\in H_{ac},\quad \tilde \phi=(\tilde \phi_1,\dots,\tilde \phi_n)\in L^2([-1,1],\mathbb R^n).
\end{split} \end{equation} Moreover, $T_{ac}\mathscr{K}_{ac}T_{ac}^\dagger\tilde\phi=\lambda\tilde\phi$, $\tilde\phi\in L^2([-1,1],\mathbb R^n)$. By looking at \begin{equation}\label{isometry-distr} (\phi,f)=\sum_{j=1}^n \int_{-1}^1 \tilde \phi_j(\lambda) d(\mathcal E(\lambda) g_j,f),\ \phi,f\in H_{ac}, \end{equation} we conclude by duality that \begin{equation}\label{isometry-3-alt} T_{ac}f=\left((\mathcal E'(\lambda) g_1,f),\dots,(\mathcal E'(\lambda) g_n,f)\right),\ f\in H_{ac}. \end{equation} Fix any $g\in H_{ac}$. Since $(\mathcal E(\lambda) g,f)=(\mathcal E(\lambda) g,f_{ac})$ for any $f\in L^2(U)$, the above formula naturally extends $T_{ac}$ to all of $L^2(U)$. In particular, $T_{ac}f=0$ if $f\in H_p$. The convergence of the integral in \eqref{isometry-distr} follows from the inequality \begin{equation}\label{E-ineq}
|d(\mathcal E(\lambda) g,f)|\leq \left[d(\mathcal E(\lambda) g,g)\,d(\mathcal E(\lambda) f,f)\right]^{1/2},\ g,f\in H_{ac}, \end{equation} see \cite[p. 356]{Kato}. Here we used \eqref{S-id} and that $\tilde\phi\in L^2([-1,1],\mathbb R^n)$.
Of interest to us are the following operators that make up the extended $T_{ac}$: \begin{equation}\label{isometry-part} {\mathcal Q}_j: L^2(U)\to L^2([-1,1]),\ {\mathcal Q}_j f:=(\mathcal E'(\lambda) g_j,f),\ 1\leq j\leq n, \end{equation} which are bounded, because $T_{ac}$ is bounded.
Each operator ${\mathcal Q}_j$ can be viewed as a continuous map $C_0^\infty(\mathring{J}\cup \mathring{E})\to \mathcal D'([-1,1])$. Denote by $Q_j(x;\lambda):=\mathcal E'(\lambda) g_j$ the Schwartz kernel of ${\mathcal Q}_j$ (see \cite[Section 5.2]{hor1}). For any interval $\Delta\subset [-1,1]\setminus \{0\}$, \begin{equation}\begin{split}\label{dot-prod} (\mathcal E_{ac}(\Delta)\phi,f)=&(\mathcal E_{ac}(\Delta)\phi_{ac},f)= \int_{\Delta}(\mathcal E_{ac}'(\lambda)\phi_{ac},f)d\lambda=\int_\Delta \sum_j \tilde\phi_j(\lambda)(\mathcal E'(\lambda)g_j,f)d\lambda\\ =&\int_\Delta \sum_j (\mathcal E'(\lambda)g_j,\phi)(\mathcal E'(\lambda)g_j,f)d\lambda,\quad \phi,f\in C_0^\infty(\mathring{J}\cup \mathring{E}), \end{split} \end{equation} where $\tilde\phi=T_{ac}\phi$.
Therefore, the kernel of $\mathcal E_{ac}'(\lambda)$ can be written in the form \begin{equation}\label{resol}
E_{ac}'(x,y;\lambda)=\sum_{j=1}^n Q_j(x;\lambda)Q_j(y;\lambda),\ |\lambda|\in (0,1]. \end{equation} The above equality is understood in the weak sense, i.e. given any $\phi,f\in C_0^\infty(\mathring{J}\cup \mathring{E})$, one has \begin{equation}\label{resol-weak} \int_U\int_U E_{ac}'(x,y;\lambda)\phi(x)f(y)dxdy=\sum_{j=1}^n \int_U Q_j(x;\lambda)\phi(x)dx\int_U Q_j(y;\lambda)f(y)dy,\
|\lambda|\in (0,1]. \end{equation}
The left side of \eqref{resol-weak} is a $C^\infty([-1,1]\setminus\{0\})$ function by Proposition~\ref{prop: smoothness of res of id}.
The right side is in $L^1([-1,1])$, because ${\mathcal Q}_j:\,L^2(U)\to L^2([-1,1])$, $1\leq j\leq n$, are all bounded.
As is well-known, the decomposition in \eqref{resol} is not unique. Let $V(\lambda)$ be an orthogonal matrix-valued function. Set $\vec Q^{(1)}(x;\lambda):=V(\lambda)\vec Q(x;\lambda)$, where $\vec Q(x;\lambda)=(Q_1(x;\lambda),\dots,Q_n(x;\lambda))^t$. Clearly, \eqref{resol} does not change its form if $\vec Q$ is replaced with $\vec Q^{(1)}$: \begin{equation}\label{resol-1}
E_{ac}'(x,y;\lambda)=\sum_{j=1}^n Q_j^{(1)}(x;\lambda)Q_j^{(1)}(y;\lambda),\ |\lambda|\in (0,1]. \end{equation}
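The invariance expressed by \eqref{resol-1} is elementary: replacing $\vec Q$ by $V(\lambda)\vec Q$ with $V(\lambda)$ orthogonal leaves the sum of products unchanged. The toy Python check below (random data on a discretization grid, purely illustrative) makes this explicit.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n, npts = 3, 50
Q = rng.standard_normal((n, npts))                  # rows: discretized Q_j(.;lambda) at a fixed lambda
V, _ = np.linalg.qr(rng.standard_normal((n, n)))    # an orthogonal n x n matrix

E1 = Q.T @ Q                                        # sum_j Q_j(x) Q_j(y) on the grid
E2 = (V @ Q).T @ (V @ Q)                            # the same sum with Q replaced by V Q
print(np.max(np.abs(E1 - E2)))                      # of the order of machine precision
\end{verbatim}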
Let $V:\,L^2([-1,1],{\mathbb R}^n)\to L^2([-1,1],{\mathbb R}^n)$ be the operator of multiplication with $V(\lambda)$. Clearly, $V$ is unitary. Define \begin{equation}\label{isometry-part-alt}\begin{split} &T_{ac}^{(1)} f:=V T_{ac}f,\\ &(T_{ac}^{(1)} f)(\lambda)=\left(\int_U Q_1^{(1)}(x;\lambda) f(x)dx,\dots,\int_U Q_n^{(1)}(x;\lambda) f(x)dx\right):\ L^2(U)\to L^2([-1,1],{\mathbb R}^n). \end{split} \end{equation} By construction, $T_{ac}\mathscr{K}_{ac} T_{ac}^\dagger \tilde f=\lambda\tilde f$ for any $\tilde f\in L^2([-1,1],{\mathbb R}^n)$. Therefore \begin{equation}\label{diag-remains} T_{ac}^{(1)}\mathscr{K}_{ac} (T_{ac}^{(1)})^\dagger =VT_{ac}\mathscr{K}_{ac} T_{ac}^\dagger V^\dagger=V\lambda V^\dagger=\lambda \end{equation} on $L^2([-1,1],{\mathbb R}^n)$. Thus, by appending an appropriate projector onto $H_p$ (the subspace of discontinuity with respect to $\mathscr{K}$) to $T_{ac}^{(1)}$, we obtain a unitary transformation $T^{(1)}$ that also canonically diagonalizes $\mathscr{K}$. Our argument proves statement (2) of Theorem~\ref{thm:main}.
\subsection{Piecewise smooth diagonalization of $\mathscr{K}$}\label{sec:diagK}
For another set of functions $h_j\in C_0^\infty(\mathring{J}\cup\mathring{E})$, $1\leq j\leq n$, let $h_{ac}^j$ be the orthogonal projection of $h_j$ onto $H_{ac}$. By \cite[eq. (5.11), p. 355]{Kato}, \begin{equation}\label{diff Epr form}
\int_{-1}^{1}\left|(\mathcal E'(\lambda) g_j,g_k)-(\mathcal E'(\lambda) h_{ac}^j,h_{ac}^k)\right|d\lambda \leq \Vert g_j\Vert \Vert g_k-h_{ac}^k\Vert+\Vert g_j-h_{ac}^j\Vert\Vert g_k\Vert. \end{equation} Similarly to \eqref{S-fn}, denote $S_{\vec h}(\lambda)=(\mathcal E_{ac}(\lambda) h_j,h_k)_{j,k=1}^n$. Since $\Vert g_j-h_{ac}^j\Vert\leq \Vert g_j-h_j\Vert$ and $\det\left(S_{\vec h}'(\lambda)\right)$ is analytic in $\lambda$ by Proposition~\ref{prop: smoothness of res of id}, we can find $h_j\in C_0^\infty(\mathring{J}\cup\mathring{E})$ with $\Vert h_j-g_j\Vert\ll 1$, $1\leq j\leq n$, such that $\det\left(S_{\vec h}'(\lambda)\right)>0$ except at a countable collection of $\lambda$, which may accumulate at $\lambda=0$.
Let $\Lambda$ be an open interval between two consecutive roots (either positive or negative) of $\det\left(S_{\vec h}'(\lambda)\right)$, so that $\det\left(S_{\vec h}'(\lambda)\right)>0$ on $\Lambda$. Combined with \cite[Lemma~12.1.6]{MSch02}, this implies that there exists a Cholesky factor $\hat Q^{(1)}(\lambda)$ of $S_{\vec h}'(\lambda)$ so that $\hat Q^{(1)}(\lambda)$ is a smooth matrix-valued function of $\lambda\in\Lambda$, and $\det (\hat Q^{(1)}(\lambda))>0$, $\lambda\in\Lambda$. To be precise, \cite[Lemma~12.1.6]{MSch02} establishes only continuity, but the same proof can be modified to show that $\hat Q^{(1)}(\lambda)$ is differentiable any number of times as long as $S_{\vec h}'(\lambda)$ is smooth and nondegenerate.
By \eqref{resol-weak}, $\hat Q(\lambda)$, where \begin{equation}\label{Qphi-appl}
\hat Q(\lambda):={\mathcal Q}(\lambda)\vec h,\ {\mathcal Q}(\lambda):=({\mathcal Q}_1(\lambda),\dots,{\mathcal Q}_n(\lambda))^t,\ \vec h:=(h_1,\dots,h_n),\ |\lambda|\in(0,1], \end{equation} (cf. \eqref{isometry-part} for the definition of the operators ${\mathcal Q}_j$), is also a Cholesky factor of $S_{\vec h}'(\lambda)$. By replacing $Q_1(x;\lambda)$ with $-Q_1(x;\lambda)$ for $\lambda\in\Lambda$ wherever necessary, we can ensure that $\det (\hat Q(\lambda))>0$ for all $\lambda\in\Lambda$. Therefore \begin{equation}\label{two-chols}\begin{split} S_{\vec h}'(\lambda)=(\hat Q^{(1)})^t(\lambda)\hat Q^{(1)}(\lambda)=\hat Q^t(\lambda)\hat Q(\lambda),\ \lambda\in\Lambda.
\end{split} \end{equation}
\begin{lemma}\label{lem:can_chol}
If $\det \hat M\not=0$, and $\hat Q_1$ and $\hat Q_2$ are two Cholesky factors of $\hat M$, then $\hat Q_2=V \hat Q_1$ for some orthogonal matrix $V$. \end{lemma} \noindent The proof is immediate, because $V=\hat Q_2\hat Q_1^{-1}$ satisfies $V^tV={\bf 1}$.
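As a concrete (purely illustrative) instance of Lemma \ref{lem:can_chol}, one may take $\hat Q_1$ to be the upper triangular Cholesky factor of a positive definite matrix $\hat M$ and $\hat Q_2$ its symmetric square root; then $V=\hat Q_2\hat Q_1^{-1}$ is orthogonal. A short Python check:
\begin{verbatim}
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 4))
Mhat = A @ A.T + 4 * np.eye(4)                 # a nondegenerate (positive definite) matrix
Q1 = np.linalg.cholesky(Mhat).T                # upper triangular factor: Q1^t Q1 = Mhat
Q2 = sqrtm(Mhat).real                          # symmetric square root:   Q2^t Q2 = Mhat
V = Q2 @ np.linalg.inv(Q1)
print(np.max(np.abs(V.T @ V - np.eye(4))))     # V is orthogonal up to rounding errors
\end{verbatim}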
By the Lemma, $\hat Q^{(1)}(\lambda)=V(\lambda)\hat Q(\lambda)$ for an orthogonal matrix-valued function $V(\lambda)$, $\lambda\in\Lambda$. Clearly, $V(\lambda)\in SO(n)$ (i.e., $\det(V(\lambda))\equiv1$), because the determinants of $\hat Q^{(1)}(\lambda)$ and $\hat Q(\lambda)$ are positive. Define a new kernel by $\vec Q^{(1)}(x;\lambda):=V(\lambda)\vec Q(x;\lambda)$. Then, by \eqref{Qphi-appl}, \begin{equation}\label{two-Qs} {\mathcal Q}^{(1)}(\lambda)\vec h=V(\lambda){\mathcal Q}(\lambda)\vec h=V(\lambda)\hat Q(\lambda)=\hat Q^{(1)}(\lambda). \end{equation} Recall that ${\mathcal Q}$ and ${\mathcal Q}^{(1)}$ are column-vectors consisting of the operators ${\mathcal Q}_j$ and ${\mathcal Q}_j^{(1)}$, respectively (cf. \eqref{Qphi-appl}). As mentioned above (see \eqref{resol-1} and the text preceding it), the kernel $\vec Q^{(1)}(x;\lambda)$ of ${\mathcal Q}^{(1)}$ satisfies \eqref{resol-1} for $\lambda\in\Lambda$.
Consider the equations \begin{equation}\label{smoothness-2}
(\mathcal E_{ac}'(\lambda)h_m,f) =\sum_{j=1}^n ({\mathcal Q}_j^{(1)} h_m)(\lambda)({\mathcal Q}_j^{(1)}f)(\lambda),\ m=1,\dots, n, \end{equation} for any $f\in C_0^\infty(\mathring{J}\cup \mathring{E})$ and $\lambda\in\Lambda$. We solve these equations for $({\mathcal Q}_j^{(1)}f)(\lambda)$, $j=1,\dots, n$, which can be done because the matrix of the system is $\hat Q^{(1)}(\lambda)$ (cf. \eqref{two-Qs}), and the latter is nondegenerate by construction. Since, moreover, $\hat Q^{(1)}\in C^{\infty}(\Lambda)$ by construction, the Schwartz kernel Theorem \cite[Theorem 5.2.1]{hor1} implies that $Q_j^{(1)}\in C^{\infty}\left((\mathring{J}\cup \mathring{E})\times\Lambda\right)$.
Repeating the above argument for all intervals $\Lambda\subset (-1,1)\setminus\{0\}$ such that $\det\left(S_{\vec h}'(\lambda)\right)>0$, $\lambda\in\Lambda$, we construct the kernel $\vec Q^{(1)}(x;\lambda)$ in \eqref{resol-1}, which is smooth except for a countable collection of $\lambda$, which may accumulate at $\lambda=0$. Next we show that such accumulation does not happen.
In Proposition~\ref{Eprime simple} we find another smooth kernel $L\in C^\infty\left((\mathring{J}\cup \mathring{E})\times (\mathring{J}\cup \mathring{E})\times ((-1,1)\setminus\{0\})\right)$, which is close to $|\lambda| E'$ pointwise for all $|\lambda|>0$ sufficiently small (in this case $E'\equiv E_{ac}'$) and admits an expansion similar to \eqref{resol}: \begin{equation}\label{resol-pr}
L(x,y;\lambda)=\sum_{j=1}^n G_j(x;\lambda)G_j(y;\lambda),\, G_j\in C^\infty\left((\mathring{J}\cup \mathring{E})\times ((-1,1)\setminus\{0\})\right),\, x,y\in \mathring{J}\cup \mathring{E},\, |\lambda|\in (0,1). \end{equation}
\begin{lemma} \label{lem:le_appr} Given any interval $I:=[x_l,x_r]\subset \mathring{J}\cup \mathring{E}$, one has \begin{equation}\label{prox}
\max_{x,y\in I}\left||\lambda|E'(x,y;\lambda)-L(x,y;\lambda)\right|=\BigO{\varkappa^{-1}},\ \varkappa=-\ln\left|{\lambda}/{2}\right|\to\infty.
\end{equation} Also, there exist $\delta,c,\lambda_0>0$ such that for any $\lambda'$, $0 <|\lambda'|< \lambda_0$, one can find $x_k(\lambda)\in I$, $1\leq k\leq n$, which are all smooth in a neighborhood of $\lambda'$, and \begin{equation}\label{det-cond}
\det\left(\hat L(\lambda)\right)>\delta,\ \Vert \hat L(\lambda)\Vert\leq c,\ 0 <|\lambda|\leq \lambda_0, \end{equation} where $\hat L(\lambda):=[L(x_m(\lambda),x_k(\lambda);\lambda)]_{m,k=1}^n$. \end{lemma} \begin{proof} The property \eqref{prox} is proven in Proposition~\ref{Eprime simple}, see \eqref{Eprime approx}. Now we prove \eqref{det-cond}.
By Proposition~\ref{Eprime simple}, $G_j(x;\lambda)$ can be written in the form: \begin{equation}\label{appr-fns}
G_j(x;\lambda)=\mathcal A_j^\pm(x)\cos(\varkappa {\mathfrak g}_{\text{im}}(x)+c_j),\ 1\leq j\leq n,\ {\mathfrak g}_{\text{im}}(x):=\Im({\mathfrak g}(x_+)),
\ x\in I, \end{equation} for some real-valued functions $\mathcal A_j^\pm$ and real constants $c_j$, where the functions $\mathcal A_j^\pm(x)$ are analytic in a neighborhood of $I$ and linearly independent. The signs `$+$' and `$-$' in the superscript are taken if $\lambda>0$ and $\lambda<0$, respectively. Using \eqref{h prop}, \eqref{f prop} we have \begin{equation}\label{Ac coefs} \mathcal A_j^\pm(x)=A_j(x)\begin{cases} s(j),& x\in E,\\ \mathrm{sgn}(\lambda),& x\in J, \end{cases}\quad c_j=s(j)\frac\pi4. \end{equation}
Since $\mathcal A_j^\pm(x)$ are linearly independent (as analytic functions), we can find $n$ points $\check x_k\in \mathring{I}$ so that the vectors $(\mathcal A_j^\pm(\check x_1),\dots,\mathcal A_j^\pm(\check x_n))$, $1\leq j\leq n$, are linearly independent. Here and below, the cases $\lambda>0$ and $\lambda<0$ are treated separately.
Find $c_0$ so that $\cos(c_0+c_j)\not=0$ for any $j$. Define $N_k(\varkappa):=\lfloor (\varkappa {\mathfrak g}_{\text{im}}(\check x_k)-c_0)/(2\pi)\rfloor$ and find $x_k(\lambda)\in I$ by solving ${\mathfrak g}_{\text{im}}(x)=(c_0+2\pi N_k(\varkappa))/\varkappa$. Then \begin{equation}\label{g-root finding} {\mathfrak g}_{\text{im}}'(\check x_k)\Delta x+O(\Delta x^2)=\frac{2\pi \left[N_k(\varkappa)-(\varkappa{\mathfrak g}_{\text{im}}(\check x_k)-c_0)/(2\pi)\right]}{\varkappa}=\frac{O(1)}\varkappa, \end{equation} where $\Delta x=x_k(\lambda)-\check x_k$. The magnitude of the $O(1)$ term in \eqref{g-root finding} does not exceed $2\pi$, so \begin{equation}\label{delta x est}
|\Delta x|\lesssim \frac{2\pi}{\min_{x\in I}|{\mathfrak g}_{\text{im}}'(x)|}\frac1\varkappa.
\end{equation} Since $I$ is at a positive distance from the endpoints of $J$ and $E$, statements (3) and (4) of Proposition~\ref{prop: g-function} imply that the denominator in \eqref{delta x est} is positive. Since the $\check x_k\in \mathring{I}$ are all fixed, we can choose $\varkappa_0=-\ln(\lambda_0/2)$ large enough so that $x_k(\lambda)\in I$, $1\leq k\leq n$, whenever $0<|\lambda|\leq\lambda_0$.
By construction, $x_k(\lambda)$ are piecewise smooth functions of $\lambda$, and they satisfy $\varkappa {\mathfrak g}_{\text{im}}(x_k(\lambda))\equiv c_0\,\hbox{mod}\, 2\pi$ and $x_k(\lambda)\to \check x_k$, $\lambda\to 0$, $1\leq k\leq n$.
Clearly, \begin{equation}\label{appr_fns} (G_j(x_k(\lambda);\lambda)_{j,k=1}^n=\left(\mathcal A_j^\pm(x_k(\lambda))\cos(c_0+c_j)\right)_{j,k=1}^n \to \left(\mathcal A_j^\pm(\check x_k)\cos(c_0+c_j)\right)_{j,k=1}^n \text{ as }\lambda\to0, \end{equation} and the last matrix above is non-degenerate by construction. Decreasing $\lambda_0>0$ even further if necessary, all the inequalities in \eqref{det-cond} follow from \eqref{resol-pr} and the definition of $\hat L(\lambda)$ in Lemma~\ref{lem:le_appr}.
The $x_k(\lambda)\in I$ are smooth at $\lambda'$ if $\varkappa'=-\ln|\lambda'/2|$ is not a jump point of any of the $N_k(\varkappa)$, $1\leq k\leq n$. Suppose $N_k(\varkappa)$ is discontinuous at $\varkappa'$ for some $k$. At each such point the value of the jump equals $1$ (if ${\mathfrak g}_{\text{im}}(\check x_k)>0$) or $-1$ (if ${\mathfrak g}_{\text{im}}(\check x_k)<0$).
If ${\mathfrak g}_{\text{im}}(\check x_k)=0$, then $N_k(\varkappa)$ is constant. To avoid the discontinuity at $\varkappa'$, we redefine $N_k(\varkappa)$ by keeping it constant, $N_k(\varkappa)=N_k(\varkappa'-0)$ for $\varkappa\geq\varkappa'$, up to the next discontinuity, where it now jumps by $2$ or $-2$. The rest of $N_k(\varkappa)$ is unchanged.
If several $N_k(\varkappa)$ are discontinuous at $\varkappa'$, each one is modified the same way. If a modification is performed, $O(1)$ in \eqref{g-root finding} is bounded by $4\pi$, so the modified $x_k(\lambda)$ still satisfy \eqref{det-cond}. \end{proof}
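The construction of the points $x_k(\lambda)$ used in the proof is straightforward to implement. The Python sketch below uses a hypothetical monotone stand-in for ${\mathfrak g}_{\text{im}}$ on $I$ and illustrative values of $I$, $\check x_k$, $c_0$ and $\varkappa$ (none of them taken from the text); it reproduces the root-finding step and the resulting bound $|\Delta x|=\BigO{\varkappa^{-1}}$.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

g_im = lambda x: x                     # hypothetical stand-in, monotone on I (illustrative only)
xl, xr = 0.2, 0.8                      # the interval I = [x_l, x_r]
xcheck = [0.3, 0.5, 0.7]               # the fixed points "check x_k"
c0 = 0.0                               # cos(c0 + c_j) != 0 for c_j = +-pi/4
kappa = 300.0                          # kappa = -ln|lambda/2|, large kappa <-> small |lambda|

for xc in xcheck:
    Nk = np.floor((kappa * g_im(xc) - c0) / (2 * np.pi))
    target = (c0 + 2 * np.pi * Nk) / kappa                     # solve g_im(x) = target
    xk = brentq(lambda x: g_im(x) - target, xl, xr)
    print(f"x_k(lambda) = {xk:.5f},  |Delta x| = {abs(xk - xc):.2e}  (2*pi/kappa = {2*np.pi/kappa:.2e})")
\end{verbatim}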
The following Corollary follows immediately from Lemma~\ref{lem:le_appr}.
\begin{corollary} \label{cor:he-nondeg} Given any interval $I:=[x_l,x_r]\subset \mathring{J}\cup \mathring{E}$, there exist $\tilde \delta,\tilde c,\tilde \lambda_0>0$, such that for any $\lambda'$, $0 <|\lambda'|<\tilde \lambda_0$, \begin{equation}\label{det-cond-E}
\det\left(\hat E'(\lambda)\right)>\tilde \delta,\ \Vert \hat E'(\lambda)\Vert\leq \tilde c,\ |\lambda|\in(0,\tilde \lambda_0],
\end{equation} where $\hat E'(\lambda):=\left[|\lambda|E'(x_m(\lambda),x_k(\lambda);\lambda)\right]_{m,k=1}^n$, and $x_k(\lambda)\in I$ are the same as in Lemma~\ref{lem:le_appr}.
\end{corollary}
Note that the factor $|\lambda|$ is included in the definition of $\hat E'(\lambda)$.
By Corollary~\ref{cor:he-nondeg} and \cite[Lemma~12.1.6]{MSch02}, there exists a Cholesky factor $\hat Q^{(2)}(\lambda)$ of $|\lambda|^{-1}\hat E'(\lambda)$, $|\lambda|\in(0,\tilde \lambda_0]$, so that $\hat Q^{(2)}(\lambda)$ is a piecewise smooth matrix-valued function of $\lambda$, and $\det (\hat Q^{(2)}(\lambda))>0$. The factor is smooth wherever $x_k(\lambda)$ are all smooth and, therefore, $\hat E'(\lambda)$ is smooth.
Pick any $\lambda'$, $|\lambda'|\in (0,\tilde\lambda_0]$, where $\vec Q^{(1)}(x;\lambda)$ constructed above (see \eqref{smoothness-2} and the text that follows it) is not smooth. By Corollary~\ref{cor:he-nondeg}, we can find $x_k(\lambda)\in I$, $1\leq k\leq n$, which are smooth in $(\lambda'-\varepsilon,\lambda'+\varepsilon)$ for some small $\varepsilon>0$. By looking at the matrix $|\lambda|^{-1}\hat E'(\lambda)$ and arguing similarly to \eqref{two-chols}--\eqref{smoothness-2}, we get the kernel $\vec Q^{(2)}(x;\lambda)$, which is smooth in this interval.
Consider the interval $(\lambda'-\varepsilon,\lambda')$. On this interval we have two kernels, which are related as follows $\vec Q^{(1)}(x;\lambda)=V_-(\lambda)\vec Q^{(2)}(x;\lambda)$, $\lambda\in(\lambda'-\varepsilon,\lambda')$, for some $SO(n)$-valued function $V_-(\lambda)$. This follows, because \begin{equation}\label{Q demonstr I}\begin{split} &E'(x_m(\lambda),x_k(\lambda);\lambda)=\sum_{j=1}^n Q_j^{(1)}(x_m(\lambda);\lambda)Q_j^{(1)}(x_k(\lambda);\lambda)=\sum_{j=1}^n Q_j^{(2)}(x_m(\lambda);\lambda)Q_j^{(2)}(x_k(\lambda);\lambda),\\ &E'(x_m(\lambda),x;\lambda)=\sum_{j=1}^n Q_j^{(1)}(x_m(\lambda);\lambda)Q_j^{(1)}(x;\lambda)=\sum_{j=1}^n Q_j^{(2)}(x_m(\lambda);\lambda)Q_j^{(2)}(x;\lambda),\\ &1\leq k,m\leq n,\ x\in \mathring{J}\cup \mathring{E},\ \lambda\in(\lambda'-\varepsilon,\lambda'). \end{split} \end{equation} From the first line we obtain that the matrices $(Q_j^{(1)}(x_m(\lambda);\lambda))$ and $(Q_j^{(2)}(x_m(\lambda);\lambda))$ are non-degenerate and are, therefore, related by some $V_-(\lambda)$. As before, we can ensure that the determinant of the second matrix is positive (the determinant of the first matrix is positive by construction). Solving the two equations on the second line for $Q_j^{(1)}(x;\lambda)$ and $Q_j^{(2)}(x;\lambda)$ gives the desired relation.
Since $V_-(\lambda)$ can be changed arbitrarily, we replace it with a smooth $SO(n)$-valued function $\tilde V(\lambda)$, which equals ${\bf 1}$ on $(\lambda'-\varepsilon/3,\lambda')$ and equals $V_-(\lambda)$ on $(\lambda'-\varepsilon,\lambda'-2\varepsilon/3)$. Here we use that $SO(n)$ is connected. The same trick can be applied in $(\lambda',\lambda'+\varepsilon)$, which gives another $SO(n)$-valued function $V_+(\lambda)$ defined on $(\lambda',\lambda'+\varepsilon)$. Smoothly modifying it as well in a similar fashion, we get a smooth function $\tilde V(\lambda)$ defined on $(\lambda'-\varepsilon,\lambda'+\varepsilon)$ and a kernel $\vec Q^{(0)}(x;\lambda)=\tilde V(\lambda)\vec Q^{(2)}(x;\lambda)$, which is smooth across $\lambda'$ and smoothly transitions to $\vec Q^{(1)}(x;\lambda)$ on either side of $\lambda'$. Thus, all the discontinuities of $\vec Q^{(1)}(x;\lambda)$ sufficiently close to $\lambda=0$ can be eliminated.
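The connectivity argument can be made explicit: a $C^\infty$ cutoff produces a smooth $SO(n)$-valued path joining ${\bf 1}$ to a given rotation. In the minimal Python sketch below the target rotation is generated from a skew-symmetric matrix, and the cutoff is the standard smooth transition function; all choices are illustrative.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
n = 3
S = rng.standard_normal((n, n)); S = 0.5 * (S - S.T)        # skew-symmetric generator
V_minus = expm(S)                                           # a fixed element of SO(n)

f = lambda t: np.exp(-1.0 / t) if t > 0 else 0.0            # C^infinity cutoff
s = lambda t: f(t) / (f(t) + f(1.0 - t))                    # smooth 0 -> 1 transition on [0,1]

for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    Vt = expm(s(t) * S)                                     # smooth SO(n)-valued path
    err_orth = np.max(np.abs(Vt.T @ Vt - np.eye(n)))
    print(f"t={t:.2f}  det={np.linalg.det(Vt):+.6f}  orthogonality error={err_orth:.1e}")
# V(0) is the identity, V(1) equals V_minus, and V(t) stays in SO(n) for all t.
\end{verbatim}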
In what follows, the obtained kernel will be denoted $\vec Q^{(0)}(x;\lambda)$. By construction, $Q_j^{(0)}\in\linebreak C^{\infty}\left((\mathring{J}\cup\mathring{E})\times \left((-1,1)\setminus\Xi\right)\right)$, $1\leq j\leq n$, where $\Xi$ is a finite set of points. Differentiating \eqref{resol} with $\vec Q=\vec Q^{(0)}$ with respect to $x$ and $y$ and setting $x=y$ gives \begin{equation}\label{resol diff}
\frac{\partial^n}{\partial x^n}\frac{\partial^n}{\partial y^n} E_{ac}'(x,y;\lambda)|_{y=x}=\sum_{j=1}^n \left(\frac{\partial^nQ_j^{(0)}(x;\lambda)}{\partial x^n} \right)^2,\ |\lambda|\in (0,1]. \end{equation} The property \eqref{E phi 3 II} now follows immediately from Proposition~\ref{prop: smoothness of res of id}. Our argument proves statement (3) of Theorem~\ref{thm:main}.
\subsection{Approximating a smooth diagonalization of $\mathscr{K}$}
Here we construct the functions $Q_j$, which have the properties stated in statement (3) of Theorem~\ref{thm:main} and are close to $G_j$. Throughout this subsection we assume $|\lambda|\in (0,\tilde\lambda_0]$, and we may use any collection $x_k(\lambda)$, $1\leq k\leq n$, constructed in Lemma~\ref{lem:le_appr}; there is no need to adjust any of the $x_k(\lambda)$ to ensure continuity at some $\lambda'$.
By Lemma~\ref{lem:le_appr}, Corollary~\ref{cor:he-nondeg}, and \cite[Lemma~12.1.6]{MSch02}, there exist Cholesky factors $\hat G^{(1)}(\lambda)$ and $\hat Q^{(1)}(\lambda)$ of $\hat L(\lambda)$ and $\hat E'(\lambda)$, respectively, so that $\hat G^{(1)}(\lambda)$ and $\hat Q^{(1)}(\lambda)$ are piecewise smooth matrix-valued functions of $\lambda$, and $\det (\hat G^{(1)}(\lambda)),\det (\hat Q^{(1)}(\lambda))>0$. Both factors are smooth wherever $x_k(\lambda)$ are all smooth and, therefore, $\hat L(\lambda)$ and $\hat E'(\lambda)$ are smooth. We may assume that both $\hat Q^{(1)}(\lambda)$ and $\hat G^{(1)}(\lambda)$ are upper triangular. This can be achieved by applying the appropriate numerical algorithm to $\hat L(\lambda)$ and $\hat E'(\lambda)$ (e.g., as in the proof of \cite[Lemma~12.1.6]{MSch02}). The required form of the Cholesky factors guarantees that if $\hat L(\lambda)$ and $\hat E'(\lambda)$ are close, then $\hat G^{(1)}(\lambda)$ and $\hat Q^{(1)}(\lambda)$ are close as well. Using the same argument as in the previous subsection, these factors lead to the kernels $\vec G^{(1)}(x;\lambda)=V_G(\lambda)\vec G(x;\lambda)$ and $\vec Q^{(1)}(x;\lambda)=V_Q(\lambda)\vec Q^{(0)}(x;\lambda)$ for some $SO(n)$-valued functions $V_G(\lambda)$, $V_Q(\lambda)$.
In this subsection we do not divide $\hat E'$ by $|\lambda|$ when computing its Cholesky factors, so $\hat Q^{(1)}(\lambda)$ here has an extra factor $|\lambda|^{1/2}$ compared with an analogous Cholesky factor of the previous subsection.
The kernels $\vec Q^{(1)}(x;\lambda)$ and $\vec G^{(1)}(x;\lambda)$ are smooth in $x\in\mathring{J}\cup \mathring{E}$ and piecewise-smooth in $\lambda$. The kernel $\vec Q^{(0)}(x;\lambda)$ has at most finitely many jumps in $\lambda$, and $\vec G(x;\lambda)$ (cf. \eqref{resol-pr}) is smooth. Additional jumps in $\lambda$ in $\vec Q^{(1)}(x;\lambda)$ and $\vec G^{(1)}(x;\lambda)$ arise due to the discontinuities in $x_k(\lambda)$, and the latter are reflected in the discontinuities of $V_G(\lambda)$ and $V_Q(\lambda)$.
Let us now formulate a result on stability of the Cholesky factorization. Let $\hat L$ be a positive definite symmetric matrix, and $\hat G$ be its upper triangular Cholesky factor. Let $\Delta \hat L$ be a symmetric perturbation of $\hat L$, and $\Delta \hat G$ be the corresponding upper triangular perturbation of $\hat G$. Suppose \begin{equation}\label{chol-pert} \varepsilon:=\frac{\Vert \Delta \hat L\Vert_F}{\Vert \hat L\Vert_2},\ \kappa_2:=\Vert \hat L\Vert_2\Vert \hat L^{-1}\Vert_2,\ \kappa_2\varepsilon <1. \end{equation} Here $\Vert \cdot \Vert_{F}$ denotes the Frobenius norm of a matrix, and $\Vert \cdot \Vert_2$ is the matrix 2-norm. Then \cite{sun91, ste93} \begin{equation}\label{chol-pert-bnd} \frac{\Vert \Delta \hat G\Vert_F}{\Vert \hat G\Vert_2}\leq \frac1{\sqrt2}\kappa_2\varepsilon+\BigO{\varepsilon^2}. \end{equation} In what follows we do not specify which matrix norm is used, since they are all equivalent for our purposes. By \eqref{prox} and \eqref{det-cond}, \eqref{chol-pert} is satisfied, where $\Delta \hat L(\lambda):=\hat E'(\lambda)-\hat L(\lambda)$ and $\varepsilon=\BigO{\varkappa^{-1}}$. Equation \eqref{chol-pert-bnd} gives \begin{equation}\label{pert-bnd} \frac{\Vert \hat Q(\lambda) - \hat G(\lambda)\Vert}{\Vert \hat G(\lambda)\Vert} = \BigO{\varkappa^{-1}}. \end{equation} Consequently, \eqref{prox}, \eqref{det-cond}, and \eqref{pert-bnd} imply \begin{equation}\label{invm-bnd} \Vert \hat Q^{-1}(\lambda) - \hat G^{-1}(\lambda)\Vert = \BigO{\varkappa^{-1}}. \end{equation}
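A quick numerical illustration of the bound \eqref{chol-pert-bnd} (random positive definite matrix and a small random symmetric perturbation, all values illustrative):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
n = 5
A = rng.standard_normal((n, n)); Lhat = A @ A.T + n * np.eye(n)   # positive definite "hat L"
Ghat = np.linalg.cholesky(Lhat).T                                 # its upper Cholesky factor

dL = rng.standard_normal((n, n)); dL = 1e-6 * 0.5 * (dL + dL.T)   # small symmetric perturbation
Gpert = np.linalg.cholesky(Lhat + dL).T

eps = np.linalg.norm(dL, 'fro') / np.linalg.norm(Lhat, 2)
kappa2 = np.linalg.norm(Lhat, 2) * np.linalg.norm(np.linalg.inv(Lhat), 2)
lhs = np.linalg.norm(Gpert - Ghat, 'fro') / np.linalg.norm(Ghat, 2)
print(f"relative change of the factor: {lhs:.3e};  bound kappa_2*eps/sqrt(2): {kappa2*eps/np.sqrt(2):.3e}")
\end{verbatim}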
For any $x\in \mathring{J}\cup \mathring{E}$, consider two systems of equations \begin{equation}\label{resol-pr-2}\begin{split} L(x_k(\lambda),x;\lambda)=&\sum_{j=1}^n G_{j}^{(1)}(x_k(\lambda);\lambda)G_{j}^{(1)}(x;\lambda),\ 1\leq k\leq n,\\
|\lambda|E'(x_k(\lambda),x;\lambda)=&\sum_{j=1}^n |\lambda|^{1/2}Q_{j}^{(1)}(x_k(\lambda);\lambda)|\lambda|^{1/2}Q_{j}^{(1)}(x;\lambda),\ 1\leq k\leq n, \end{split}
\end{equation} which are solved for the vectors $\vec G^{(1)}(x;\lambda)$ and $|\lambda|^{1/2}\vec Q^{(1)}(x;\lambda)$, respectively. The matrices of the two systems are $\hat G^{(1)}(\lambda)$ and $\hat Q^{(1)}(\lambda)$, respectively. This follows similarly to \eqref{two-Qs}. Combining \eqref{prox} and \eqref{invm-bnd} gives the desired bound \begin{equation}\label{final-bnd}
\Vert \vec G^{(1)}(x;\lambda)-|\lambda|^{1/2}\vec Q^{(1)}(x;\lambda)\Vert = \BigO{\varkappa^{-1}}, \end{equation} which is uniform with respect to $x$ in compact subsets of $\mathring{J}\cup \mathring{E}$.
Recall that left multiplication with $V_G^t(\lambda)$ converts $\vec G^{(1)}(x;\lambda)$ back into the original kernel $\vec G(x;\lambda)$ of \eqref{resol-pr}: $\vec G(x;\lambda)=V_G^t(\lambda)\vec G^{(1)}(x;\lambda)$. Define $\vec Q^{(2)}(x;\lambda):=V_G^t(\lambda)\vec Q^{(1)}(x;\lambda)$.
Multiplication with $V_G^t(\lambda)$ does not affect \eqref{final-bnd}, which therefore holds for the pair $\vec G(x;\lambda)$, $\vec Q^{(2)}(x;\lambda)$. As was mentioned above, $V_G^t(\lambda)$ is smooth unless one or more of $x_k(\lambda)$ have a jump, and $\vec Q^{(1)}(x;\lambda)$ is smooth in $x\in\mathring{J}\cup \mathring{E}$ and piecewise-smooth in $\lambda$. Therefore, $\vec Q^{(2)}(x;\lambda)$ has the same properties.
Finally, we modify $\vec Q^{(2)}(x;\lambda)$ slightly, so that it still satisfies \eqref{resol}, remains close to $\vec G(x;\lambda)$, but is smooth in $\lambda$ for $\lambda$ close to zero. By construction, $\vec Q^{(2)}(x;\lambda)=V(\lambda)\vec Q^{(0)}(x;\lambda)$, where $V(\lambda)=V_G^t(\lambda)V_Q(\lambda)$, and $\Vert \vec G(x;\lambda)-V(\lambda)|\lambda|^{1/2}\vec Q^{(0)}(x;\lambda)\Vert = \BigO{\varkappa^{-1}}$. Both $\vec G(x;\lambda)$ and $\vec Q^{(0)}(x;\lambda)$ are smooth (recall that $|\lambda|\in (0,\tilde\lambda_0]$). Also, by Lemma~\ref{lem:le_appr} and Corollary~\ref{cor:he-nondeg}, the corresponding determinants $\det (\hat G(\lambda))$ and $\det (\hat Q^{(0)}(\lambda))$ are bounded away from zero as $\lambda\to0$. Let $\lambda_m$ be the points where $V(\lambda)$ is discontinuous. Then, $\Vert V(\lambda_m+0)-V(\lambda_m-0)\Vert=\BigO{\varkappa_m^{-1}}$ as $\lambda_m\to0$. Since $SO(n)$ is connected, we can find a small $\varepsilon_m>0$ neighborhood around each $\lambda_m$ and a smooth path $V_{final}(\lambda)\in SO(n)$ that connects $V(\lambda_m-\varepsilon_m)$ with $V(\lambda_m+\varepsilon_m)$ without deviating from them far. In other words, \begin{equation}\label{Vfinal}\begin{split}
&V_{final}(\lambda)={\bf 1},\ |\lambda|\in[\tilde \lambda_0+\varepsilon_0,1];\\
&V_{final}(\lambda)=V(\lambda),\ |\lambda|\in(0,\tilde \lambda_0-\varepsilon_0],\, \lambda\not\in\cup_m(\lambda_m-\varepsilon_m,\lambda_m+\varepsilon_m);\\ &V_{final}\in C^\infty\left((-1,1)\setminus\{0\}\right);\ \Vert V_{final}(\lambda)-V(\lambda)\Vert=\BigO{\varkappa^{-1}},\lambda\to0. \end{split} \end{equation} It is now clear that the kernel $\vec Q_{final}(x;\lambda)=V_{final}(\lambda)\vec Q^{(0)}(x;\lambda)$ has all the required properties. Our argument proves statement (4) of Theorem~\ref{thm:main}.
Our argument establishes that the operator with the kernel $|\lambda|^{-1/2}\vec G(x;\lambda)$ diagonalizes $\mathscr{K}$ to leading order as $\lambda\to0$. Similarly, Corollary~\ref{cor:main} and equation \eqref{Eac h f} imply that approximate diagonalization of $A^\dagger A$ and $AA^\dagger$ is achieved by the operators with the kernels $|\lambda|^{-1/2}(\hat{f}_1(x;\lambda),\dots,\hat{f}_n(x;\lambda))$ and $|\lambda|^{-1/2}(\hat{h}_1(x;\lambda),\dots,\hat{h}_n(x;\lambda))$, respectively.
\subsection{Degree of ill-posedness of inverting $\mathscr{K}$} The discussion in this section allows us to estimate the degree of ill-posedness of inverting $\mathscr{K}$. Consider the equation $\mathscr{K} f=\phi$, where $f,\phi\in L^2(U)$, and $\phi$ is in the range of $\mathscr{K}$. Pick any $x_0\in \mathring{J}\cup \mathring{E}$ and suppose $\text{sing\,supp}(f)=\{x_0\}$. Our goal is to estimate how unstable it is to reconstruct the singularity of $f$ at $x_0$. Thus, we can assume without loss of generality that $\text{supp}(f)\subset \mathring{J}\cup \mathring{E}$. To simplify notation, the kernel $\vec Q_{final}(x;\lambda)$ is denoted $\vec Q(x;\lambda)$ in this subsection.
Pick any $\delta\in(0,1)$. By \eqref{resol}, converting the equation to the spectral domain, i.e. applying the operator $T_{ac}$ of \eqref{isometry}, and then using the first line in \eqref{diag-two-comps} gives \begin{equation}\label{solution}
\lambda \tilde f_j(\lambda)=\tilde \phi_j(\lambda),\ \tilde f_j(\lambda)=\lambda^{-1}\tilde \phi_j(\lambda),\ 1\leq j\leq n,\ |\lambda|\in (0,1], \end{equation} where \begin{equation}\label{fourier_tr}\begin{split}
\tilde f_j(\lambda)=&\int_U Q_j(x;\lambda)f(x)dx,\ \tilde \phi_j(\lambda)=\int_U Q_j(x;\lambda)\phi(x)dx,\ 1\leq j\leq n,\ |\lambda|\in (0,1],\\ f(x)=&\sum_{j=1}^n \int_{-\delta}^{\delta} \lambda^{-1}Q_j(x;\lambda)\tilde \phi_j(\lambda)d\lambda + f_{sm}(x),\ x\in \mathring{J}\cup \mathring{E}. \end{split} \end{equation}
The term $f_{sm}$ can be written in the form \begin{equation}\label{smooth part}\begin{split}
f_{sm}(x)=&\sum_{j=1}^n \int_{|\lambda|\in [\delta,1]} Q_j(x;\lambda)\tilde f_j(\lambda)d\lambda + f_p(x),\ x\in \mathring{J}\cup \mathring{E}, \end{split}
\end{equation} where $f_p$ is the projection of $f$ onto $H_p$. Since $\text{supp}(f)\subset \mathring{J}\cup \mathring{E}$, \eqref{E phi 3 II} implies $\max_{j,|\lambda|\in [\delta,1]}|\tilde f_j(\lambda)|<\infty$. By assertion (1) of Theorem~\ref{thm:main}, $f_p\in C^\infty(\mathring{J}\cup \mathring{E})$. Therefore, $f_{sm}\in C^\infty(\mathring{J}\cup \mathring{E})$ by \eqref{E phi 3 II}. Clearly, the operator that reconstructs $f_{sm}$ from $\phi$ is stable (i.e., bounded between the $L^2$ spaces). Hence the instability of reconstruction comes from a neighborhood of $\lambda=0$.
By Proposition~\ref{Eprime simple}, \eqref{appr-fns}, \eqref{solution}, and \eqref{fourier_tr}, a leading order reconstruction of $f$ from $\phi$ becomes \begin{equation}\label{lead_order}\begin{split}
f(x)\sim & \sum_{j=1}^n \int_{-\delta}^{\delta} \lambda^{-1}|\lambda|^{-1/2}A_j^\pm(x)\cos((-\ln|\lambda/2|) {\mathfrak g}_{\text{im}}(x)+c_j)\tilde \phi_j(\lambda)d\lambda. \end{split} \end{equation}
The sign `$\sim$' here
means that the difference between the left- and right-hand sides is smoother than $f$ whenever $f$ is of a limited smoothness.
Next, we change variables $\lambda\to\varkappa=-\ln|\lambda/2|$ in \eqref{lead_order}. To preserve the $L^2$-norm, we need to multiply $\tilde \phi_j$ by $|\lambda|^{1/2}$. Setting $\tilde \Phi_j^\pm(\varkappa):=|\lambda|^{1/2}\tilde \phi_j(\pm|\lambda|)$, we have \begin{equation}\label{two norms} \int_{\mathbb R}\tilde \phi_j^2(\lambda)d\lambda=\int_0^\infty \left(\tilde \Phi_j^+(\varkappa)\right)^2d\varkappa+\int_0^\infty \left(\tilde \Phi_j^-(\varkappa)\right)^2d\varkappa, \end{equation} and \eqref{lead_order} becomes \begin{equation}\label{lead_order_v2}\begin{split}
&f(x)\sim \frac12\sum_{j=1}^n \int^{\infty}_{-\ln(\delta/2)} e^{\varkappa}\cos(\varkappa {\mathfrak g}_{\text{im}}(x)+c_j)\left[A_j^+(x)\tilde \Phi_j^+(\varkappa)-A_j^-(x)\tilde \Phi_j^-(\varkappa)\right]d\varkappa. \end{split} \end{equation}
Finally, we linearize ${\mathfrak g}_{\text{im}}(x)$ near $x=x_0$: ${\mathfrak g}_{\text{im}}(x)\sim {\mathfrak g}_{\text{im}}(x_0)+{\mathfrak g}_{\text{im}}'(x_0)\Delta x$. This implies that to reconstruct $f$ to leading order near $x_0$ involves computing integrals of the form \begin{equation}\begin{split}
\int^{\infty}_{*} e^{\varkappa}{\begin{Bmatrix}\cos\\ \sin
\end{Bmatrix}}({\mathfrak g}_{\text{im}}'(x_0)\Delta x\varkappa)\tilde G_k(\varkappa)d\varkappa, \end{split}
\end{equation} where $\tilde G_k(\varkappa)$ are obtained from $\tilde \Phi_k^\pm(\varkappa)$ via linear combinations with bounded coefficients depending on $\varkappa$ and $x_0$. Consequently, the degree of ill-posedness is $\exp\left(\varkappa/|{\mathfrak g}_{\text{im}}'(x_0)|\right)$. This means that \begin{enumerate}
\item the inversion of $\mathscr{K}$ is exponentially unstable,
\item the degree of instability of inverting $\mathscr{K}$ is spatially variant,
\item the smaller the value of $|{\mathfrak g}_{\text{im}}'(x_0)|$, the more unstable the reconstruction of $f$ at this point. \end{enumerate}
The same argument allows us to estimate the degree of ill-posedness of inverting the operators $AA^\dagger$ and $A^\dagger A$, and we obtain qualitatively similar conclusions. Indeed, by Corollary~\ref{cor:main}, the operators that diagonalize the absolutely continuous parts of $AA^\dagger$, $A^\dagger A$, are obtained by the appropriate truncation of $T_{ac}$. So the above argument carries over in essentially the same way. The differences are that we have to divide by $\lambda^2$ rather than by $\lambda$, and the spectral interval is $[0,1]$. Hence the degree of ill-posedness becomes $\exp\left(2\varkappa/|{\mathfrak g}_{\text{im}}'(x_0)|\right)$.
\appendix
\section{Hypergeometric Parametrix}
In \cite{BBKT19}, the authors solve RHP \ref{RHPGamma} when $E=[-a,0], J=[0,a]$, where $a>0$, explicitly in terms of hypergeometric functions for $\lambda\in\mathbb{C}\setminus[-1,1]$. For $\lambda\in[-1,1]$, the solution of RHP \ref{RHPGamma} is not unique but two important solutions can be obtained via continuation in $\lambda$ from $\mathbb{C}\setminus[-1,1]$ onto $[-1,1]$ from the upper or lower half plane. We call this explicit solution $\Gamma_4(z;\lambda)$ and refer the reader to \cite[eq. (17)]{BBKT19} for the exact expression. Moreover, the small-$\lambda$ asymptotics of $\Gamma_4(z;\lambda)$ was obtained in various regions of the complex $z$-plane. We briefly summarize these results, as they will be key components in the parametrices at the double points $b_k$, $k=1,\dots,n$, see \eqref{ParamIeLeft}, \eqref{ParamIeRight}.
\subsection{Model solution for the two interval problem with no gap}\label{sect-modsol2} The model RHP for the two symmetrical intervals problem with no gap, considered here, has the jump matrix $-i\sigma_1$ on the interval $(-a,0)$ and the jump matrix $i\sigma_1$ on the interval $(0,a)$. Because of the discontinuity of the jump matrix at $z=0$, we expect solutions to have $\BigO{z^{-\frac{1}{2}}}$ behavior at $z=0$.
Since the jump matrices $\pm i\sigma_1$ at any $z\in(-a,0)\cup(0,a)$ commute with each other, a solution to this RHP can be written as \begin{equation}\label{RHP_4_model0} \tilde\Phi(z)=\left(\frac{(z-a)(z+a)}{z^2}\right)^{\frac{\sigma_1}{4}}. \end{equation} However, because of the $\BigO{z^{-\frac{1}{2}}}$ behavior near $z=0$, this solution of the model RHP is not unique. In fact, the most general solution has the form \begin{equation}\label{RHP_4_model1} \Phi(z)=\left({\bf 1}+ \frac Az\right)\left(\frac{(z-a)(z+a)}{z^2}\right)^{\frac{\sigma_1}{4}}, \end{equation} where \begin{equation}\label{A-mod_4} A= \left[ \begin{array}{cc} x & -x\\ y & -y \end{array} \right] \end{equation} with $x,y\in{\mathbb C}$ being arbitrary constants. It is shown in Theorem \ref{GammaAsmptotics} that we should take $x=y=\frac{i}{2}$ when $\Im[\lambda]\geq0$ and $x=y=-\frac{i}{2}$ when $\Im[\lambda]\leq0$, so the model solution $\Psi_4(z;\varkappa)$ that we will be using is \begin{equation}\label{Psi_4} \Psi_4(z;\varkappa)=\begin{cases} \left({\bf 1}+ \frac {iaB}{2z}\right)\left(\frac{(z-a)(z+a)}{z^2}\right)^{\frac{\sigma_1}{4}}, & \text{ for } \Im\varkappa\leq0, \\ \sigma_1\left({\bf 1}+ \frac {iaB}{2z}\right)\left(\frac{(z-a)(z+a)}{z^2}\right)^{\frac{\sigma_1}{4}}\sigma_1, &\text{ for } \Im\varkappa\geq0, \end{cases} \end{equation} where $B=\sigma_3-i\sigma_2$ and equality in $\geq,\leq$ is to be understood as continuation from the upper, lower half plane, respectively.
One can also obtain the model solution \eqref{Psi_4} following the method, described in Section \ref{ssect-sol_mod_gen}. According to Lemma \ref{lem-mod-sol}, a solution to the model problem can be written as \begin{equation}\label{RHP_4_model2} \Phi_4(z;\varkappa)=e^{\tilde{\mathfrak g}_4(\infty;\varkappa)\sigma_3}\left(\frac{z+a}{z-a}\right)^{\frac{\sigma_1}{4}}e^{-\tilde{\mathfrak g}_4(z;\varkappa)\sigma_3} \end{equation} where $\tilde{\mathfrak g}_4(z;\varkappa)$ solves the scalar RHP \begin{align*}
\tilde{\mathfrak g}_4(z_+;\varkappa) + \tilde{\mathfrak g}_4(z_-;\varkappa)&=0, & & z\in(-a,0), \\
\tilde{\mathfrak g}_4(z_+;\varkappa) + \tilde{\mathfrak g}_4(z_-;\varkappa)&=i\pi\cdot\text{sgn}(\Im\varkappa), & & z\in(0,a). \end{align*} Then \begin{equation}\label{tg4} \tilde{\mathfrak g}_4(z;\varkappa)=\frac{R_2(z)}{2\pi i}\int_0^a\frac{ i\pi \cdot\text{sgn}(\Im\varkappa)}{(\zeta-z)R_{2+}(\zeta)}d\zeta, \end{equation} where $R_2:\mathbb{C}\setminus[-a,a]\to\mathbb{C}$ is defined as $R_2(z)=\sqrt{z^2-a^2}$, satisfies $R_{2+}(z)=-R_{2-}(z)$ for $z\in(-a,a)$, and $R_2(z)=z+\BigO{1}$ as $z\to\infty$.
\begin{proposition}\label{prop-Psi_4} Solutions \eqref{Psi_4} and \eqref{RHP_4_model2} to the model RHP coincide. \end{proposition}
\begin{proof} Let us begin with $\Im\varkappa\leq0$. Any solution to the model problem has form \eqref{RHP_4_model1} with matrix $A$ given by \eqref{A-mod_4}.
So, to prove the Proposition, it is sufficient to show that residues at $z=\infty$ of solutions \eqref{RHP_4_model2} and \eqref{Psi_4} coincide. It is easy to see that the residue of \eqref{Psi_4} is
\begin{equation}\label{res_Psi_4}
\frac {ia}{2}(\sigma_3-i\sigma_2).
\end{equation}
According to \eqref{tg4},
\begin{equation}\label{ass_g_mod4}
\tilde{\mathfrak g}_4(z;\varkappa)=-\frac{i\pi}{4}-\frac{ia}{2z} +\BigO{z^{-2}}
\end{equation}
as $z\rightarrow\infty$.
Substituting \eqref{ass_g_mod4} and
\begin{equation}
\left(\frac{z+a}{z-a}\right)^{\frac{\sigma_1}{4}}=\left(1+\frac{2a}{z-a}\right)^{\frac{\sigma_1}{4}}={\bf 1}+\frac{a\sigma_1}{2z}+\BigO{z^{-2}},
\end{equation}
into \eqref{RHP_4_model2}, we obtain
\begin{equation}\label{RHP_4_model2_ass} \Phi_4(z)=i^{-\frac{\sigma_3}2}\left( {\bf 1}+\frac{a\sigma_1}{2z}+\BigO{z^{-2}} \right)i^{\frac{\sigma_3}2} e^{\frac{ia\sigma_3}{2z}}(1+\BigO{z^{-2}}) ={\bf 1}+\frac{ia}{2z}(\sigma_3-i\sigma_2)+\BigO{z^{-2}}, \end{equation}
which, together with \eqref{res_Psi_4}, proves the Proposition for $\Im\varkappa<0$. Now for $\Im\varkappa\geq0$, we use the fact that $\tilde{\mathfrak g}_4(z;\varkappa)$ is odd in $\varkappa$ and that $\left(\frac{z+a}{z-a}\right)^{\frac{\sigma_1}{4}}$ commutes with $\sigma_1$ to obtain
\begin{align*} \Phi_4(z;\varkappa)=\sigma_1\Psi_4(z;-\varkappa)\sigma_1=\Psi_4(z;\varkappa), \end{align*} as desired. \end{proof}
\subsection{Approximation of $\Gamma_4(z;\lambda)$ on an annulus centered at the double point $z=0$}
\begin{figure}
\caption{\hspace{.05in}The set $\Omega$ and lenses.}
\label{figLenseOmega}
\end{figure}
We are now ready to state the needed result from \cite[Theorem 15]{BBKT19} and refer the reader to that text for a more refined description of the set $\Omega$, as shown in Figure \ref{figLenseOmega}. We recall that the 4 point ${\mathfrak g}$-function is given by \begin{equation}\label{g4} {\mathfrak g}_4(z)=\frac{1}{i\pi}\ln\left(\frac{a+iR_2(z)}{z}\right)-\frac{1}{2}. \end{equation}
\begin{theorem}\label{GammaAsmptotics}
Let $\theta\in(0,\pi/2)$ be fixed. Then, as $\lambda=e^{-\varkappa}\to0$ with $|\Im\varkappa|\leq\pi$, \begin{equation}\label{appr_Gam_4} \Gamma_4(z;\lambda) =
\begin{cases}
\Psi_4(z;\varkappa)\left({\bf 1}+\BigO{\varkappa^{-1}}\right)e^{-\varkappa{\mathfrak g}_4(z)\sigma_3}, &z\in\Omega\setminus(\mathcal{L}_1^{(\pm)}\cup\mathcal{L}_2^{(\pm)}), \\
\Psi_4(z;\varkappa)\left({\bf 1}+\BigO{\varkappa^{-1}}\right)\begin{bmatrix} 1 & 0 \\ \pm ie^{\varkappa(2{\mathfrak g}_4(z)-1)} & 1 \end{bmatrix}e^{-\varkappa{\mathfrak g}_4(z)\sigma_3}, &z\in\Omega\cap\mathcal{L}_1^{(\pm)}, \\
\Psi_4(z;\varkappa)\left({\bf 1}+\BigO{\varkappa^{-1}}\right)\begin{bmatrix} 1 & \mp ie^{-\varkappa(2{\mathfrak g}_4(z)+1)} \\ 0 & 1 \end{bmatrix}e^{-\varkappa{\mathfrak g}_4(z)\sigma_3}, &z\in\Omega\cap\mathcal{L}_2^{(\pm)},
\end{cases} \nonumber \end{equation} uniformly for $z\in\Omega$. See Figure \ref{figLenseOmega} for $\theta, \Omega, \mathcal{L}_j^{(\pm)}$. \end{theorem}
\section{Positive definiteness of the matrix $\mathbb{M}$}\label{sec: appendix pos def}
The objective of this Appendix is to prove Proposition \ref{prop: M pos def}, which claims that the matrix $\mathbb{M}$, defined in \eqref{M-def}, is positive definite. We begin with some preparatory observations and Lemmas. First, recall that $\mathbb{M}$ is the $n\times n$ block diagonal matrix with blocks $\mathbb{M}_1, \mathbb{M}_2$ of size $\tilde{N}\times\tilde{N}, N\times N$, respectively. Using \eqref{M1 diag}-\eqref{offdiag-ent} and the identity $b_j-b_l=(a_2-a_1)\sin(\nu_l+\nu_j)\sin(\nu_l-\nu_j)$, we can see that \begin{align}\label{M1M2 fact}
\mathbb{M}_1=\tilde{K}\tilde{D}+\frac{2\tilde{K}\tilde{C}\tilde{K}}{a_2-a_1}, \hspace{1cm} \mathbb{M}_2=KD+\frac{2KCK}{a_2-a_1}, \end{align} where $\tilde{K}=\text{diag}[\{k_{2j-1}\}_{j=1}^{\tilde{N}}]$, $K=\text{diag}[\{k_{2j}\}_{j=1}^N]$, $\tilde{C}, C$ are the cosecant matrices \begin{align}\label{cosec mat}
\tilde{C}&=[\csc(\nu_{2j-1}+\nu_{2k-1})]_{j,k=1}^{\tilde{N}}, \hspace{1cm} C=[\csc(\nu_{2j}+\nu_{2k})]_{j,k=1}^{N}, \end{align} and \begin{align}
\tilde{D}&=\text{diag}\bigg[\bigg\{\cos(\alpha-\nu_{2m-1})-\sum_{j=1}^{\tilde{N}}\frac{2k_{2j-1}}{(a_2-a_1)\sin(\nu_{2j-1}+\nu_{2m-1})}\bigg\}_{m=1}^{\tilde{N}}\bigg], \label{tilde D} \\
D&=\text{diag}\bigg[\bigg\{\sin(\alpha+\nu_{2m})-\sum_{j=1}^{N}\frac{2k_{2j}}{(a_2-a_1)\sin(\nu_{2j}+\nu_{2m})}\bigg\}_{m=1}^{N}\bigg]. \label{D} \end{align}
\begin{lemma}\label{lemma: tot pos} The kernel $\hat{C}(\theta,\phi):= \csc(\theta+\phi)$ is totally positive on $(0,\frac{\pi}{2})$. \end{lemma} \begin{proof} Let $\xi_j:= {\rm e}^{2i\theta_j}$. We want to prove that \begin{equation} \det \bigg[\hat{C}(\theta_i, \theta_j)\bigg]_{i,j=1}^N>0 \nonumber \end{equation} for all $N\in {\mathbb N}$ and $\theta_j\in (0,\frac{\pi}{2})$. We have \begin{eqnarray} \hat{C}(\theta,\phi) = \frac {2i}{{\rm e}^{i(\theta+\phi)}-{\rm e}^{-i(\theta+\phi)}} = {\rm e}^{i\phi-i\theta}\frac {2i}{{\rm e}^{2i\phi}-{\rm e}^{-2i\theta}}, \nonumber \end{eqnarray} and thus \begin{align*}
\det \bigg[\hat{C}(\theta_i, \theta_j)\bigg]_{i,j=1}^N =\det \bigg[\xi_i^{\frac 12} \ov\xi_j^{\frac 1 2}\frac {2i}{\xi_i-\overline\xi_j}\bigg]_{i,j=1}^N =(2i)^N\det \bigg[\frac {1}{\xi_i-\overline\xi_j}\bigg]_{i,j=1}^N. \end{align*} Using the Cauchy determinant formula, we obtain \begin{align*}
(2i)^N \frac {\prod_{i<j} (\xi_i-\xi_j)(\ov\xi_j-\ov\xi_i)}{\prod_{i,j=1}^N (\xi_i-\overline\xi_j)}=\frac {(2i)^N\prod_{i<j} |\xi_i-\xi_j|^2}{\prod_{i=1}^N (\xi_i-\overline\xi_i) \prod_{i<j} |\ov\xi_i - \xi_j|^2}=\frac{\prod_{i<j}|\xi_i-\xi_j|^2}
{\prod_{j=1}^N\sin(2\theta_j) \prod_{i<j} |\overline\xi_i-\xi_j|^2}. \end{align*} Therefore \begin{align*}
\det \bigg[\hat{C}(\theta_i, \theta_j)\bigg]_{i,j=1}^N=\frac{\prod_{i<j}|\xi_i-\xi_j|^2}
{\prod_{j=1}^N\sin(2\theta_j) \prod_{i<j} |\overline\xi_i-\xi_j|^2}>0 \end{align*} because $\sin(2\theta_j)>0$ for any $\theta_j\in(0,\frac{\pi}{2})$. Since this statement is true for any diagonal minor, the matrix $\left[\hat{C}(\theta_i, \theta_j)\right]_{i,j=1}^N$ is positive definite. \end{proof}
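For instance, for $N=2$ and distinct $\theta_1,\theta_2\in(0,\frac{\pi}{2})$, the identity $\sin(2\theta_1)\sin(2\theta_2)=\sin^2(\theta_1+\theta_2)-\sin^2(\theta_1-\theta_2)$ gives \begin{equation*} \det\begin{bmatrix}\csc(2\theta_1) & \csc(\theta_1+\theta_2)\\ \csc(\theta_1+\theta_2) & \csc(2\theta_2)\end{bmatrix}=\frac{\sin^2(\theta_1-\theta_2)}{\sin(2\theta_1)\sin(2\theta_2)\sin^2(\theta_1+\theta_2)}>0, \end{equation*} in agreement with the general formula above, since $|\xi_1-\xi_2|^2=4\sin^2(\theta_1-\theta_2)$ and $|\overline\xi_1-\xi_2|^2=4\sin^2(\theta_1+\theta_2)$.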
\begin{lemma}\label{lemma: diag pos} The diagonal elements of the matrices $\tilde{D}, D$ are positive for any $n\in\mathbb{N}$. \end{lemma}
\begin{proof} Define $\xi_j=e^{2i\nu_{2j}}$, $\eta_j=e^{2i\nu_{2j-1}}$, polynomials \begin{align*}
Q(z)=\prod_{j=1}^{\tilde{N}}(z-\eta_j), \hspace{1cm} P(z)=\prod_{j=1}^N(z-\xi_j), \end{align*} and rational functions \begin{align*}
F(z)=\frac{ie^{-i\alpha}(z-1)^{\frac{1+(-1)^n}{2}}Q(z)}{2\xi_m^\frac{1}{2}z(z-\overline{\xi}_m)P(z)}, \hspace{1cm} \tilde{F}(z)=\frac{e^{i\alpha}(z+1)(z-1)^\frac{1-(-1)^n}{2}P(z)}{2\eta_m^\frac{1}{2}z(z-\overline{\eta}_m)Q(z)}. \end{align*} Inspecting \eqref{tilde D}, \eqref{D} and applying the identity $\sin(z)=\frac{1}{2i}(e^{iz}-e^{-iz})$, we find that \begin{align*}
\frac{2k_{2j-1}}{(a_2-a_1)\sin(\nu_{2j-1}+\nu_{2m-1})}&=\frac{e^{i\alpha}(\eta_j+1)(\eta_j-1)^{\frac{1-(-1)^n}{2}}P(\eta_j)}{2\eta_j\eta_m^\frac{1}{2}(\eta_j-\overline{\eta}_m)Q'(\eta_j)}=\text{Res}\tilde{F}(z)\big|_{z=\eta_j}, \\
\frac{2k_{2j}}{(a_2-a_1)\sin(\nu_{2j}+\nu_{2m})}&=\frac{ie^{-i\alpha}(\xi_j-1)^{\frac{1+(-1)^n}{2}}Q(\xi_j)}{2\xi_j\xi_m^\frac{1}{2}(\xi_j-\overline{\xi}_m)P'(\xi_j)}=\text{Res}F(z)\big|_{z=\xi_j}. \end{align*} By contour deformation, we have identities \begin{align*}
\sum_{j=1}^{\tilde{N}}\frac{2k_{2j-1}}{(a_2-a_1)\sin(\nu_{2j-1}+\nu_{2m-1})}&=\sum_{j=1}^{\tilde{N}}\text{Res}\tilde{F}(z)\big|_{z=\eta_j}=\text{Res}\tilde{F}(z)\big|_{z=\infty}-\text{Res}\tilde{F}(z)\big|_{z=0}-\text{Res}\tilde{F}(z)\big|_{z=\overline{\eta}_m}, \\
\sum_{j=1}^N\frac{2k_{2j}}{(a_2-a_1)\sin(\nu_{2j}+\nu_{2m})}&=\sum_{j=1}^N\text{Res}F(z)\big|_{z=\xi_j}=\text{Res}F(z)\big|_{z=\infty}-\text{Res}F(z)\big|_{z=0}-\text{Res}F(z)\big|_{z=\overline{\xi}_m}. \end{align*} It can now be verified that \begin{align*}
\text{Res}\tilde{F}(z)\big|_{z=\infty}-\text{Res}\tilde{F}(z)\big|_{z=0}&=\cos(\alpha-\nu_{2m-1}), \\
\text{Res}F(z)\big|_{z=\infty}-\text{Res}F(z)\big|_{z=0}&=\sin(\alpha+\nu_{2m}), \end{align*} and \begin{align*}
\text{Res}\tilde{F}(z)\big|_{z=\overline{\eta}_m}&=\cos(\nu_{2m-1})(\sin\nu_{2m-1})^\frac{1-(-1)^n}{2}\frac{\prod_{s=1}^N\sin(\nu_{2m-1}+\nu_{2s})}{\prod_{s=1}^{\tilde{N}}\sin(\nu_{2m-1}+\nu_{2s-1})}, \\
\text{Res}F(z)\big|_{z=\overline{\xi}_m}&=(\sin\nu_{2m})^\frac{1+(-1)^n}{2}\frac{\prod_{s=1}^{\tilde{N}}\sin(\nu_{2m}+\nu_{2s-1}) }{\prod_{s=1}^N \sin(\nu_{2m}+\nu_{2s})}. \end{align*} Letting $\tilde{D}_m, D_m$ denote the $m$-th diagonal entries of the matrices $\tilde{D}, D$, respectively, we have shown \begin{align}
\tilde{D}_m&=\cos(\nu_{2m-1})(\sin\nu_{2m-1})^\frac{1-(-1)^n}{2}\frac{\prod_{s=1}^N\sin(\nu_{2m-1}+\nu_{2s})}{\prod_{s=1}^{\tilde{N}}\sin(\nu_{2m-1}+\nu_{2s-1})}>0, \label{tDm} \\ D_m&=(\sin\nu_{2m})^\frac{1+(-1)^n}{2}\frac{\prod_{s=1}^{\tilde{N}}\sin(\nu_{2m}+\nu_{2s-1}) }{\prod_{s=1}^N \sin(\nu_{2m}+\nu_{2s})}>0, \label{Dm} \end{align} since $\nu_j\in(0,\frac{\pi}{2})$ for $j=1,\ldots,n$, $\cos(x)>0$ for $x\in(0,\frac{\pi}{2})$, and $\sin(x)>0$ for $x\in(0,\pi)$, completing the proof.
\end{proof}
\begin{proposition}\label{prop: M pos def} The matrix $\mathbb{M}$, defined in \eqref{M-def}, is positive definite for any $n\in\mathbb{N}$. \end{proposition}
\begin{proof} Lemmas \ref{lemma: tot pos}, \ref{lemma: diag pos} have shown that $\tilde{C}, C, \tilde{D}, D$ are positive definite. It is clear from \eqref{k_j} that $k_j>0$ for $j=1,\ldots,n$ and thus $\tilde{K}\tilde{D}, KD, \frac{2\tilde{K}\tilde{C}\tilde{K}}{a_2-a_1}, \frac{2KCK}{a_2-a_1}$ are also positive definite. So by \eqref{M1M2 fact}, we have that $\mathbb{M}_1, \mathbb{M}_2$ are positive definite because the sum of positive definite matrices is positive definite. Finally, we conclude that $\mathbb{M}$ is positive definite because it is a block diagonal matrix with positive definite blocks. \end{proof}
\end{document} | arXiv |
\begin{definition}[Definition:Product Category]
Let $\mathbf C$ and $\mathbf D$ be metacategories.
The '''product category''' $\mathbf C \times \mathbf D$ is the category with:
{{DefineCategory
| ob = $\tuple {X, Y}$, for all $X \in \operatorname {ob} \mathbf C$, $Y \in \operatorname {ob} \mathbf D$
| mor = $\tuple {f, g}: \tuple {X, Y} \to \tuple {X', Y'}$ for all $f: X \to X'$ in $\mathbf C_1$ and $g: Y \to Y'$ in $\mathbf D_1$
| comp = $\tuple {f, g} \circ \tuple {h, k} := \tuple {f \circ h, g \circ k}$, whenever this is defined
| id = $\operatorname {id}_{\tuple {X, Y} } := \tuple {\operatorname {id}_X, \operatorname {id}_Y}$
}}
\end{definition} | ProofWiki |
Statistical stopping criteria for automated screening in systematic reviews
Max W Callaghan1,2,3 &
Finn Müller-Hansen1,3
Systematic Reviews volume 9, Article number: 273 (2020)
Active learning for systematic review screening promises to reduce the human effort required to identify relevant documents for a systematic review. Machines and humans work together, with humans providing training data, and the machine optimising the documents that the humans screen. This enables the identification of all relevant documents after viewing only a fraction of the total documents. However, current approaches lack robust stopping criteria, so that reviewers do not know when they have seen all or a certain proportion of relevant documents. This means that such systems are hard to implement in live reviews. This paper introduces a workflow with flexible statistical stopping criteria, which offer real work reductions on the basis of rejecting a hypothesis of having missed a given recall target with a given level of confidence. The stopping criteria are shown on test datasets to achieve a reliable level of recall, while still providing work reductions of on average 17%. Other methods proposed previously are shown to provide inconsistent recall and work reductions across datasets.
Evidence synthesis technology is a rapidly emerging field that promises to change the practice of evidence synthesis work [1]. Interventions have been proposed at various points in order to reduce the human effort required to produce systematic reviews and other forms of evidence synthesis. A major strand of the literature works on screening: the identification of relevant documents in a set of documents whose relevance is uncertain [2]. This is a time-consuming and repetitive task, and in a research environment with constrained resources and increasing amounts of literature, this may limit the scope of the evidence synthesis projects undertaken. Several papers have developed active learning (AL) approaches [3–7] to reduce the time required to screen documents. This paper sets out how current approaches are unreliable in practice, and outlines and evaluates modifications that would make AL systems ready for live reviews.
Active learning is an iterative process where documents screened by humans are used to train a machine learning model to predict the relevance of unseen papers [8]. The algorithm chooses which studies will next be screened by humans, often those which are likely to be relevant or about which the model is uncertain, in order to generate more labels to feed back to the machine. By prioritising those studies most likely to be relevant, a human reviewer most often identifies all relevant studies—or a given proportion of relevant studies (described by recall: the number of relevant studies identified divided by the total number of relevant studies)—before having seen all the documents in the corpus. The proportion of documents not yet seen by the human when they reach the given recall threshold is referred to as the work saved. This represents the proportion of documents that they do not have to screen, which they would have had to without machine learning.
Machine learning applications are often evaluated using sets of documents from already completed systematic reviews for which inclusion or exclusion labels already exist. As all human labels are known a priori, it is possible to simulate the screening process, recording when a given recall target has been achieved. In live review settings, however, recall remains unknown until all documents have been screened. In order for work to really be saved, reviewers have to stop screening while uncertain about recall. This is particularly problematic in systematic reviews because low recall increases the risk of bias [9]. The lack of appropriate stopping criteria has therefore been identified as a research gap [10, 11], although some approaches have been suggested. These have most commonly fallen into the following categories:
Sampling criteria: Reviewers estimate the number of relevant documents by taking a random sample at the start of the process. They stop when this number, or a given proportion of it, has been reached [12].
Heuristics: Reviewers stop when a given number of irrelevant articles are seen in a row [6, 7].
Pragmatic criteria: Reviewers stop when they run out of time [3].
Novel automatic stopping criteria: Recent papers have proposed more complicated novel systems for automatically deciding when to stop screening [13–15].
We review the first three classes of these methods in the following section and discuss their theoretical limitations. They are then tested on several previous systematic review datasets. We demonstrate theoretically and with our experimental results that these three classes of methods cannot deliver consistent levels of work savings or recall—particularly across different domains, or datasets with different properties [2]. We also discuss the limitations of novel automatic stopping criteria, which have all demonstrated promising results, but do not achieve a given level of recall in a reliable or reportable way. Without the reliable or reportable achievement of a desired level of recall, deployment of AL systems in live reviews remains challenging.
This study proposes a system for estimating the recall based on random sampling of remaining documents. We use a simple statistical method to iteratively test a null hypothesis that the recall achieved is less than a given target recall. If the hypothesis can be rejected, we conclude that the recall target has been achieved with a given confidence level and screening can be stopped. This allows AL users to predefine a target in terms of uncertainty and recall, so that they can make transparent, easily communicable statements like "We reject the null hypothesis that we achieve a recall of less than 95% with a significance level of 5%".
In the remainder of the paper, we first discuss in detail the shortcomings of existing stopping criteria. Then, we introduce our new criteria based on a hypergeometric test. We evaluate our stopping criteria and compare their performance with heuristic- and sampling-based criteria on real-world systematic review datasets on which AL systems have previously been tested [13, 16–18].
Methods review
We start by explaining the sampling- and heuristic-based stopping criteria and discussing their methodological limitations.
Sampling-based stopping criteria
The stopping criterion suggested by Shemilt et al. [12] involves establishing the Baseline Inclusion Rate (BIR), by taking a random sample at the beginning of screening. The BIR is used to estimate the number of relevant documents in the whole dataset. Reviewers continue to screen until this number, or a proportion of it corresponding to the desired level of recall, is reached.
However, the estimation of the BIR fails to correctly take into account sampling uncertainty (see Footnote 1). This uncertainty is crucial, as errors can have severe consequences. Let us assume that users will stop screening when they have identified 95% of the estimated number of relevant documents. If the estimated number of relevant documents is more than the true number of relevant documents divided by 0.95, then the users will never see 95% of the estimated number. This means that they will keep screening until they have seen all documents, and no work savings will be achieved. Conversely, if the number of relevant documents is underestimated by even a single unit, then the recall achieved will be lower than the target.
The number of relevant documents drawn without replacement from a finite sample of documents follows the hypergeometric distribution. Figure 1a shows the distribution of the predicted number of documents after drawing 1000 documents from a total of 20,000 documents, where 500 documents (2.5%) are relevant. The left shaded portion of the graph shows all the cases where the recall will be less than 95%. This occurs 48% of the time. The right shaded portion of the graph shows the cases where the number of relevant documents is overestimated so much that no work savings could be made to achieve a target recall of 95%. This occurs 29% of the time. In only 23% of cases can work savings be achieved while still achieving a recall of at least 95%.
Distribution of under- or overestimation errors using the BIR sampling method in a dataset of 20,000 documents of which 500 are relevant. a The probability distribution of the estimated number of relevant documents after a sample of 1000 documents. b The probability of each type of error according to the sample size
Figure 1b shows the probability distribution of these errors according to the sample size. Even with very large samples, both types of error remain frequent. This shows how baseline estimation inevitably offers poor reliability, either in terms of recall or in work saved.
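These probabilities follow directly from the hypergeometric distribution. The minimal sketch below, assuming numpy and scipy are available, tabulates them for the example values used above; the precise percentages in Fig. 1 also depend on how the remaining screening is simulated, so the numbers here are indicative only.

```python
import numpy as np
from scipy.stats import hypergeom

N_total, K_true, n_sample = 20_000, 500, 1_000     # example values used in the text
target = 0.95

k = np.arange(n_sample + 1)                        # relevant documents found in the sample
pmf = hypergeom(N_total, K_true, n_sample).pmf(k)
K_est = k * N_total / n_sample                     # baseline-inclusion-rate estimate

p_under = pmf[K_est < K_true].sum()                # estimate too low: recall target at risk
p_fatal_over = pmf[K_est > K_true / target].sum()  # estimate so high that no work can be saved
print(f"P(underestimate) = {p_under:.2f}, P(fatal overestimate) = {p_fatal_over:.2f}")
```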
Heuristic stopping criteria
Some studies give the example of heuristic stopping criteria based on drawing a given number of irrelevant articles in a row [6, 7]. We take this as a proxy for estimating that the proportion of unseen documents that are relevant is low, as the probability of observing 0 relevant documents in a given sample (analogous to a set of consecutive irrelevant results) is a decreasing function of the number of relevant documents in the population. We find this a promising intuition, but argue that (1) it ignores uncertainty, as discussed in relation to the previous method; (2) it lacks a formal description that would help to find a suitable threshold for the criterion; and (3) it misunderstands the significance of a low proportion of relevant documents in estimating the recall.
Figure 2 illustrates this third point. We show two scenarios with identical low proportions of relevant documents observed in the unseen documents. In the top part of the figure, machine learning (ML) has performed well, and 74% of the screened documents were relevant. In the bottom part of the figure, ML has performed less well, and only 26% of the screened documents were relevant. In both cases, only 2% of unseen documents are relevant, but 2% of a larger number means more relevant documents are missed. Recall is not simply a function of the proportion of unseen documents that are relevant, but also of the number of unseen documents. This also means that where ML has performed well (as in the top figure), a low proportion of relevant documents in those not yet checked is indicative of lower recall than where ML has performed less well. Likewise, where the proportion of relevant documents in the whole corpus is low, a similarly low proportion of relevant documents is likely to be observed, even when true recall is low. This shows us that even a perfect estimator of the proportion of unseen documents that are relevant is insufficient on its own to provide sufficient information about when to stop screening. To estimate recall reliably, it is necessary to take into account the total number of unseen relevant documents (or their proportion times the number of unseen documents).
Similar low proportions of relevant documents in unseen documents with different consequences for recall. The top bar shows a random distribution of relevant documents (green) and irrelevant documents (red) at a given proportion of relevance. The bottom bar shows distributions of relevant and irrelevant documents in hypothetical sets of seen (right) and unseen (left—transparent) documents
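To make the arithmetic concrete, the minimal sketch below uses invented document counts (only the 74%, 26% and 2% relevance rates are taken from Fig. 2) to show that the same proportion of relevant documents among the unseen records can correspond to quite different levels of recall.

```python
def recall(seen_docs, seen_relevance, unseen_docs, unseen_relevance=0.02):
    found = seen_docs * seen_relevance        # relevant documents already screened
    missed = unseen_docs * unseen_relevance   # relevant documents still unseen
    return found / (found + missed)

# Same hypothetical corpus (2000 documents, 220 of them relevant) in both scenarios,
# and screening stops at the point where 2% of the *unseen* documents are relevant.
print(recall(250, 0.74, 1750))   # model ranked well: fewer screened, more unseen -> ~0.84
print(recall(750, 0.26, 1250))   # model ranked poorly: more screened, fewer unseen -> ~0.89
```

With the same corpus and the same 220 relevant documents, stopping at a 2% unseen relevance rate yields lower recall in the scenario where the model ranked well, simply because more documents remain unseen.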
Pragmatic stopping criteria
Wallace et al. [4] develop a "simple, operational stopping criterion": stopping after half the documents have been screened. Although the criterion worked in their experiment, it is unclear how this could be generalised, and its development depended on knowledge of the true relevance values. Jonnalagadda and Petitti [6] note that "the reviewer can elect to end the process of classifying documents at any point, recognizing that stopping before reviewing all documents involves a trade-off of lower recall for reduced workload", although clearly the reviewer lacks information about probable recall.
Novel automatic stopping criteria
Two examples come from the information retrieval literature. Di Nunzio [14] presents a novel automatic stopping criterion based on BM25, although recall reported is "often between 0.92 and 0.94 and consistently over 0.7". Yu and Menzies [13] also present a stopping criterion based on BM25 which allows the user to target a specific level of recall. However, reviewers are not given the opportunity to specify a confidence level, and for two of the four datasets in which they tested their criteria, the median achieved recall at a stopping criteria targeting 95% recall was below 95%. In each case, the reliability of the estimate is dependent on the performance of the model.
Finally, Howard et al. [15] present a method to estimate recall based on the number of irrelevant documents D observed in a list of documents since the δth previous relevant document. They reason that this should follow the negative binomial distribution based on the proportion of remaining relevant documents p, and use this information to estimate \(\hat {p}\), and with this, the total number of relevant articles and the estimated recall.
However, their method does not quantify uncertainty, but can only claim that the method "tends to result in a conservative estimate of recall" (emphasis ours). This is not guaranteed by the criterion itself but rather a finding of the simulation with example datasets. Further, the authors do not give sufficient information to reproduce their results, providing neither code (they describe their own proprietary software) nor an equation for \(\hat {p}\). Additionally, the criterion requires a tuning parameter δ, which users may have insufficient information to set optimally. Lastly, because screening is a form of sampling without replacement, the negative hypergeometric distribution should be preferred to the negative binomial, even though the latter can be a good approximation for cases with large numbers of documents.
These last examples are promising developments, but they all fail to take into account the needs of live systematic reviews, where the reliability of and ease of communication about recall are paramount, and the results must be independent of model performance. In the following, we explain our own method, which provides clearly communicable estimates of recall, and which manages uncertainty in a way robust to model performance.
A statistical stopping criterion for active learning
In our screening setup, we start off with Ntot documents that are potentially relevant. ρtot of these documents are actually relevant, but we do not know this value a priori. As we screen relevant documents, we include them, so ρseen represents the number of relevant documents screened, and recall τ is given by:
$$ \tau = \frac{\rho_{seen}}{\rho_{tot}} $$
We set a target recall τtar and a confidence level α. We want to keep screening until τ≥τtar, and devise a hypothesis test to estimate whether this is the case with a given level of confidence (Fig. 3). We do this based on interrupting the active-learning process and drawing a random sample from the remaining unseen documents. We first describe this test, before showing how a variation on the test can be used to decide when to begin drawing a random sample.
A workflow for active learning in screening with a statistical stopping criterion
At the start of the sample, NAL is the number of documents seen during the active learning process, and N is the number of documents remaining, so that:
$$ N = N_{tot} - N_{AL} $$
We refer to the number of relevant documents seen during active learning as ρAL, and the number of remaining relevant documents as K. We do not know the value of K but know that it is given by the total number of relevant documents minus the number of relevant documents seen during active learning.
$$ K = \rho_{tot} - \rho_{AL} $$
We now take random draws from the remaining N documents, and denote the number of documents drawn with n and the number of relevant documents drawn with k. The number of relevant documents seen is updated by adding the number of relevant documents seen since sampling began to the number of relevant documents seen during active learning.
$$ \rho_{seen} = \rho_{AL} + k $$
We proceed to form a null hypothesis that the true value of recall is less than our target recall:
$$ H_{0} : \tau < \tau_{tar} $$
Accordingly, the alternative hypothesis is that recall is equal to or greater than our target:
$$ H_{1} : \tau \geq \tau_{tar} $$
Because we are sampling without replacement, we can use the hypergeometric distribution to find out the probability of observing k relevant documents in a sample of n documents from a population of N documents of which K are relevant. We know that k is distributed hypergeometrically:
$$ k \sim \text{Hypergeometric}(N, K, n) $$
We introduce a hypothetical value for K, which we call Ktar. This represents the minimum number of relevant documents remaining at the start of sampling compatible with our null hypothesis that recall is below our target.
$$ K_{tar} = \lfloor \frac{\rho_{seen}}{\tau_{tar}}-\rho_{AL}+1 \rfloor $$
This equation is derived by combining Eqs. 1 and 4. Because K can only take integer values, Ktar is the smallest integer that satisfies the inequality in Eq. 5. With Ktar, we can reformulate our null hypothesis: the true number of relevant documents remaining at the start of sampling is greater than or equal to our hypothetical value.
$$ H_{0} : K \geq K_{tar} $$
We test this by calculating the probability of observing k or fewer relevant documents from the hypergeometric distribution given by Ktar, using the cumulative probability mass function.
$$ p = P(X \leq k), \text{where}\ X \sim \text{Hypergeometric}(N,K_{tar},n) $$
Because the cumulative probability mass function P(X≤k) is decreasing with increasing K, this gives the maximum probability of observing k for all values of K compatible with our null hypothesis. Similar arguments have been made to derive confidence intervals for estimating the parameter K in the hypergeometric distribution function [19, 20], and the derivation of an equivalent criterion could use the upper limit of such a confidence interval of an estimated K from the observation of k.
We can reject our null hypothesis and stop screening if the maximum probability of obtaining our observed results given our null hypothesis p is below 1−α (see Footnote 2). To further investigate the accuracy of the test, we perform an experiment drawing 1 million random samples in 6 scenarios with different characteristics. We vary the value of ρAL to simulate starting random sampling with different levels of recall achieved.
Figure 4 shows that in each case, as long as recall is lower than the target recall when sampling begins, the percentage of trials in which the criterion is triggered too early is within two tenths of a percentage point of 5%, and the 5th percentile of achieved recall values is within two tenths of a percentage point of the target recall of 95%.
The distribution of achieved recall values given our random sampling stopping criterion for 6 scenarios with different recall values at the start of sampling
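A minimal sketch of this test, assuming scipy is available (variable names follow the notation above; this is illustrative only, and the vectorised implementation used for the experiments is in the accompanying repository):

```python
import math
from scipy.stats import hypergeom

def p_value(N_tot, N_AL, rho_AL, n, k, recall_target):
    """Maximum probability, under H0 (recall < recall_target), of observing k or fewer
    relevant documents in a random sample of n of the documents left after active learning."""
    N = N_tot - N_AL                                            # Eq. 2
    rho_seen = rho_AL + k                                       # Eq. 4
    K_tar = math.floor(rho_seen / recall_target - rho_AL + 1)   # Eq. 8
    if K_tar <= 0:
        return 0.0            # H0 is impossible: the recall target is already guaranteed
    if K_tar > N:
        return 1.0            # H0 cannot be rejected from this sample
    return hypergeom.cdf(k, N, K_tar, n)                        # Eq. 10

# Screening stops once p_value(...) < 1 - alpha, e.g. below 0.05 for 95% confidence.
```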
Ranked quasi-sampling
We now proceed to describe a special case of the method described above which we (1) use as a heuristic in order to decide when to begin random sampling and (2) test as an independent stopping criterion. The method works by treating batches of previously screened documents as if they were random samples.
We calculate p as above for subsets of the already screened documents. Concretely, we use subsets of documents Ai consisting of the documents screened after the first i, \(A_{i} = \{d_{N_{seen} - 1},..., d_{i}\}\), where the documents d are indexed in the order in which they have been screened. For a specific i, this corresponds to random sampling beginning after seeing i documents in the section above. Thus, we set NAL to i, n to Nseen−i, ρAL to the number of relevant documents seen when i documents had been seen, and k to the number of relevant documents seen since i documents had been seen, and calculate p according to Eq. 10. We compute p for all sets Ai with \(i \in \{N_{seen}-1, \dots, 1\}\). This gives us a vector p, representing the values of p which would have been estimated at each point at which we could have stopped active learning and begun random sampling. The point at which the p value for our null hypothesis is lowest is given by pmin. With the vectorised implementation included in our accompanying code, these calculations are completed in less than the time it would take a human to code the next document.
First, we use this method as a useful heuristic for deciding when to stop active learning, and switch to random sampling. For this, we choose a higher threshold for the likelihood, \(p_{min} < 1-\frac {\alpha }{2}\). Second, we use the same ranked quasi-sampling as an independent stopping criterion, by continuing screening with active learning until pmin<1−α. We present the results of this second procedure separately below.
Given that the documents seen during active learning are ranked according to predicted relevance, they do not in fact represent a random sample. This means that the test is unlikely to be accurate. It would be reasonable to assume that the proportion of relevant documents in each ranked quasi-sample is as high if not higher than the proportion of relevant documents in the unseen documents. This assumption would make this estimator conservative. As such, it works in a similar way to the criterion proposed by Howard et al. [15], although it makes use of more information and provides hypothesis testing rather than just a point estimate of recall.
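A direct, non-vectorised sketch of this calculation, reusing the p_value function sketched above (again illustrative rather than the exact accompanying code):

```python
import numpy as np

def p_min_ranked(labels, N_tot, recall_target):
    """Smallest p value of H0 over all points i at which quasi-sampling could have begun;
    `labels` is the 0/1 relevance sequence of the screened documents in screening order."""
    labels = np.asarray(labels)
    N_seen = len(labels)
    cum_rel = np.cumsum(labels)              # relevant documents among the first 1..N_seen
    p_best = 1.0
    for i in range(1, N_seen):               # candidate start points of the quasi-sample
        rho_AL = int(cum_rel[i - 1])         # relevant documents among the first i documents
        k = int(cum_rel[-1]) - rho_AL        # relevant documents seen since then
        n = N_seen - i                       # size of the quasi-sample
        p_best = min(p_best, p_value(N_tot, i, rho_AL, n, k, recall_target))
    return p_best

# Active learning continues until p_min_ranked(labels, N_tot, 0.95) < 1 - alpha.
```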
We evaluate each of the criteria discussed on real-world test data, operationalising the heuristic stopping criteria with 50, 100, and 200 consecutive irrelevant records. We run 100 iterations on each dataset and record the following measures:
Actual Recall: The recall when the stopping criteria were met.
WS-SC: Work saved when the stopping criteria were met.
Additional Burden: The work saved when the criterion was triggered subtracted from the work saved when the recall target was actually achieved.
For simplicity, we use a basic SVM model [21, 22], with 1–2 word n-grams taken from the document abstracts as input features. We start with random samples of 200 documents (we do not employ Shemilt et al.'s methods for identifying the "optimal" sample size, as we showed these in the "Methods review" section to be unhelpful). Subsequently, we "screen" (that is, we reveal the labels of) batches of the 20 documents with the highest predicted relevance scores, retraining the model after each batch. Theoretically, using smaller batch sizes could mean that the recall target is achieved more quickly, but this is a trade-off between computational time spent training and the speed at which the algorithm can "learn". However, this is a modelling choice which may affect work saved, but not recall. Each criterion is evaluated after each document is "screened". For our criteria, we set the target recall value to 95% and the confidence level to 95%.
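A minimal sketch of this simulation loop, assuming scikit-learn and numpy (illustrative only: it presumes the initial sample contains both relevant and irrelevant documents, omits the per-document evaluation of the stopping criteria, and the particular SVM variant is not essential; the full code is in the accompanying repository):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

def simulate_screening(abstracts, labels, n_init=200, batch=20, seed=0):
    rng = np.random.default_rng(seed)
    X = TfidfVectorizer(ngram_range=(1, 2)).fit_transform(abstracts)   # 1-2 word n-grams
    y = np.asarray(labels)
    seen = [int(i) for i in rng.choice(len(y), size=n_init, replace=False)]
    while len(seen) < len(y):
        clf = LinearSVC().fit(X[seen], y[seen])                 # retrain after each batch
        unseen = np.setdiff1d(np.arange(len(y)), seen)
        scores = clf.decision_function(X[unseen])
        ranked = unseen[np.argsort(-scores)]                    # highest predicted relevance first
        seen.extend(int(i) for i in ranked[:batch])             # "screen" the next batch
        # ...evaluate each stopping criterion on y[seen] here...
    return seen
```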
The systematic review datasets used for testing are described in Table 1. We use the seminal collection of systematic reviews used to develop machine learning applications for document screening by Cohen and co-authors in 2006 [16], along with the widely used Proton Beam [17] and COPD [18] datasets, and computer science datasets used to test FASTREAD [13]. Testing on datasets with different properties and from different domains is key to establishing criteria appropriate for general use. Choosing as broad as possible data also prevents us from being able to "tune" our machine learning approach in ways that may work well for specific datasets but not generalise well. Work savings, even maximum work savings, are therefore below the state of the art recorded for each of these datasets. In this way, we can show how well the criteria perform even when the model performs badly.
Table 1 Dataset properties
All computational steps required to reproduce this analysis are documented online at https://github.com/mcallaghan/rapid-screening.
Figure 5 shows the actual recall and work savings achieved when each stopping criterion has been satisfied. For comparison, we also include the results that would have been achieved with a priori knowledge of the data, that is, the work saved when the 95% recall target was actually reached. In a live systematic review, reviewers would never know when this had been reached, but these are the work savings most often reported in machine learning for systematic review screening studies.
Distribution of recall and work saved after each stopping criteria. Green dots show results for datasets with less than 1000 documents, orange dots show datasets with 1000–2000 documents, and blue dots show datasets with more than 2000 documents
Both the random sampling and the ranked sampling criteria achieve the target threshold of 95% in more than 95% of cases. That this is greater than 95% is accounted for by the fact that random sampling sometimes begins after the target recall has been achieved, in which case the null hypothesis would be a priori impossible. The ranked quasi-sampling criterion outperforms the random sampling criterion with respect to both recall and work savings, saving a mean of 17% of the work compared to 15%, and missing the target in only 0.95% compared to 3.29% of cases. In theory, the ranked sampling criterion is conservative if the assumption holds that documents chosen by machine learning are not less likely to be relevant than those chosen at random. Based on our experiments, this assumption seems reasonable and accounts for the higher recall. Because the ranked quasi-sampling criterion can flexibly choose its sample, whereas the random criterion has to wait for a random sample to be triggered, the criterion is also triggered earlier, as it can make use of more data. This accounts for the higher work savings.
The baseline sampling criterion (Fig. 5c) misses the 95% recall target in 39.67% of cases, while the most common work saving is 0%. This is in line with our expectations that, due to random sampling error, the expected number of documents will often be overestimated or underestimated, resulting in zero work savings or poor recall.
The heuristic stopping criteria, both for 50 consecutive irrelevant results (Fig. 5d—IH50) and for 200 irrelevant results (Fig. 5e), also perform unreliably. Although the mean work saved for IH50 is 41%, the target is missed in 39% of cases. The cases below the horizontal grey line indicate instances where work has been saved at the expense of achieving the recall target.
In Fig. 6, we rescale the x axis, calling it additional burden, which is simply the work saved when the recall target was actually achieved minus the work saved when the criterion is triggered. This measure indicates whether the stopping criterion was triggered too early (negative values) or too late (positive values). The figure directly highlights the trade-offs involved in deciding when to stop screening: For our criteria, there is mostly a small additional burden which comes with the necessity to make sure the desired recall target has been reached and reject the null hypothesis that this has not been the case. For the other criteria, there are many cases in which additional burden is negative, i.e. the criterion has been triggered too early. In these cases, however, the desired recall is missed.
Distribution of recall and additional burden after each stopping criterion. Additional burden is the work saved when the target was reached minus the work saved when the criterion was triggered. Colouring of data points as in Fig. 5
To help explain the different work savings that were observed in our experiments, we show the distribution of work savings from our ranked quasi-sampling criterion for each dataset in Fig. 7. In general, higher work savings are possible when the total number of documents is larger. However, in datasets with a low proportion of relevant documents, many documents need to be screened to achieve a high confidence that there are only few relevant documents remaining in the unseen ones. Therefore, smaller work savings are possible.
Work saved for the ranked quasi-sampling method in each dataset. Labels show the number of relevant documents and the total number of documents. The datasets are presented in order of the number of documents. The whiskers represent the 5th and 95th percentiles. The grey line shows work savings of 5%
Figure 8 shows the recall and the p value for the null hypothesis for the iteration where the recall target is reached first for four datasets. Although the 95% recall target is achieved very quickly in the Radjenovic dataset, the null hypothesis cannot be excluded until much later. This is because the dataset has only 47 relevant documents out of a population of 5999. After the 95% recall target was achieved, 45 out of 47 relevant documents had been seen and 5029 documents remained. The null hypothesis was therefore that 3 or more of these 5029 documents were relevant, which requires a lot of evidence to disprove. The burden of proof was smaller in the case of the Proton Beam dataset: at the point that the 95% recall threshold was reached, the null hypothesis to disprove was that a minimum of 13 out of 3369 remaining documents were relevant.
The path of recall (yellow) and the p value of H0 for four different datasets
The Statins and Triptans datasets show how the criterion performs when the machine learning model has performed poorly in predicting relevant results. In each case, 95% recall is achieved with close to 20% of documents remaining. With fewer documents remaining, it takes fewer screening decisions to rule out the possibility that the number of relevant documents left is incompatible with the achievement of the recall target.
Our results show that it is possible to use machine learning to achieve a given level of recall with a given level of confidence. The trade-off for achieving recall reliably is that the work saving achieved is less than the maximum possible work saving. However, for large datasets with a significant proportion of relevant documents, the additional effort required to satisfy the criterion will be small compared to the work saved by using machine learning. This makes the approach well suited to broad topics with lots of literature. In other words, it is precisely where machine learning will be most useful that the additional effort will be small.
Different use cases for machine learning enhanced screening may also carry different requirements for recall, or different tolerances for uncertainty. These can be flexibly accommodated within our stopping criterion. Importantly, the ability to make statements about the authors' confidence in achieving a given recall target makes it possible to clearly communicate the implications of using machine learning enhanced screening to readers and reviewers who are not machine learning specialists. This is extremely important in live systematic reviews.
Our criteria have the further advantage that they are independent of the choice or performance of the machine learning model. If a model performs badly at discerning relevant from irrelevant results, the only consequence will be that the work saved will be low. With other criteria, this may result in poor recall. When using machine learning for screening, poor recall can result in biased results, while low work savings represent no loss to the reviewer as compared to not using machine learning.
One caveat in the derivation of our criteria is that we did not address the problem of multiple testing formally. Such a derivation is mathematically challenging and beyond the scope of this paper. However, the performance of the criteria shows that this is of limited practical concern. Formally describing screening procedures with iterative testing should be a next step towards even more rigorous stopping criteria and should be fully worked out in future research.
So far, systematic review standards have no way of accommodating screening with machine learning. We hope that the reliability and clarity of reporting offered by our stopping criteria make them suitable for incorporation into standards, so that machine learning for systematic review screening can fulfil its promise of reducing workload and making more ambitious reviews tractable.
This paper demonstrates the drawbacks of existing stopping criteria for machine learning approaches to document screening, particularly with regard to reliability. We propose a simple method that delivers reliable recall, independent of machine learning approach or model performance. Our statistical stopping criteria allow users to easily communicate the implications of their use of machine learning, making machine learning enhanced screening ready for live reviews.
Although Shemilt et al. [12] employ a method to choose a sample size based on uncertainty, they fail to acknowledge the potential implications for recall of their choice. Their margin of error of 0.0025 and observed proportion of relevant studies of 0.0005 translate to estimates of 400±451 relevant results. To reduce the margin of error to ±5% of estimated relevant studies, they would have had to screen 638,323 out of 804,919 results. See the notebook https://github.com/mcallaghan/rapid-screening/blob/master/analysis/bir_theory.ipynb that accompanies this paper for a detailed discussion.
The notebook, https://github.com/mcallaghan/rapid-screening/blob/master/analysis/hyper_criteria_theory.ipynb, in the github repository accompanying this paper contains a step by step explanation of this method with code and examples.
Westgate M, Haddaway N, Cheng S, McIntosh E, Marshall C, Lindenmayer D. Software support for environmental evidence synthesis. Nat Ecol Evol. 2018; 2:588–90. https://doi.org/10.1038/s41559-018-0502-x.
O'Mara-Eves A, Thomas J, McNaught J, Miwa M, Ananiadou S. Using text mining for study identification in systematic reviews: a systematic review of current approaches. Syst Rev. 2015; 4(1):1–22. https://doi.org/10.1186/2046-4053-4-5.
Miwa M, Thomas J, O'Mara-Eves A, Ananiadou S. Reducing systematic review workload through certainty-based screening. J Biomed Inform. 2014; 51:242–53. https://doi.org/10.1016/j.jbi.2014.06.005.
Wallace B, Trikalinos T, Lau J, Brodley C, Schmid C. Semi-automated screening of biomedical citations for systematic reviews. BMC Bioinforma. 2010; 11. https://doi.org/10.1186/1471-2105-11-55.
Wallace BC, Small K, Brodley CE, Trikalinos TA. Active learning for biomedical citation screening. In: Proceedings of the 16th ACM SIGKDD international conference on Knowledge discovery and data mining (KDD '10). New York: Association for Computing Machinery: 2010. p. 173–82. https://doi.org/10.1145/1835804.1835829.
Jonnalagadda S, Petitti D. A new iterative method to reduce workload in systematic review process. Int J Comput Biol Drug Des. 2013; 6((1/2)):5. https://doi.org/10.1504/ijcbdd.2013.052198.
Przybyła P, Brockmeier A, Kontonatsios G, Le Pogam M, McNaught J, von Elm E, Nolan K, Ananiadou S. Prioritising references for systematic reviews with RobotAnalyst: a user study. Res Synth Methods. 2018; 9(3):470–88. https://doi.org/10.1002/jrsm.1311.
Settles B. Active learning literature survey technical report. University of Wisonsin-Madison. 2009. https://doi.org/10.1016/j.matlet.2010.11.072.
Lefebvre C, Glanville J, Briscoe S, Littlewood A, Marshall C, Metzendorf M-I, Noel-Storr A, Rader T, Shokraneh F, Thomas J, Wieland LS. Chapter 4: Searching for and selecting studies In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA, editors. Cochrane Handbook for Systematic Reviews of Interventions version 6.1 (updated September 2020). Cochrane: 2020. www.training.cochrane.org/handbook.
Bannach-Brown A, Przybyła P, Thomas J, Rice A, Ananiadou S, Liao J, Macleod M. Machine learning algorithms for systematic review: reducing workload in a preclinical review of animal studies and reducing human screening error. Syst Rev. 2019; 8(1):1–12. https://doi.org/10.1186/s13643-019-0942-7.
Marshall I, Wallace B. Toward systematic review automation: a practical guide to using machine learning tools in research synthesis. Syst Rev. 2019; 8(1):1–10. https://doi.org/10.1186/s13643-019-1074-9.
Shemilt I, Simon A, Hollands G, Marteau T, Ogilvie D, O'Mara-Eves A, Kelly M, Thomas J. Pinpointing needles in giant haystacks: use of text mining to reduce impractical screening workload in extremely large scoping reviews. Res Synt Methods. 2014; 5(1):31–49. https://doi.org/10.1002/jrsm.1093.
Yu Z, Menzies T. FAST 2 : an intelligent assistant for finding relevant papers. Expert Syst Appl. 2019; 120:57–71. https://doi.org/10.1016/j.eswa.2018.11.021.
Di Nunzio GM. A Study of an Automatic Stopping Strategy for Technologically Assisted Medical Reviews In: Pasi G, Piwowarski B, Azzopardi L, Hanbury A, editors. Advances in Information Retrieval. ECIR 2018. Lecture Notes in Computer Science, vol 10772. Cham: Springer: 2018. https://doi.org/10.1007/978-3-319-76941-7_61.
Howard BE, Phillips J, Tandon A, Maharana A, Elmore R, Mav D, Sedykh A, Thayer K, Merrick BA, Walker V, Rooney A, Shah RR. SWIFT-Active Screener: Accelerated document screening through active learning and integrated recall estimation. Environ Int. 2020; 138:105623. https://doi.org/10.1016/j.envint.2020.105623.
Cohen AM, Hersh WR, Peterson K, Yen P-Y. Reducing workload in systematic review preparation using automated citation classification. J Am Med Inform Assoc. 2006; 13(2):206–19. https://doi.org/10.1197/jamia.M1929.
Terasawa T, Dvorak T, Ip S, Raman G, Lau J, Trikalinos TA. Systematic review: charged-particle radiation therapy for cancer. Ann Intern Med. 2009; 151(8):556–65. https://doi.org/10.7326/0003-4819-151-8-200910200-00145.
Castaldi P, Cho M, Cohn M, Langerman F, Moran S, Tarragona N, Moukhachen H, Venugopal R, Hasimja D, Kao E, Wallace B, Hersh C, Bagade S, Bertram L, Silverman E, Trikalinos T. The COPD genetic association compendium: a comprehensive online database of COPD genetic associations. Hum Mol Genet. 2009; 19(3):526–34. doi:10.1093/hmg/ddp519.
Buonaccorsi J. A note on confidence intervals for proportions in finite populations. Am Stat. 1987; 41(3):215–8. doi:10.2307/2685108.
Sahai H, Khurshid A. A note on confidence intervals for the hypergeometric parameter in analyzing biomedical data. Comput Biol Med. 1995; 25(1):35–8. doi:10.1016/0010-4825(95)98883-f.
Cortes C, Vapnik V. Support-vector networks. Mach Learn. 1995; 20:273–97. https://doi.org/10.1007/BF00994018.
Pedregosa F, Varoquaux G, Gramfort A, Michel V, Thirion B, Grisel O, Blondel M, Prettenhofer P, Weiss R, Dubourg V, Vanderplas J, Passos A, Cournapeau D, Brucher M, Perrot M, Duchesnay É. Scikit-learn: Machine Learning in Python. J Mach Learn Res. 2011; 12:2825–30.
Max Callaghan is supported by a PhD scholarship from the Heinrich Böll Foundation. Finn Müller-Hansen acknowledges funding from the German Federal Ministry of Research and Education within the Strategic Scenario Analysis (START) project (grant reference: 03EK3046B).
Mercator Research Institute on Global Commons and Climate Change, EUREF Campus 19, Torgauer Straße 12-15, Berlin, 10829, Germany
Max W Callaghan & Finn Müller-Hansen
Priestley International Centre for Climate, University of Leeds, Leeds, LS2 9JT, UK
Max W Callaghan
Potsdam Institute for Climate Impact Research (PIK), Member of the Leibniz Association, P.O. Box 60 12 03, Potsdam, 14412, Germany
Finn Müller-Hansen
MC designed the research and conducted the experiments. FMH contributed to the development of the statistical basis for the stopping criterion. Both authors wrote and edited the manuscript. The authors read and approved the final manuscript.
Correspondence to Max W Callaghan.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
Callaghan, M.W., Müller-Hansen, F. Statistical stopping criteria for automated screening in systematic reviews. Syst Rev 9, 273 (2020). https://doi.org/10.1186/s13643-020-01521-4
Stopping criteria | CommonCrawl |
Hw4 Problem 8
This exercise shows that there are two nonisomorphic group structures on a set of 4 elements.
Let the set be $\{e,a,b,c\}$, with $e$ the identity element for the group operation. A group table would then have to start in the manner shown in Table 4.22 of the book. The square indicated by the question mark cannot be filled in with $a$. It must be filled in either with the identity element $e$ or with an element different from both $e$ and $a$. In this latter case, it is no loss of generality to assume that this element is $b$. If this square is filled in with $e$, the table can then be completed in two ways to give a group. Find these two tables. If this square is filled in with $b$, then the table can only be completed in one way to give a group. Find this table. Of the three tables you now have, two give isomorphic groups. Determine which two tables these are, and give the one-to-one onto renaming function which is an isomorphism.
(a) Are all groups of $4$ elements commutative?
(b) Which table gives a group isomorphic to the group $U_{4}$, so that we know the binary operation defined by the table is associative?
(c) Show that the group given by one of the other tables is structurally the same as the group in Exercise 14 of the book for one particular value of $n$, so that we know that the operation defined by that table is associative also.
\begin{array} {c||c|c|c|c} * & e & a & b & c \\ \hline e & e & a & b & c \\ \hline a & a & e & c & b \\ \hline b & b & c & e & a \\ \hline c & c & b & a & e \\ \end{array}
\begin{array} {c||c|c|c|c} * & e & a & b & c \\ \hline e & e & a & b & c \\ \hline a & a & e & c & b \\ \hline b & b & c & a & e \\ \hline c & c & b & e & a \\ \end{array}
\begin{array} {c||c|c|c|c} * & e & a & b & c \\ \hline e & e & a & b & c \\ \hline a & a & b & c & e \\ \hline b & b & c & e & a \\ \hline c & c & e & a & b \\ \end{array}
Table 1 is different from 2 and 3 in that each element is its own inverse.
The symmetry of each table across the main diagonal shows that each of these groups is commutative; since every group of four elements must have one of these three tables (up to relabeling), all groups of order $4$ are commutative.
Table 3 gives $U_{4}$ if $e = 1$, $a = i$, $b = -1$, and $c = -i$.
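As a quick sanity check, the three tables can be verified by brute force. The short Python sketch below (encoding $e,a,b,c$ as $0,1,2,3$; the table names and helper functions are ours) confirms that each table is associative and searches for a relabeling between tables.

from itertools import permutations

# Cayley tables for the three completions, with e,a,b,c encoded as 0,1,2,3.
T1 = [[0,1,2,3],[1,0,3,2],[2,3,0,1],[3,2,1,0]]   # Table 1
T2 = [[0,1,2,3],[1,0,3,2],[2,3,1,0],[3,2,0,1]]   # Table 2
T3 = [[0,1,2,3],[1,2,3,0],[2,3,0,1],[3,0,1,2]]   # Table 3

def is_associative(t):
    n = len(t)
    return all(t[t[x][y]][z] == t[x][t[y][z]]
               for x in range(n) for y in range(n) for z in range(n))

def isomorphism(s, t):
    """Return a relabeling f (as a tuple) with f(x*y) = f(x)*f(y), or None."""
    n = len(s)
    for f in permutations(range(n)):
        if f[0] != 0:   # an isomorphism must send the identity to the identity
            continue
        if all(f[s[x][y]] == t[f[x]][f[y]] for x in range(n) for y in range(n)):
            return f
    return None

for name, t in [("Table 1", T1), ("Table 2", T2), ("Table 3", T3)]:
    print(name, "associative:", is_associative(t))
print("Table 2 ~ Table 3 via", isomorphism(T2, T3))
print("Table 1 ~ Table 3:", isomorphism(T1, T3))

Running it confirms associativity of all three tables, reports a relabeling between Tables 2 and 3, and finds none between Tables 1 and 3, in line with the discussion above.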
Let $n = 2$. Now, there are four $2 \times 2$ diagonal matrices with entries $\pm1$, which are
\begin{align} E = \begin{bmatrix} 1 & 0 \\[0.3em] 0 & 1 \\[0.3em] \end{bmatrix}, A = \begin{bmatrix} -1 & 0 \\[0.3em] 0 & 1 \\[0.3em] \end{bmatrix}, B = \begin{bmatrix} 1 & 0 \\[0.3em] 0 & -1 \\[0.3em] \end{bmatrix}, C = \begin{bmatrix} -1 & 0 \\[0.3em] 0 & -1 \\[0.3em] \end{bmatrix} \end{align}
These matrices, under matrix multiplication, can be used to make up Table 1.
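One can check this multiplication table directly; a tiny Python sketch (names ours), assuming ordinary matrix multiplication:

import numpy as np
from itertools import product

# The four diagonal sign matrices E, A, B, C from above.
mats = {"e": np.diag([1, 1]), "a": np.diag([-1, 1]),
        "b": np.diag([1, -1]), "c": np.diag([-1, -1])}

# Multiply every pair and report which named matrix the product equals.
for x, y in product("eabc", repeat=2):
    prod = mats[x] @ mats[y]
    name = next(k for k, v in mats.items() if np.array_equal(v, prod))
    print(f"{x}*{y} = {name}")

The printed products reproduce Table 1: every matrix squares to the identity, so each element is its own inverse.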
Expectation value of $1/x$
Given a random variable $x$ that is assumed to follow a Gaussian distribution $x \sim N( \mu, \sigma^2 )$ and is further known to be positive, I am interested in the following expectation value: $E\left[ \frac{1}{x} \right]$.
In my case $\mu \gg 0$, which might make it possible to ignore the fact that $x$ is always positive.
Does anyone know of a collection of known expectation values under the normal distribution?
probability probability-theory probability-distributions normal-distribution
Matthias
$\begingroup$ What do you mean by $x$ is further known to be positive? Is this a truncated normal distribution? $\endgroup$ – Learner Mar 12 '13 at 10:40
$\begingroup$ In any case $E[1/x]=\infty$ . $\endgroup$ – Learner Mar 12 '13 at 10:42
$\begingroup$ Yes, it does. I chose $\mu = 10$ and $\sigma = 1$. $E\left[ \frac{1}{x} \right]$ then converges to 0.1010 $\endgroup$ – Matthias Mar 12 '13 at 13:30
$\begingroup$ @Did: I was replying to "I doubt that", which was in response to Matthias saying to joriki that the mean converged. The mean of $\frac1x$ is going to be taken over all samples where $\frac1x$ does not overflow. This leaves out a small, symmetric interval around $x=0$. This corresponds to what happens when taking the principal value integral. The smaller we take the symmetric interval, the closer the expected value comes to $0.10103161564918598872=\frac{\sqrt{2\pi}}{2}e^{-50}\,\mathrm{erfi}(5\sqrt2)$. $\endgroup$ – robjohn♦ Mar 13 '13 at 0:35
$\begingroup$ @Did: Matthias does not claim that $\mathrm{E}\left(\frac1{|x|}\right)$ converges, he says that his samples (I assume via simulation) indicate that $\mathrm{E}\left(\frac1{x}\right)$ converges (no absolute values). Now, it did bother me that he said he rejected $x<0$. However, I then considered that $\int_\epsilon^1\frac{\mathrm{d}x}{x}\lt11356$ where $\epsilon=2^{-16382}$, which is the smallest positive extended precision number, and $e^{-50}\lt2\times10^{-22}$. Thus, the extended precision contribution of the singularity would be less than $1\times10^{-18}$. So much for simulations. $\endgroup$ – robjohn♦ Mar 13 '13 at 15:35
We will use $$ \int_{-\infty}^\infty x^{2k}e^{-\frac{x^2}{2}}\,\mathrm{d}x=(2k-1)!!\sqrt\pi\tag{1} $$ If we take the principal value, we get the convergent series $$ \begin{align} &\mathrm{PV}\frac1{\sqrt{2\pi}\sigma}\int_{-\infty}^\infty\frac1xe^{\large-\frac{(x-\mu)^2}{2\sigma^2}}\,\mathrm{d}x\\ &=\frac1{\sqrt{2\pi}\sigma}\int_0^\infty\frac1x\left(e^{\large-\frac{(\mu-x)^2}{2\sigma^2}}-e^{\large-\frac{(\mu+x)^2}{2\sigma^2}}\right)\,\mathrm{d}x\\ &=\frac1{\sqrt{2\pi}\sigma}\int_{-\infty}^\infty\frac1x\sinh\left(\frac{\mu x}{\sigma^2}\right)e^{\large-\frac{\mu^2+x^2}{2\sigma^2}}\,\mathrm{d}x\\ &=\frac1{\sqrt{2\pi}\sigma}e^{\large-\frac{\mu^2}{2\sigma^2}}\int_{-\infty}^\infty\frac1x\sinh\left(\frac\mu\sigma x\right)e^{\large-\frac{x^2}{2}}\,\mathrm{d}x\tag{2}\\ &=e^{\large-\frac{\mu^2}{2\sigma^2}}\sum_{k=0}^\infty\frac{(2k-1)!!}{(2k+1)!}\frac{\mu^{2k+1}}{\sigma^{2k+2}}\tag{3} \end{align} $$ We can also get an asymptotic expansion from $(2)$ using stationary phase: $$ \begin{align} &\frac1{\sqrt{2\pi}\sigma}e^{\large-\frac{\mu^2}{2\sigma^2}}\int_{-\infty}^\infty\frac1x\sinh\left(\frac\mu\sigma x\right)e^{\large-\frac{x^2}{2}}\,\mathrm{d}x\\ &=\mathrm{PV}\frac1{\sqrt{2\pi}\sigma}\int_{-\infty}^\infty\frac12\left(\frac1{\frac\mu\sigma+x}+\frac1{\frac\mu\sigma-x}\right)e^{\large-\frac{x^2}{2}}\,\mathrm{d}x\\ &=\mathrm{PV}\frac1{\sqrt{2\pi}\mu}\int_{-\infty}^\infty\frac12\left(\frac1{1+\frac\sigma\mu x}+\frac1{1-\frac\sigma\mu x}\right)e^{\large-\frac{x^2}{2}}\,\mathrm{d}x\\ &\sim\frac1\mu\sum_{k=0}^\infty(2k-1)!!\frac{\sigma^{2k}}{\mu^{2k}}\tag{4} \end{align} $$ We can get a "closed form" in terms of $\mathrm{erfi}$ from $(2)$ $$ \begin{align} &\frac{\mathrm{d}}{\mathrm{d}\alpha}\frac1{\sqrt{2\pi}}\int_{-\infty}^\infty\frac1x\sinh\left(\alpha x\right)\,e^{\large-\frac{x^2}{2}}\,\mathrm{d}x\\ &=\frac1{\sqrt{2\pi}}\int_{-\infty}^\infty\cosh\left(\alpha x\right)\,e^{\large-\frac{x^2}{2}}\,\mathrm{d}x\\ &=e^{\large\frac{\alpha^2}{2}}\tag{5} \end{align} $$ Therefore, $$ \begin{align} \frac1{\sqrt{2\pi}}\int_{-\infty}^\infty\frac1x\sinh\left(\alpha x\right)\,e^{\large-\frac{x^2}{2}}\,\mathrm{d}x &=\int_0^\alpha e^{\large\frac{t^2}{2}}\,\mathrm{d}t\\ &=\frac{\sqrt2}i\int_0^{i\alpha/\sqrt2} e^{\large-t^2}\,\mathrm{d}t\\ &=\frac{\sqrt{2\pi}}{2}\,\mathrm{erfi}(\alpha/\sqrt2)\tag{6} \end{align} $$ and plugging $(6)$ into $(2)$ yields $$ \mathrm{PV}\frac1{\sqrt{2\pi}\sigma}\int_{-\infty}^\infty\frac1xe^{\large-\frac{(x-\mu)^2}{2\sigma^2}}\,\mathrm{d}x =\frac{\sqrt{2\pi}}{2\sigma}e^{\large-\frac{\mu^2}{2\sigma^2}}\mathrm{erfi}\left(\frac{\mu}{\sigma\sqrt2}\right)\tag{7} $$
Extended precision is not enough
It is mentioned in a comment that $x<0$ was rejected. This poses a theoretical problem. The computations above are carried out in the principal value sense, which means that a small symmetric interval $[-\delta,\delta]$ is rejected, where $\delta\to0$. However, if $x\lt\delta$ is rejected, then, as $\delta\to0$, the contribution to the expected value from the singularity grows like $$ -\frac{\log(\delta)}{\sqrt{2\pi}}e^{-50}\tag{8} $$ Even using extended precision, where $\delta=2^{-16382}$, $(8)$ amounts to about $8.74\times10^{-19}$, which is pretty insignificant. However, as $\delta\to0$, $(8)\to\infty$.
Therefore, even extended precision arithmetic is insufficient to expose the problems with a simulation where $x\lt0$ is rejected.
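For a quick numerical check of $(7)$ against the value $0.10103\ldots$ quoted in the comments, here is a small Python sketch using scipy.special.erfi (the Monte Carlo part simply draws from $N(10,1)$, where draws near zero are essentially never seen):

import numpy as np
from scipy.special import erfi

mu, sigma = 10.0, 1.0

# Closed form (7): principal value of E[1/x] under N(mu, sigma^2)
closed = (np.sqrt(2*np.pi)/(2*sigma) * np.exp(-mu**2/(2*sigma**2))
          * erfi(mu/(sigma*np.sqrt(2))))

# Crude Monte Carlo check; with mu = 10*sigma the singularity plays no role
rng = np.random.default_rng(0)
x = rng.normal(mu, sigma, 10**7)
print(closed, np.mean(1.0/x))   # both are approximately 0.101031...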
robjohn♦
\begin{document}
\title{Statistical Inference for Generalized Additive Partially Linear Model \footnote{This is a post-peer-review, pre-copyedit version of an article published in the Journal of Multivariate Analysis. The final authenticated version is available online at: http://dx.doi.org/10.1016/j.jmva.2017.07.011}}
\author{ Rong \textsc{Liu} \\
Department of Mathematics and Statistics\\ University of Toledo, OH\\ email: \texttt{[email protected]} \\ Wolfgang K. \textsc{H\"{a}rdle} \\ Center for Applied Statistics and Economics\\ Humboldt-Universit\"{a}t zu Berlin, Germany \\ and \\ School of Business \\ Singapore Management University, Singapore\\ email: \texttt{[email protected]} \\ Guoyi \textsc{Zhang} \\ Department of Mathematics and Statistics \\ The University of New Mexico, NM\\ email: \texttt{[email protected]} } \date{} \maketitle
\begin{center} \textbf{Abstract} \end{center}
The Generalized Additive Model (GAM) is a powerful tool and has been well studied. This model class helps to identify additive regression structure. Via available test procedures one may identify the regression structure even more sharply if some component functions have parametric form. Generalized Additive Partially Linear Models (GAPLM) enjoy the simplicity of the GLM and the flexibility of the GAM because they combine both parametric and nonparametric components. We use the hybrid spline-backfitted kernel estimation method, which combines the best features of both spline and kernel methods, for fast, efficient and reliable estimation under an $\alpha$-mixing condition. In addition, simultaneous confidence corridors (SCCs) for testing overall trends and an empirical likelihood confidence region for the parameters are provided under the independence condition. The asymptotic properties are obtained and simulation results support the theoretical properties. For the application, we use the GAPLM to improve the accuracy ratio of the default predictions for $19610$ German companies. The quantlets for this paper are available on https://github.com.
\noindent \textsc{JEL Classification}: {C14 G33}
\noindent \textsc{Keywords}: {B spline; empirical likelihood; default; link function; mixing; kernel estimator}
\section{Introduction}
The class of generalized additive models (GAMs) provides an effective semiparametric regression tool for high dimensional data, see [6]. For a response $Y$ and a predictor vector $\mathbf{X}=\left( X_{1},\ldots,X_{d}\right) ^{\top }$, the pdf of $Y_{i}$ conditional on $\mathbf{X}_{i}$ with respect to a fixed $\sigma $-finite measure from exponential families is
\begin{equation*}
f\left( Y_{i}|\mathbf{X}_{i},\phi \right) =\exp \left[ \left\{ Y_{i}m\left( \mathbf{X}_{i}\right) -b\left\{ m\left( \mathbf{X}_{i}\right) \right\} \right\} /a\left( \phi \right) +h\left( Y_{i},\phi \right) \right] .
\end{equation*}
The function $b$ is a given function which relates $m\left( \mathbf{x}\right) $ to the conditional variance function $\sigma ^{2}\left( \mathbf{x}\right) =\mathrm{var}\left( Y|\mathbf{X=x}\right) $ via the equation $\sigma ^{2}\left( \mathbf{x}\right) =a\left( \phi \right) b^{\prime \prime }\left\{ m\left( \mathbf{x}\right) \right\} $, in which $a\left( \phi \right) $ is a nuisance parameter that quantifies overdispersion. For the theoretical development, it is not necessary to assume that the data $\left\{ Y_{i},\mathbf{X}_{i}^{\top }\right\} _{i=1}^{n}$ come from such an exponential family, but only that the conditional variance and conditional mean are linked by the following equation
\begin{equation*}
\mathrm{var}\left( Y|\mathbf{X=x}\right) =a\left( \phi \right) b^{\prime \prime }\left[ \left( b^{\prime }\right) ^{-1}\left\{ \mathrm{E}\left( Y|\mathbf{X=x}\right) \right\} \right] .
\end{equation*}
More specifically, the model is
\begin{equation}
\mathrm{E}\left( Y|\mathbf{X}\right) =b^{\prime }\left\{ c+\sum\nolimits_{\alpha =1}^{d}m_{\alpha }\left( X_{\alpha }\right) \right\} , \label{DEF:GAM}
\end{equation}
where $b^{\prime }$ is the derivative of the function $b$. Model (\ref{DEF:GAM}) can, for example, be used in scoring methods and in analyzing default of companies (here $Y=1$ denotes default and $b^{\prime }\left( y\right) =e^{y}/\left( 1+e^{y}\right) $ is the inverse link function). Fitting Model (\ref{DEF:GAM}) to such a default data set leads to $d$ estimated component functions $\hat{m}_{\alpha }\left( \cdot \right) $, as studied in [11, 25]. Plotting these $\hat{m}_{\alpha }\left( \cdot \right) $ with simultaneous confidence corridors (SCCs) as developed by [25], one can check the functional form and therefore obtain simpler parameterizations of $m_{\alpha }$.
The typical approach is to perform a preliminary (nonparametric) analysis of the influence of the component functions, and one may then improve the model by introducing parametric components. This leads to simplification, more interpretability and higher precision in statistical calibration. With these thoughts in mind, the GAM changes to a Generalized Additive Partially Linear Model (GAPLM):
\begin{equation}
\mathrm{E}\left( Y|\mathbf{T,X}\right) =b^{\prime }\left\{ m\left( \mathbf{T},\mathbf{X}\right) \right\} , \label{DEF:GPLM}
\end{equation}
with $m\left( \mathbf{T},\mathbf{X}\right) =\mathbf{\beta }^{\top }\mathbf{T+}\sum_{\alpha =1}^{d_{2}}m_{\alpha }\left( X_{\alpha }\right) $ and
\begin{equation*}
\mathbf{\beta }=\left( \beta _{0},\beta _{1},\ldots,\beta _{d_{1}}\right) ^{\top },\mathbf{T}=\left( T_{0},T_{1},\ldots,T_{d_{1}}\right) ^{\top }, \mathbf{X}=\left( X_{1},\ldots,X_{d_{2}}\right) ^{\top },
\end{equation*}
where $T_{0}=1$, $T_{k}\in \mathbb{R}$ for $1\leq k\leq d_{1}$. In this paper, we assume the following equation
\begin{equation*}
\mathrm{var}\left( Y|\mathbf{T=t,X=x}\right) =a\left( \phi \right) b^{\prime \prime }\left[ \left( b^{\prime }\right) ^{-1}\left\{ \mathrm{E}\left( Y|\mathbf{T=t,X=x}\right) \right\} \right] .
\end{equation*}
We can write (\ref{DEF:GPLM}) in the usual regression form:
\begin{equation*}
Y_{i}=b^{\prime }\left\{ m\left( \mathbf{T}_{i},\mathbf{X}_{i}\right) \right\} +\sigma \left( \mathbf{T}_{i},\mathbf{X}_{i}\right) \varepsilon _{i}
\end{equation*}
with white noise $\varepsilon _{i}$ that satisfies $\mathrm{E}\left( \varepsilon _{i}|\mathbf{T}_{i},\mathbf{X}_{i}\right) =0$, $\mathrm{E}\left( \varepsilon _{i}^{2}|\mathbf{T}_{i},\mathbf{X}_{i}\right) =1$. For identifiability,
\begin{equation}
\mathrm{E}\left\{ m_{\alpha }\left( X_{\alpha }\right) \right\} =0,\quad 1\leq \alpha \leq d_{2}. \label{constrain}
\end{equation}
As in most works on nonparametric smoothing, estimation of the functions $\left\{ m_{\alpha }\left( x_{\alpha }\right) \right\} _{\alpha =1}^{d_{2}}$ is conducted on compact sets. Without loss of generality, let the compact set be $\mathbf{\varkappa }=\left[ 0,1\right] ^{d_{2}}$.
Some estimation methods for Model (\ref{DEF:GPLM}) have been proposed, but are either computationally expensive or lacking theoretical justification. The kernel-based backfitting and marginal integration methods e.g., in [3, 9, 24], are computationally expensive. In the meanwhile, more advanced non- and semiparametric models (without link function) have been studied, such as partially linear model and varying-coefficient model, see [10, 12, 16, 21, 22]. [21] proposed a nonconcave penalized quasi-likelihood method, with polynomial spline smoothing for estimation of $m_{{\Greekmath 010B} },1\leq {\Greekmath 010B} \leq d_{2}$, and deriving quasi-likelihood based estimators for the linear parameter $\mathbf{{\Greekmath 010C} \in }\mathbb{R}^{1+d_{1}}$. To our knowledge, [21] is a pilot paper since it provides asymptotic normality of the estimators for the parametric components in GAPLM with independent observations. However, asymptotic normality for estimations of the nonparametric component functions $m_{{\Greekmath 010B} },1\leq {\Greekmath 010B} \leq d_{2}$ and SCCs are still missing. Recently, [13] studied more complicated Generalized Additive Coefficient Model by using two-step spline method, but independent and identical assumptions are required for the asymptotic properties of the estimation and inference of $m_{{\Greekmath 010B} }$, and the asymptotic normality of parameter estimations is also missing. [5]{\ developed nonparametric analysis of deviance tools, which can be used to test the significance of the nonparametric term in generalized partially linear models with univariate nonparametric component function. [8] provided empirical likelihood based confidence region for parameter} $\mathbf{{\Greekmath 010C} }$ and pointwise confidence interval for nonparametric term in generalized partially linear models.
The spline backfitted kernel (SBK) estimation introduced in [20] combines the advantages of both kernel and spline methods and the result is balanced in terms of theory, computation, and interpretation. The basic idea is to pre-smooth the component functions by spline estimation and then use the kernel method to improve the accuracy of the estimation on a specific $ m_{{\Greekmath 010B} }$. {In this paper we extend the SBK method to calibrate Model ( \ref{DEF:GPLM}) with additive nonparametric components, as a result we obtain oracle efficiency and asymptotic normality of the estimators for both the parametric and nonparametric components under ${\Greekmath 010B} $\textit{-}mixing condition, which complicates the proof of the theoretical properties. With stronger i.i.d assumption, we provide empirical likelihood (EL) based confidence region for parameter} $\mathbf{{\Greekmath 010C} }$ due to the advantages of EL such as increase of accuracy of coverage, easy implementation, avoiding estimating variances and studentising automatically{, see [8]. In addition we provide SCCs for the nonparametric component functions based on maximal deviation distribution in [2] so one can test the hypothesis of the shape for nonparametric terms. }
The paper is organized as follows. In Section 2, we discuss the details of ( \ref{DEF:GPLM}). In Section 3, the oracle estimator and their asymptotic properties are introduced. In Section 4, the SBK estimator is introduced and the asymptotics for both the parametric and nonparametric component estimations are given. In addition, SCCs for testing overall trends and entire shapes are considered. In Section 5, we apply the methods to simulated and real data examples. All technical proofs are given in the Appendix.
\section{Model assumptions\label{sec:assumptions}}
The space of ${\Greekmath 010B} $-centered square integrable functions on $[0,1]$ is defined as in [18],
\begin{equation*} \mathcal{H}_{{\Greekmath 010B} }^{0}=\left\{ g:{\mbox{\rm E}}\left\{ g\left( X_{{\Greekmath 010B} }\right) \right\} =0,{\mbox{\rm E}}\left\{ g^{2}\left( X_{{\Greekmath 010B} }\right) \right\} <+\infty \right\} . \end{equation*} Next define the model space $\mathcal{M}$, a collection of functions on $ \mathbb{R}^{d_{2}}$ as \begin{equation*} \mathcal{M}=\left\{ g\left( \mathbf{x}\right) =\mathop{\textstyle \sum }\nolimits_{{\Greekmath 010B} =1}^{d_{2}}g_{{\Greekmath 010B} }\left( \mathbf{x}\right) ;g_{{\Greekmath 010B} }\in \mathcal{H} _{{\Greekmath 010B} }^{0}\right\} . \end{equation*} The constraints that ${\mbox{\rm E}}\left\{ g_{{\Greekmath 010B} }\left( X_{{\Greekmath 010B} }\right) \right\} =0$, $1\leq {\Greekmath 010B} \leq d_{2}$ ensure the unique additive representation of $m_{{\Greekmath 010B} }$ as expressed in (\ref{constrain}). Denote the empirical expectation by ${\mbox{\rm E}}_{n}$, then ${\mbox{\rm E}}_{n}{\Greekmath 0127} =\sum_{i=1}^{n}{\Greekmath 0127} \left( \mathbf{X}_{i}\right) /n$. For functions $ g_{1},g_{2}\in \mathcal{M}$, the theoretical and empirical inner products are defined respectively as $\left\langle g_{1},g_{2}\right\rangle ={\mbox{\rm E}} \left\{ g_{1}\left( \mathbf{X}\right) g_{2}\left( \mathbf{X}\right) \right\} $, $\left\langle g_{1},g_{2}\right\rangle _{n}={\mbox{\rm E}}_{n}\left\{ g_{1}\left( \mathbf{X}\right) g_{2}\left( \mathbf{X}\right) \right\} $. The corresponding induced norms are $\left\Vert g_{1}\right\Vert _{2}^{2}={\mbox{\rm E}} g_{1}^{2}\left( \mathbf{X}\right) $, $\left\Vert g_{1}\right\Vert _{2,n}^{2}= {\mbox{\rm E}}_{n}g_{1}^{2}\left( \mathbf{X}\right) $. More generally, we define $ \left\Vert g\right\Vert _{r}^{r}={\mbox{\rm E}}\left\vert g\left( \mathbf{X}\right) \right\vert ^{r}.$
In the paper, for any compact interval $[a,b]$, we denote the space of $p$
-th order smooth functions as $C^{\left( p\right) }[a,b]=\left\{ g|g^{\left( p\right) }\in C\left[ a,b\right] \right\} $, and the class of Lipschitz continuous functions for constant $C>0$ as $\limfunc{Lip}\left( \left[ a,b
\right] ,C\right) =\left\{ g|\left\vert g\left( x\right) -g\left( x^{\prime }\right) \right\vert \leq C\left\vert x-x^{\prime }\right\vert \RIfM@\expandafter\text@\else\expandafter\mbox\fi{, } \forall x,x^{\prime }\in \left[ a,b\right] \right\} $. For any vector $ \mathbf{x}=\left( x_{1},x_{2},\cdots ,x_{d}\right) ^{\top}$, we denote the supremum and $p$ norms as $\left\vert \mathbf{x}\right\vert =\mathrm{max} _{1\leq {\Greekmath 010B} \leq d}\left\vert x_{{\Greekmath 010B} }\right\vert $ and $\left\Vert \mathbf{x}\right\Vert _{p}=\left( \sum_{{\Greekmath 010B} =1}^{d}x_{{\Greekmath 010B} }^{p}\right) ^{1/p}$. In particular, we use $\left\Vert \mathbf{x}\right\Vert $ to denote the Euclidean norm, i.e., $p=2$. We need the following assumptions:
\begin{enumerate} \item[(A1)] \textit{The additive component functions }$m_{{\Greekmath 010B} }\in C^{\left( 1\right) }\left[ 0,1\right] ,1\leq {\Greekmath 010B} \leq d_{2}$\textit{\ with }$m_{1}\in C^{\left( 2\right) }\left[ 0,1\right] $, $m_{{\Greekmath 010B} }^{\prime }\in \limfunc{Lip}\left( \left[ 0,1\right] ,C_{m}\right) $, $2\leq {\Greekmath 010B} \leq d_{2}$\textit{\ for some constant }$C_{m}>0$\textit{.}
\item[(A2)] \textit{The inverse link function }$b^{\prime }$\textit{\ satisfies: }$b^{\prime }\in C^{2}\left( \mathbb{R}\right) ,b^{\prime \prime }\left( {\Greekmath 0112} \right) >0,{\Greekmath 0112} \in \mathbb{R}$\textit{\ and }$C_{b}> \mathrm{max}_{{\Greekmath 0112} \in \Theta }b^{\prime \prime }\left( {\Greekmath 0112} \right) \geq \mathrm{min}_{{\Greekmath 0112} \in \Theta }b^{\prime \prime }\left( {\Greekmath 0112} \right) >c_{b}$\textit{\ for constants }$C_{b}>c_{b}>0$\textit{. }
\item[(A3)] \textit{The conditional variance function} ${\Greekmath 011B} ^{2}\left( \mathbf{x}\right) $ \textit{is measurable and bounded. The errors }$\left\{
{\Greekmath 0122} _{i}\right\} _{i=1}^{n}$\textit{\ satisfy }${\mbox{\rm E}}\left( {\Greekmath 0122} _{i}|\mathcal{F}_{i}\right) =0,$\textit{\ }${\mbox{\rm E}}\left( \left\vert {\Greekmath 0122} _{i}\right\vert ^{2+{\Greekmath 0111} }\right) \leq C_{{\Greekmath 0111} }$\textit{\ for some }${\Greekmath 0111} \in \left( 1/2,+\infty \right) $\textit{\ with the sequence of }${\Greekmath 011B} $ \textit{-fields: } \newline $\mathcal{F}_{i}={\Greekmath 011B} \left\{ \left( \mathbf{X}_{j}\right) ,j\leq i;{\Greekmath 0122} _{j},j\leq i-1\right\} $\textit{\ for }$i=1,\ldots ,n$.
\item[(A4)] \textit{The density function }$f\left( \mathbf{x}\right) $ \textit{\ of }$\left( X_{1},\ldots,X_{d_{2}}\right) $\textit{\ is continuous and} \begin{equation*} 0<c_{f}\leq \mathrm{inf}_{\mathbf{x}\in \mathbf{{\Greekmath 011F} }}f\left( \mathbf{x} \right) \leq \mathrm{sup}_{\mathbf{x}\in \mathbf{\varkappa }}f\left( \mathbf{ x}\right) \leq C_{f}<\infty . \end{equation*} \textit{The marginal densities }$f_{{\Greekmath 010B} }\left( x_{{\Greekmath 010B} }\right) $ \textit{\ of }$X_{{\Greekmath 010B} }$\textit{\ have continuous derivatives on }$\left[ 0,1\right] $\textit{\ as well as the uniform upper bound }$C_{f}$\textit{\ and lower bound }$c_{f}$\textit{.}
\item[(A5)] \textit{Constants }$K_{0},{\Greekmath 0115} _{0}\in \left( 0,+\infty \right) $\textit{\ exist such that }${\Greekmath 010B} \left( n\right) \leq K_{0}e^{-{\Greekmath 0115} _{0}n}$ \textit{holds for all }$n$\textit{, with the }$ {\Greekmath 010B} $\textit{-mixing coefficients for }$\left\{ \mathbf{Z}_{i}=\left( \mathbf{T}_{i}^{\top},\mathbf{X}_{i}^{\top},{\Greekmath 0122} _{i}\right) ^{{\top} }\right\} _{i=1}^{n}$\ \textit{\ defined as } \begin{equation*} {\Greekmath 010B} \left( k\right) =\mathrm{sup}_{B\in {\Greekmath 011B} \left\{ \mathbf{Z} _{s},s\leq t\right\} ,C\in {\Greekmath 011B} \left\{ \mathbf{Z}_{s},s\geq t+k\right\} }\left\vert \limfunc{P}\left( B\cap C\right) -\limfunc{P}\left( B\right) \limfunc{P}\left( C\right) \right\vert ,k\geq 1. \end{equation*}
\item[(A5')] $\left\{ \mathbf{Z}_{i}=\left( \mathbf{T}_{i}^{\top},\mathbf{X} _{i}^{\top},{\Greekmath 0122} _{i}\right) ^{\top}\right\} _{i=1}^{n}$\textit{\ are independent and identically distributed.}
\item[(A6)] There exist constants $0<c_{{\Greekmath 010E} }<C_{{\Greekmath 010E} }<\infty $ and $ 0<c_{\mathbf{Q}}<C_{\mathbf{Q}}<\infty $ such that $c_{{\Greekmath 010E} }\leq {\mbox{\rm E}}\left(
\left\vert T_{k}\right\vert ^{2+{\Greekmath 010E} }|\mathbf{X=x}\right) \leq C_{{\Greekmath 010E} } $ for some ${\Greekmath 010E} >0,$ and $c_{\mathbf{Q}}I_{d_{1}\times d_{1}}\leq {\mbox{\rm E}}
\left( \mathbf{TT}^{\top}|\mathbf{X=x}\right) \leq C_{\mathbf{Q} }I_{d_{1}\times d_{1}}$ . \end{enumerate}
Assumptions (A1), (A2) and (A4) are standard in the GAM literature, see [19, 23], while Assumptions (A3) and (A5) are the same as those for weakly dependent data in [11, 20], and Assumption (A6) is the same as (C5) in [21]. When categorical predictors are present, we can create dummy variables in $\mathbf{T}_{i}$ and Assumption (A6) is still satisfied.
\section{Oracle estimators\label{sec:smoother}}
The aim of our analysis is to provide precise estimators for the component functions $m_{{\Greekmath 010B} }\left( \cdot \right) $ and parameters $\mathbf{{\Greekmath 010C} }$ . Without loss of generality, we may focus on $m_{1}\left( \cdot \right) $. If all the unknown $\mathbf{{\Greekmath 010C} }$ and other $\left\{ m_{{\Greekmath 010B} }\left( x_{{\Greekmath 010B} }\right) \right\} _{{\Greekmath 010B} =2}^{d_{2}}$ were known, we are in a comfortable situation since the multidimensional modelling problem has reduced to one dimension. As in [17], define for each $x_{1}\in \left[ h,1-h \right] $, $a\in A$ a local quasi log-likelihood function \begin{equation*} \tilde{\ell}_{m_{1}}\left( a,x_{1}\right) =n^{-1}\mathop{\textstyle \sum }\nolimits_{i=1}^{n} \left[ Y_{i}\left\{ a+m\left( \mathbf{T}_{i},\mathbf{X}_{i\_1}\right) \right\} -b\left\{ a+m\left( \mathbf{T}_{i},\mathbf{X}_{i\_1}\right) \right\} \right] K_{h}\left( X_{i1}-x_{1}\right) \end{equation*} with $m\left( \mathbf{T}_{i},\mathbf{X}_{i\_1}\right) =\mathbf{{\Greekmath 010C} }^{\top }\mathbf{T}_{i}+\mathop{\textstyle \sum }\nolimits_{{\Greekmath 010B} =2}^{d_{2}}m_{{\Greekmath 010B} }\left( \mathbf{X }_{i{\Greekmath 010B} }\right) $ and $K_{h}\left( u\right) =K\left( u/h\right) /h$ a kernel function $K$ with bandwidth $h$ that satisfy
\begin{enumerate} \item[(A7)] \textit{The kernel function} $K\left( \cdot \right) \in C^{1}[-1,1]$ \textit{is a symmetric pdf.} \textit{The bandwidth }$h=h_{n}$\textit{\ satisfies }$h={\scriptstyle \mathcal{O}}\left\{ n^{-1/5}(\log n)^{-1/5}\right\} $, $h^{-1}={\mathcal{O}}\left\{ n^{1/5}\left( \ln n\right) ^{\delta }\right\} $ \textit{for some constant }$\delta >1/5$\textit{.} \end{enumerate}
{Since all the {$\mathbf{{\Greekmath 010C} }$ and $\left\{ m_{{\Greekmath 010B} }\left( x_{{\Greekmath 010B} }\right) \right\} _{{\Greekmath 010B} =2}^{d_{2}}$ are known as obtained from oracle, } one can obtain the so-called oracle estimator } \begin{equation} \tilde{m}_{K,1}\left( x_{1}\right) =\func{argmax}_{a\in A}\tilde{\ell} _{m_{1}}\left( a,x_{1}\right) . \label{DEF:mtilde} \end{equation} Denote $\left\Vert K\right\Vert _{2}^{2}=\int K^{2}\left( u\right) du$, ${\Greekmath 0116} _{2}\left( K\right) =\int K\left( u\right) u^{2}du$ and the scale function $ D_{1}\left( x_{1}\right) $ and bias function $\limfunc{bias} \nolimits_{1}\left( x_{1}\right) $ \begin{equation} D_{1}\left( x_{1}\right) =f_{1}\left( x_{1}\right) {\mbox{\rm E}}\left\{ b^{\prime \prime }\left\{ m\left( \mathbf{T},\mathbf{X}\right) \right\}
|X_{1}=x_{1}\right\} , \label{DEF:D1x1} \end{equation} \begin{eqnarray} \limfunc{bias}{}_{1}\left( x_{1}\right) &=&{\Greekmath 0116} _{2}\left( K\right) \left[ m_{1}^{\prime \prime }\left( x_{1}\right) f_{1}\left( x_{1}\right) {\mbox{\rm E}}\left[ b^{\prime \prime }\left\{ m\left( \mathbf{T},\mathbf{X}\right) \right\}
|X_{1}=x_{1}\right] \right. \notag \\ &&+m_{1}^{\prime }\left( x_{1}\right) \frac{\partial }{\partial x_{1}} \left\{ f_{1}\left( x_{1}\right) {\mbox{\rm E}}\left[ b^{\prime \prime }\left\{ m\left(
\mathbf{T},\mathbf{X}\right) \right\} |X_{1}=x_{1}\right] \right\} \notag \\ &&\left. -\left\{ m_{1}^{\prime }\left( x_{1}\right) \right\} ^{2}f_{1}\left( x_{1}\right) {\mbox{\rm E}}\left[ b^{\prime \prime \prime }\left\{
m\left( \mathbf{T},\mathbf{X}\right) \right\} |X_{1}=x_{1}\right] \right] . \label{DEF:b1x1} \end{eqnarray}
\begin{lemma} \label{THM:oracleasydist}Under Assumptions (A1)-(A7), for any $x_{1}\in \left[ h,1-h\right] $, as $n\rightarrow \infty $, the oracle kernel estimator $\tilde{m}_{\limfunc{K},1}\left( x_{1}\right) $ given in (\ref {DEF:mtilde}) satisfies \begin{equation*} \mathrm{sup}_{x_{1}\in \left[ h,1-h\right] }\left\vert \tilde{m}_{\limfunc{K} ,1}\left( x_{1}\right) -m_{1}\left( x_{1}\right) \right\vert ={\mathcal{O}} _{a.s.}\left( \log n/\sqrt{nh}\right) , \end{equation*} \begin{equation*} \sqrt{nh}\left\{ \tilde{m}_{\limfunc{K},1}\left( x_{1}\right) -m_{1}\left( x_{1}\right) -\limfunc{bias}\nolimits_{1}\left( x_{1}\right) h^{2}/D_{1}\left( x_{1}\right) \right\} \overset{\tciLaplace }{\rightarrow } N\left( 0,D_{1}\left( x_{1}\right) ^{-1}v_{1}^{2}\left( x_{1}\right) D_{1}\left( x_{1}\right) ^{-1}\right) , \end{equation*} with \begin{equation*} v_{1}^{2}\left( x_{1}\right) =f_{1}\left( x_{1}\right) {\mbox{\rm E}}\left\{ {\Greekmath 011B}
^{2}\left( \mathbf{T},\mathbf{X}\right) |X_{1}=x_{1}\right\} \left\Vert K\right\Vert _{2}^{2}. \end{equation*} \end{lemma}
Lemma 1 is given in [11]. The above oracle idea applies to the parametric part as well. Define the log-likelihood function \begin{equation} \tilde{\ell}_{\mathbf{{\Greekmath 010C} }}\left( \mathbf{a}\right) =n^{-1}\sum\nolimits_{i=1}^{n}\left[ Y_{i}\left\{ \mathbf{a}^{\top }\mathbf{T }_{i}+m\left( \mathbf{X}_{i}\right) \right\} -b\left\{ \mathbf{a}^{\top } \mathbf{T}_{i}+m\left( \mathbf{X}_{i}\right) \right\} \right] , \label{DEF:ltilde} \end{equation} where $m\left( \mathbf{X}_{i}\right) =\sum_{{\Greekmath 010B} =1}^{d_{2}}m_{{\Greekmath 010B} }\left( X_{i{\Greekmath 010B} }\right) $. The infeasible estimator of $\mathbf{{\Greekmath 010C} }$ is $\mathbf{\tilde{{\Greekmath 010C}}}=\func{argmax}_{\mathbf{a}\in \mathbb{R}^{1+d_{1}}} \tilde{\ell}_{\mathbf{{\Greekmath 010C} }}\left( \mathbf{a}\right) $. Clearly, ${\Greekmath 0272} \tilde{\ell}_{\mathbf{{\Greekmath 010C} }}\left( \mathbf{{\Greekmath 010C} }\right) =\mathbf{0}$. To maximize (\ref{DEF:ltilde})
, we have \begin{equation*} n^{-1}\mathop{\textstyle \sum }\nolimits_{i=1}^{n}\left[ Y_{i}\mathbf{T}_{i}-b^{\prime }\left\{ \mathbf{a}^{\top }\mathbf{T}_{i}+m\left( \mathbf{X}_{i}\right) \right\} \mathbf{T}_{i}\right] =\mathbf{0,} \end{equation*} then the empirical likelihood ratio is \begin{equation*}
\tilde{R}\left( \mathbf{a}\right) =\mathrm{max} \left\{ \Pi _{i=1}^{n}np_{i}|\Sigma _{i=1}^{n}p_{i}Z_{i}\left( \mathbf{a}\right) = \mathbf{0},p_{i}\geq 0,\Sigma _{i=1}^{n}p_{i}=1\right\} \end{equation*} where $\ Z_{i}\left( \mathbf{a}\right) =\left[ Y_{i}-b^{\prime }\left\{ \mathbf{a}^{\top }\mathbf{T}_{i}+m\left( \mathbf{X}_{i}\right) \right\} \right] \mathbf{T}_{i}$.
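As a computational remark, the ratio $\tilde{R}\left( \mathbf{a}\right) $ can be evaluated through the usual Lagrange multiplier representation $p_{i}=n^{-1}\left\{ 1+\lambda ^{\top }Z_{i}\left( \mathbf{a}\right) \right\} ^{-1}$, so that $-2\log \tilde{R}\left( \mathbf{a}\right) =2\sum_{i=1}^{n}\log \left\{ 1+\lambda ^{\top }Z_{i}\left( \mathbf{a}\right) \right\} $ with $\lambda $ solving $\sum_{i=1}^{n}Z_{i}\left( \mathbf{a}\right) /\left\{ 1+\lambda ^{\top }Z_{i}\left( \mathbf{a}\right) \right\} =\mathbf{0}$. A minimal Python sketch (the function name is ours; it omits the positivity and convex-hull safeguards that a careful implementation requires):
\begin{verbatim}
import numpy as np
from scipy.optimize import root

def neg2_log_EL(Z):
    """-2 log R for the estimating equations sum_i p_i Z_i = 0 (a sketch)."""
    n, d = Z.shape
    # Solve for the Lagrange multiplier lambda, then evaluate the log ratio.
    lam = root(lambda l: (Z / (1.0 + Z @ l)[:, None]).sum(axis=0),
               np.zeros(d)).x
    return 2.0 * np.sum(np.log1p(Z @ lam))
\end{verbatim}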
\begin{theorem} \label{THM:betatilde-beta}(i) Under Assumptions (A1)-(A6), as $n\rightarrow \infty ,$ \begin{equation*} \left\vert \mathbf{\tilde{{\Greekmath 010C}}}-\mathbf{{\Greekmath 010C} }-\left[ {\mbox{\rm E}} b^{\prime \prime }\left\{ m\left( \mathbf{T},\mathbf{X}\right) \right\} \mathbf{TT}^{\top} \right] ^{-1}n^{-1}\mathop{\textstyle \sum }\nolimits_{i=1}^{n}{\Greekmath 011B} \left( \mathbf{T}_{i}, \mathbf{X}_{i}\right) {\Greekmath 0122} _{i}\mathbf{T}_{i}\right\vert ={\mathcal{O}} _{a.s.}\left( n^{-1}\left( \log n\right) ^{2}\right) , \end{equation*} \begin{equation*} \sqrt{n}\left( \mathbf{\tilde{{\Greekmath 010C}}}-\mathbf{{\Greekmath 010C} }\right) \overset{ \tciLaplace }{\rightarrow }N\left( \mathbf{0},a\left( {\Greekmath 011E} \right) \left[ {\mbox{\rm E}} b^{\prime \prime }\left\{ m\left( \mathbf{T},\mathbf{X}\right) \right\} \mathbf{TT}^{\top}\right] ^{-1}\right) . \end{equation*} (ii) Under Assumptions (A1)-(A4), (A5') and (A6), \begin{equation*} -2\log \tilde{R}\left( \mathbf{{\Greekmath 010C} }\right) \overset{\tciLaplace }{ \rightarrow }{\Greekmath 011F} _{d_{1}}^{2}. \end{equation*} \end{theorem}
Although the oracle estimators $\mathbf{\tilde{{\Greekmath 010C}}}$ and $\tilde{m} _{K,1}\left( x_{1}\right) $ enjoy the desirable theoretical properties in Theorem \ref{THM:betatilde-beta} and Lemma \ref{THM:oracleasydist}, they are not a feasible statistic as its computation is based on the knowledge of unavailable component functions $\left\{ m_{{\Greekmath 010B} }\left( x_{{\Greekmath 010B} }\right) \right\} _{{\Greekmath 010B} =2}^{d_{2}}$.
\section{Spline-backfitted kernel estimators\label{sec:estimator}}
In practice, the remaining components $\left\{ m_{\alpha }\left( x_{\alpha }\right) \right\} _{\alpha =2}^{d_{2}}$ are of course unknown and need to be approximated. We obtain the spline-backfitted kernel estimators by first estimating $\left\{ m_{\alpha }\left( x_{\alpha }\right) \right\} _{\alpha =2}^{d_{2}}$ and the unknown $\mathbf{\beta }$ by splines, and then employing these pilot estimates to estimate $m_{1}\left( x_{1}\right) $ as in (\ref{DEF:mtilde}). First, we introduce the linear spline basis as in [10]. Let $0=\xi _{0}<\xi _{1}<\cdots <\xi _{N}<\xi _{N+1}=1$ denote a sequence of equally spaced points, called interior knots, on $\left[ 0,1\right] $. Denote by $H=\left( N+1\right) ^{-1}$ the width of each subinterval $\left[ \xi _{J},\xi _{J+1}\right] ,0\leq J\leq N$, and denote the degenerate knots $\xi _{-1}=0,\xi _{N+2}=1$. We need the following assumption:
\begin{enumerate} \item[(A8)] \textit{The number of interior knots} $N\thicksim n^{1/4}\log n,$ \textit{\ i.e.,} $c_{N}n^{1/4}\log n\leq N\leq C_{N}n^{1/4}\log n$\textit{\ for some constants }$c_{N}$,$C_{N}$\textit{\ }$>0$. \end{enumerate}
\noindent Following [11], for $J=0,\ldots ,N+1$, define the linear B spline basis: \begin{equation*} b_{J}\left( x\right) =\left( 1-\left\vert x-{\Greekmath 0118} _{J}\right\vert /H\right) _{+}=\left\{ \begin{array}{c} \left( N+1\right) x-J+1 \\ J+1-\left( N+1\right) x \\ 0 \end{array} \begin{array}{c} , \\ , \\ , \end{array} \begin{array}{c} {\Greekmath 0118} _{J-1}\leq x\leq {\Greekmath 0118} _{J} \\ {\Greekmath 0118} _{J}\leq x\leq {\Greekmath 0118} _{J+1} \\ \RIfM@\expandafter\text@\else\expandafter\mbox\fi{otherwise} \end{array} \right. , \end{equation*} the space of ${\Greekmath 010B} $-empirically centered linear spline functions on $ [0,1]:$ \begin{equation*} G_{n,{\Greekmath 010B} }^{0}=\left\{ g_{{\Greekmath 010B} }:g_{{\Greekmath 010B} }\left( x_{{\Greekmath 010B} }\right) =\mathop{\textstyle \sum }\nolimits_{J=0}^{N+1}{\Greekmath 0115} _{J}b_{J}\left( x_{{\Greekmath 010B} }\right) ,{\mbox{\rm E}} _{n}\left\{ g_{{\Greekmath 010B} }\left( X_{{\Greekmath 010B} }\right) \right\} =0\right\} ,1\leq {\Greekmath 010B} \leq d_{2}, \end{equation*} and the space of additive spline functions on $\mathbf{{\Greekmath 011F} }$: \begin{equation*} G_{n}^{0}=\left\{ g\left( \mathbf{x}\right) =\mathop{\textstyle \sum }\nolimits_{{\Greekmath 010B} =1}^{d_{2}}g_{{\Greekmath 010B} }\left( x_{{\Greekmath 010B} }\right) ;g_{{\Greekmath 010B} }\in G_{n,{\Greekmath 010B} }^{0}\right\} . \end{equation*} Define the log-likelihood function \begin{equation} \hat{L}\left( \mathbf{{\Greekmath 010C} ,}g\right) =n^{-1}\sum\nolimits_{i=1}^{n}\left[ Y_{i}\left\{ \mathbf{{\Greekmath 010C} }^{\top }\mathbf{T}_{i}\mathbf{+}g\left( \mathbf{X }_{i}\right) \right\} -b\left\{ \mathbf{{\Greekmath 010C} }^{\top }\mathbf{T} _{i}+g\left( \mathbf{X}_{i}\right) \right\} \right] ,g\in G_{n}^{0}, \label{DEF:splinelikelihood} \end{equation} which according to Lemma 14 of [19], has a unique maximizer with probability approaching $1$. The multivariate function $m\left( \mathbf{x}\right) $ is then estimated by the additive spline function $\hat{m}\left( \mathbf{x} \right) $ with \begin{equation*} \hat{m}\left( \mathbf{t},\mathbf{x}\right) =\mathbf{\hat{{\Greekmath 010C}}}^{\top } \mathbf{t+}\hat{m}\left( \mathbf{x}\right) =\func{argmax}_{g\in G_{n}^{0}} \hat{L}\left( \mathbf{{\Greekmath 010C} ,}g\right) . \end{equation*} Since $\hat{m}\left( \mathbf{x}\right) \in G_{n}^{0}$, one can write $\hat{m} \left( \mathbf{x}\right) =\sum_{{\Greekmath 010B} =1}^{d_{2}}\hat{m}_{{\Greekmath 010B} }\left( x_{{\Greekmath 010B} }\right) $ for $\hat{m}_{{\Greekmath 010B} }\left( x_{{\Greekmath 010B} }\right) \in G_{n,{\Greekmath 010B} }^{0}$. Next define the log-likelihood function \begin{equation} \hat{\ell}_{m_{1}}\left( a,x_{1}\right) =n^{-1}\mathop{\textstyle \sum }\nolimits_{i=1}^{n}\left[ Y_{i}\left\{ a+\hat{m}\left( \mathbf{T}_{i},\mathbf{X}_{i\_1}\right) \right\} -b\left\{ a+\hat{m}\left( \mathbf{T}_{i},\mathbf{X}_{i\_1}\right) \right\} \right] K_{h}\left( X_{i1}-x_{1}\right) \label{DEF:lhat} \end{equation} where $\hat{m}\left( \mathbf{T}_{i},\mathbf{X}_{i\_1}\right) =\mathbf{\hat{ {\Greekmath 010C}}}^{\top }\mathbf{T}_{i}+\sum_{{\Greekmath 010B} =2}^{d_{2}}\hat{m}_{{\Greekmath 010B} }\left( X_{i{\Greekmath 010B} }\right) $. Define the SBK estimator as: \begin{equation} \hat{m}_{\func{SBK},1}\left( x_{1}\right) =\func{argmax}_{a\in A}\hat{\ell} _{m_{1}}\left( a,x_{1}\right) . \label{DEF:mstarhat} \end{equation}
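For implementation purposes, the basis function $b_{J}$ above is simply a `hat' function centered at the knot $\xi _{J}=J/\left( N+1\right) $; a minimal Python sketch (the function name is ours):
\begin{verbatim}
import numpy as np

def b_J(x, J, N):
    """Linear B-spline b_J(x) = (1 - |x - xi_J|/H)_+ with H = 1/(N+1)."""
    H = 1.0 / (N + 1)
    return np.maximum(1.0 - np.abs(np.asarray(x) - J * H) / H, 0.0)
\end{verbatim}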
\begin{theorem} \label{THM:mhat-mtilde}Under Assumptions (A1)-(A8), as $n\rightarrow \infty $ , $\hat{m}_{\func{SBK},1}\left( x_{1}\right) $ is oracally efficient, \begin{equation*} \mathrm{sup}_{x_{1}\in \lbrack 0,1]}\left\vert \hat{m}_{\func{SBK},1}\left( x_{1}\right) -\tilde{m}_{K,1}\left( x_{1}\right) \right\vert ={\mathcal{O}} _{a.s.}\left( n^{-1/2}\log n\right) . \end{equation*} \end{theorem}
The following corollary is a consequence of Lemma \ref{THM:oracleasydist} and Theorem \ref{THM:mhat-mtilde}.
\begin{corollary} \label{COR:SBKproperties}Under Assumptions (A1)-(A8), as $n\rightarrow \infty ,$ the SBK estimator $\hat{m}_{\func{SBK},1}\left( x_{1}\right) $ given in (\ref{DEF:mstarhat}) satisfies \begin{equation*} \mathrm{sup}_{x_{1}\in \left[ h,1-h\right] }\left\vert \hat{m}_{\func{SBK} ,1}\left( x_{1}\right) -m_{1}\left( x_{1}\right) \right\vert ={\mathcal{O}} _{a.s.}\left( \log n/\sqrt{nh}\right) \end{equation*} and for any $x_{1}\in \left[ h,1-h\right] $, with $\limfunc{bias} \nolimits_{1}\left( x_{1}\right) $ as in (\ref{DEF:b1x1}) and $D_{1}\left( x_{1}\right) $ in (\ref{DEF:D1x1}) \begin{equation*} \sqrt{nh}\left\{ \hat{m}_{\func{SBK},1}\left( x_{1}\right) -m_{1}\left( x_{1}\right) -\limfunc{bias}\nolimits_{1}\left( x_{1}\right) h^{2}/D_{1}\left( x_{1}\right) \right\} \overset{\tciLaplace }{\rightarrow } N\left( 0,D_{1}\left( x_{1}\right) ^{-1}v_{1}^{2}\left( x_{1}\right) D_{1}\left( x_{1}\right) ^{-1}\right) . \end{equation*} \end{corollary}
Denote $a_{h}=\sqrt{-2\mathop{\rm{log}}h},C\left( K\right) =\left\Vert K^{\prime }\right\Vert _{2}^{2}\left\Vert K\right\Vert _{2}^{-2}$ and for any ${\Greekmath 010B} \in \left( 0,1\right) $, the quantile \begin{equation*} Q_{h}({\Greekmath 010B} )=a_{h}+a_{h}^{-1}\left[ \mathop{\rm{log}}\left\{ \sqrt{C\left( K\right) }/\left( 2{\Greekmath 0119} \right) \right\} -\mathop{\rm{log}}\left\{ - \mathop{\rm{log}}\sqrt{1-{\Greekmath 010B} }\right\} \right] . \end{equation*} Also with $D_{1}\left( x_{1}\right) $ and $v_{1}^{2}\left( x_{1}\right) $ given in (\ref{DEF:D1x1}), define \begin{equation*} {\Greekmath 011B} _{n}\left( x_{1}\right) =n^{-1/2}h^{-1/2}v_{1}\left( x_{1}\right) D_{1}^{-1}\left( x_{1}\right) . \end{equation*}
\begin{theorem} \label{THM:bands}Under Assumptions (A1)-(A4), (A5'), (A6)-(A8), as $ n\rightarrow \infty ,$ \begin{equation*} \lim_{n\rightarrow \infty }\Pr \left\{ \mathrm{sup}_{x_{1}\in \lbrack h,1-h]}\left\vert \hat{m}_{\func{SBK},1}\left( x_{1}\right) -m_{1}\left( x_{1}\right) \right\vert /{\Greekmath 011B} _{n}\left( x_{1}\right) \leq Q_{h}\left( {\Greekmath 010B} \right) \right\} =1-{\Greekmath 010B} . \end{equation*} A $100\left( 1-{\Greekmath 010B} \right) \%$ simultaneous confidence band for $ m_{1}\left( x_{1}\right) $ is \begin{equation*} \hat{m}_{\func{SBK},1}\left( x_{1}\right) \pm {\Greekmath 011B} _{n}\left( x_{1}\right) Q_{h}\left( {\Greekmath 010B} \right) . \end{equation*} \end{theorem}
In fact, $\mathbf{\hat{{\Greekmath 010C}}}$ obtained by maximizing (\ref {DEF:splinelikelihood}) is equivalent to $\mathbf{\hat{{\Greekmath 010C}}}_{\func{SBK}}= \func{argmax}_{\mathbf{a}\in \mathbb{R}^{1+d_{1}}}\hat{\ell}_{\mathbf{{\Greekmath 010C} } }\left( \mathbf{a}\right) $ with \begin{equation*} \hat{\ell}_{\mathbf{{\Greekmath 010C} }}\left( \mathbf{a}\right) =n^{-1}\mathop{\textstyle \sum }\nolimits_{i=1}^{n}\left[ Y_{i}\left\{ \mathbf{a}^{\top}\mathbf{T }_{i}+\hat{m}\left( \mathbf{X}_{i}\right) \right\} -b\left\{ \mathbf{a} ^{\top}\mathbf{T}_{i}+\hat{m}\left( \mathbf{X}_{i}\right) \right\} \right] \end{equation*} in which $\hat{m}\left( \mathbf{X}_{i}\right) =\sum_{{\Greekmath 010B} =1}^{d_{2}}\hat{m }_{{\Greekmath 010B} }\left( X_{i{\Greekmath 010B} }\right) $. The empirical likelihood ratio is \begin{equation*}
\hat{R}\left( \mathbf{a}\right) =\mathrm{max} \left\{ \Pi _{i=1}^{n}np_{i}|\Sigma _{i=1}^{n}p_{i}\hat{Z}_{i}\left( \mathbf{a}\right) = \mathbf{0},p_{i}\geq 0,\Sigma _{i=1}^{n}p_{i}=1\right\} \end{equation*} where $\ \hat{Z}_{i}\left( \mathbf{a}\right) =\left[ Y_{i}-b^{\prime }\left\{ \mathbf{a}^{\top}\mathbf{T}_{i}+\hat{m}\left( \mathbf{X}_{i}\right) \right\} \right] \mathbf{T}_{i}$. Similar to Theorem \ref{THM:mhat-mtilde}, the main result shows that the difference between $\mathbf{\hat{{\Greekmath 010C}}}$ and its infeasible counterpart $\mathbf{\tilde{{\Greekmath 010C}}}$ is asymptotically negligible.
\begin{theorem} \label{THM:betahat-beta}(i) Under Assumptions (A1)-(A6) and (A8), as $ n\rightarrow \infty $, $\mathbf{\hat{{\Greekmath 010C}}}$ is oracally efficient, i.e., $ \sqrt{n}\left( \hat{{\Greekmath 010C}}_{k}-\tilde{{\Greekmath 010C}}_{k}\right) \overset{p}{ \rightarrow }0$ for $0\leq k\leq d_{1}$ and hence \begin{equation*} \sqrt{n}\left( \mathbf{\hat{{\Greekmath 010C}}}-\mathbf{{\Greekmath 010C} }\right) \overset{ \tciLaplace }{\rightarrow }N\left( \mathbf{0},a\left( {\Greekmath 011E} \right) \left[ {\mbox{\rm E}} b^{\prime \prime }\left\{ m\left( \mathbf{T},\mathbf{X}\right) \right\} \mathbf{TT}^{\top}\right] ^{-1}\right) . \end{equation*} (ii) Under Assumptions (A1)-(A4), (A5'), (A6) and (A8), as $n\rightarrow \infty ,$ \begin{equation*} \mathrm{sup} \left\vert -2\log \hat{R}\left( \mathbf{{\Greekmath 010C} }\right) +2\log \tilde{R}\left( \mathbf{{\Greekmath 010C} }\right) \right\vert ={\scriptstyle \mathcal{O}}_{p}\left( 1\right) , \end{equation*} so \begin{equation*} -2\log \hat{R}\left( \mathbf{{\Greekmath 010C} }\right) \overset{\tciLaplace }{ \rightarrow }{\Greekmath 011F} _{d_{1}}^{2}. \end{equation*} \end{theorem}
As a reviewer pointed out, an obvious advantage of GAPLM over GAM is the capability of including categorical predictors. Since $m_{\alpha }$ is not a function of $\mathbf{T}$ in GAPLM, we can simply create dummy variables to represent the categorical effects and use spline estimation. [14] proposed spline estimation combined with categorical kernel functions to handle the case when the function $m_{\alpha }$ depends on categorical predictors.
\section{Examples\label{sec:examples}}
We have applied the SBK procedure to both simulated (Example 1) and real (Example 2) data and implemented our algorithms with the following rule-of-thumb number of interior knots \begin{equation*} N=N_{n}=\mathrm{min}\left( \left\lfloor n^{1/4}\log n\right\rfloor +1,\left\lfloor n/\left( 4d\right) -1/d\right\rfloor -1\right) \end{equation*} which satisfies (A8), i.e., $N=N_{n}\thicksim n^{1/4}\log n$, and ensures that the number of parameters in the linear least squares problem is less than $n/4$, i.e., $1+d\left( N+1\right) \leq n/4$. The bandwidth $h_{\alpha }$ is computed in the asymptotically optimal way as in [11].
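In code, the rule of thumb is simply (the function name is ours):
\begin{verbatim}
import math

def rule_of_thumb_knots(n, d):
    """N = min(floor(n^(1/4) * log n) + 1, floor(n/(4d) - 1/d) - 1)."""
    return min(math.floor(n ** 0.25 * math.log(n)) + 1,
               math.floor(n / (4 * d) - 1 / d) - 1)
\end{verbatim}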
\subsection{\textbf{Example 1}}
The data are generated from the model \begin{equation*}
\Pr (Y=1|\mathbf{T=t,X=x})=b^{\prime }\left\{ \mathbf{\beta }^{\top }\mathbf{T+}\sum\nolimits_{\alpha =1}^{d_{2}}m_{\alpha }\left( X_{\alpha }\right) \right\} ,\quad b^{\prime }\left( x\right) =\frac{e^{x}}{1+e^{x}} \end{equation*} with $d_{1}=2,d_{2}=5,\mathbf{\beta }=\left( \beta _{0},\beta _{1},\beta _{2}\right) ^{\top }=\left( 1,1,1\right) ^{\top },m_{1}\left( x\right) =m_{2}\left( x\right) =m_{3}\left( x\right) =\sin \left( 2\pi x\right) $, $m_{4}\left( x\right) =\Phi \left( 6x-3\right) -0.5$ and $m_{5}\left( x\right) =x^{2}-1/3$, where $\Phi $ is the standard normal cdf. The predictors are generated by transforming the following vector autoregression (VAR) equation for $0\leq r_{1},r_{2}<1,1\leq i\leq n$, $\mathbf{Z}_{0}=\mathbf{0}$ \begin{eqnarray*} \mathbf{Z}_{i} &=&r_{1}\mathbf{Z}_{i-1}+\mathbf{\varepsilon }_{i},\quad \mathbf{\varepsilon }_{i}\sim \mathcal{N}\left( 0,\Sigma \right) ,\Sigma =\left( 1-r_{2}\right) \mathbf{I}_{d\times d}+r_{2}\mathbf{1}_{d}\mathbf{1}_{d}^{\top },d=d_{1}+d_{2},1\leq i\leq n, \\ \mathbf{T}_{i} &=&\left( 1,Z_{i1},\ldots,Z_{id_{1}}\right) ^{\top },\quad X_{i\alpha }=\Phi \left( \sqrt{1-r_{1}^{2}}Z_{i\alpha }\right) ,1\leq i\leq n,1+d_{1}\leq \alpha \leq d_{1}+d_{2}, \end{eqnarray*} with stationary $\mathbf{Z}_{i}=\left( Z_{i1},\ldots,Z_{id}\right) ^{\top }\sim N\left\{ 0,\left( 1-r_{1}^{2}\right) ^{-1}\Sigma \right\} $, $\mathbf{1}_{d}=\left( 1,\ldots,1\right) ^{\top }$ and $\mathbf{I}_{d\times d}$ is the $d\times d$ identity matrix. The $X$ is transformed from $Z$ to satisfy Assumption (A4). In this study, we selected four scenarios: $r_{1}=0$, $r_{2}=0;r_{1}=0.5$, $r_{2}=0;r_{1}=0$, $r_{2}=0.5;r_{1}=0.5,r_{2}=0.5$. The parameter $r_{1}$ controls the dependence between observations and $r_{2}$ controls the correlation between variables. In the selected scenarios, $r_{1}=0$ indicates independent observations and $r_{1}=0.5$ $\alpha $-mixing observations; $r_{2}=0$ indicates independent variables and $r_{2}=0.5$ correlated variables within each observation. Define the empirical relative efficiency of $\hat{\beta}_{1}$ with respect to $\tilde{\beta}_{1}$ as $\func{EFF}_{r}\left( \hat{\beta}_{1}\right) =\left\{ \text{MSE}\left( \tilde{\beta}_{1}\right) /\text{MSE}\left( \hat{\beta}_{1}\right) \right\} ^{1/2}.$
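For completeness, the data generating process of this example can be reproduced in a few lines of Python; the sketch below (function and variable names are ours) follows the description above, including the start at $\mathbf{Z}_{0}=\mathbf{0}$:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def simulate_example1(n, r1=0.0, r2=0.0, d1=2, d2=5, seed=0):
    """Draw (Y, T, X) from the Example 1 design."""
    rng = np.random.default_rng(seed)
    d = d1 + d2
    Sigma = (1 - r2) * np.eye(d) + r2 * np.ones((d, d))
    L = np.linalg.cholesky(Sigma)
    Z = np.zeros((n + 1, d))
    for i in range(1, n + 1):                  # Z_i = r1 * Z_{i-1} + eps_i
        Z[i] = r1 * Z[i - 1] + L @ rng.standard_normal(d)
    Z = Z[1:]
    T = np.column_stack([np.ones(n), Z[:, :d1]])
    X = norm.cdf(np.sqrt(1 - r1 ** 2) * Z[:, d1:])
    m = (np.sin(2 * np.pi * X[:, 0]) + np.sin(2 * np.pi * X[:, 1])
         + np.sin(2 * np.pi * X[:, 2]) + norm.cdf(6 * X[:, 3] - 3) - 0.5
         + X[:, 4] ** 2 - 1.0 / 3.0)
    eta = T @ np.array([1.0, 1.0, 1.0]) + m    # beta = (1, 1, 1) for d1 = 2
    Y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))
    return Y, T, X
\end{verbatim}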
Table \ref{Tablesimu1} shows the mean of bias, variances, MSEs and EFFs of $ \hat{{\Greekmath 010C}}_{1}$ for $R=1000$ with sample sizes $\ n=500,1000,2000,4000$. The results show that the estimator works as the asymptotic theory indicates, see Theorem \ref{THM:betahat-beta} (i). \begin{table}[th] \begin{center} \begin{tabular}{cccccc} \hline\hline $r$ & $n$ & $10\times \overline{\RIfM@\expandafter\text@\else\expandafter\mbox\fi{BIAS}}$ & $100\times \overline{\RIfM@\expandafter\text@\else\expandafter\mbox\fi{ VARIANCE}}$ & $100\times \overline{\RIfM@\expandafter\text@\else\expandafter\mbox\fi{MSE}}$ & $\overline{\func{EFF}} \left( \hat{{\Greekmath 010C}}_{1}\right) $ \\ \hline \multicolumn{1}{l}{$ \begin{array}{c} r_{1}=0 \\ r_{2}=0 \end{array} $} & $ \begin{array}{c} 500 \\ 1000 \\ 2000 \\ 4000 \end{array} $ & $ \begin{array}{c} 1.509 \\ 0.727 \\ 0.408 \\ 0.240 \end{array} $ & $ \begin{array}{c} 2.018 \\ 1.197 \\ 0.626 \\ 0.282 \end{array} $ & $ \begin{array}{c} 4.298 \\ 1.726 \\ 0.793 \\ 0.339 \end{array} $ & $ \begin{array}{c} 0.8436 \\ 0.8749 \\ 0.9189 \\ 0.9534 \end{array} $ \\ \hline \multicolumn{1}{l}{$ \begin{array}{c} r_{1}=0.5 \\ \multicolumn{1}{l}{r_{2}=0} \end{array} $} & $ \begin{array}{c} 500 \\ 1000 \\ 2000 \\ 4000 \end{array} $ & $ \begin{array}{c} 1.473 \\ 0.834 \\ 0.476 \\ 0.260 \end{array} $ & $ \begin{array}{c} 3.136 \\ 1.287 \\ 0.674 \\ 0.202 \end{array} $ & $ \begin{array}{c} 5.306 \\ 1.983 \\ 0.901 \\ 0.270 \end{array} $ & $ \begin{array}{c} 0.8392 \\ 0.8873 \\ 0.9294 \\ 0.9665 \end{array} $ \\ \hline \multicolumn{1}{l}{$ \begin{array}{l} r_{1}=0 \\ \multicolumn{1}{c}{r_{2}=0.5} \end{array} $} & $ \begin{array}{c} 500 \\ 1000 \\ 2000 \\ 4000 \end{array} $ & $ \begin{array}{c} 1.327 \\ 0.699 \\ 0.665 \\ 0.390 \end{array} $ & $ \begin{array}{c} 3.880 \\ 1.851 \\ 0.739 \\ 0.290 \end{array} $ & $ \begin{array}{c} 5.642 \\ 2.339 \\ 1.182 \\ 0.442 \end{array} $ & $ \begin{array}{c} 0.8475 \\ 0.8856 \\ 0.9353 \\ 0.9479 \end{array} $ \\ \hline \multicolumn{1}{l}{$ \begin{array}{c} r_{1}=0.5 \\ r_{2}=0.5 \end{array} $} & $ \begin{array}{c} 500 \\ 1000 \\ 2000 \\ 4000 \end{array} $ & $ \begin{array}{c} 1.635 \\ 0.901 \\ 0.529 \\ 0.209 \end{array} $ & $ \begin{array}{c} 4.230 \\ 1.190 \\ 0.806 \\ 0.366 \end{array} $ & $ \begin{array}{c} 6.903 \\ 2.002 \\ 1.086 \\ 0.410 \end{array} $ & $ \begin{array}{c} 0.8203 \\ 0.8758 \\ 0.9304 \\ 0.9483 \end{array} $ \\ \hline \end{tabular} \end{center} \caption{The mean of $10\times $Bias, $100\times $Variances, $100\times $ MSEs and EFFs of$\ \hat{\protect{\Greekmath 010C}}_{1}$ from $1000$ replications. } \label{Tablesimu1} \end{table} Figure \ref{Figuresimu1} shows the kernel densities of $\hat{{\Greekmath 010C}}_{1}$s for $n=500,1000,2000,4000$ from $1000$ replications, again the theoretical properties are supported. \begin{figure}\label{Figuresimu1}
\end{figure} Table \ref{Tablesimu5} shows the simulation results of the empirical likelihood confidence interval for ${\Greekmath 010C} $ with $n=500,1000,2000,4000$ and $ r_{1}=0,${$r_{2}=0$} from $1000$ replications. The mean and standard deviation of $-2\log \hat{R}\left( \mathbf{{\Greekmath 010C} }\right) +2\log \tilde{R} \left( \mathbf{{\Greekmath 010C} }\right) $ (DIFF) support the oracle efficiency in Theorem \ref{THM:betahat-beta} (ii). The performance of empirical likelihood confidence interval are compared with the wald-type one and it is clear that they have similar performance but empirical likelihood confidence interval has better coverage ratio and shorter average length. \begin{table}[th] \begin{center} \begin{tabular}{cccccc} \hline\hline & & $n=500$ & $n=1000$ & $n=2000$ & $n=4000$ \\ \hline Coverage Ratio & $ \begin{array}{c} \RIfM@\expandafter\text@\else\expandafter\mbox\fi{EL} \\ \RIfM@\expandafter\text@\else\expandafter\mbox\fi{Wald} \end{array} $ & $ \begin{array}{c} 0.923 \\ 0.918 \end{array} $ & $ \begin{array}{c} 0.941 \\ 0.934 \end{array} $ & $ \begin{array}{c} 0.946 \\ 0.944 \end{array} $ & $ \begin{array}{c} 0.951 \\ 0.948 \end{array} $ \\ \hline Average Length & $ \begin{array}{c} \RIfM@\expandafter\text@\else\expandafter\mbox\fi{EL} \\ \RIfM@\expandafter\text@\else\expandafter\mbox\fi{Wald} \end{array} $ & $ \begin{array}{c} 1.2675 \\ 1.4073 \end{array} $ & $ \begin{array}{c} 0.9474 \\ 1.0447 \end{array} $ & $ \begin{array}{c} 0.7105 \\ 0.7480 \end{array} $ & $ \begin{array}{c} 0.5339 \\ 0.5625 \end{array} $ \\ \hline $\RIfM@\expandafter\text@\else\expandafter\mbox\fi{DIFF}$ & $ \begin{array}{c} \RIfM@\expandafter\text@\else\expandafter\mbox\fi{MEAN} \\ \RIfM@\expandafter\text@\else\expandafter\mbox\fi{SD} \end{array} $ & $ \begin{array}{c} 0.1213 \\ 0.5199 \end{array} $ & $ \begin{array}{c} 0.1023 \\ 0.4703 \end{array} $ & $ \begin{array}{c} 0.0981 \\ 0.3667 \end{array} $ & $ \begin{array}{c} 0.0726 \\ 0.3242 \end{array} $ \\ \hline \end{tabular} \end{center} \caption{Coverage ratios and average length of the empirical likelihood confidence interval (EL) and Wald-type confidence interval for $\protect {\Greekmath 010C} _{1}$ for $n=500,1000,2000,4000$ with $r_{1}=0$ from $1000$ replications. DIFF$=-2\log \hat{R}\left( \mathbf{\protect{\Greekmath 010C} }\right) +2\log \tilde{R}\left( \mathbf{\protect{\Greekmath 010C} }\right) $ is the difference between $-2\log \hat{R}\left( \mathbf{\protect{\Greekmath 010C} }\right) $ and $-2\log \tilde{R}\left( \mathbf{\protect{\Greekmath 010C} }\right) $.} \label{Tablesimu5} \end{table} Next for ${\Greekmath 010B} =1,\ldots ,5$, let $X_{{\Greekmath 010B} ,\mathrm{min}}^{i}$, $ X_{{\Greekmath 010B} ,\mathrm{max}}^{i}$ denote the smallest and largest observations of the variable $X_{{\Greekmath 010B} }$ in the $i$ -th replication. The component functions $\left\{ m_{{\Greekmath 010B} }\right\} _{{\Greekmath 010B} =1}^{5}$ are estimated on equally spaced points $\left\{ x_{t}\right\} _{t=0}^{100}$ with $ 0=x_{0}<\ldots <x_{100}=1$ and the estimator of $m_{{\Greekmath 010B} }$ in the $r$-th sample\ as $\hat{m}_{\func{SBK},{\Greekmath 010B} ,r}$. 
The (mean) average squared error (ASE and MASE) are: \begin{eqnarray*} \limfunc{ASE}(\hat{m}_{\func{SBK},{\Greekmath 010B} ,r}) &=&101^{-1}\mathop{\textstyle \sum }\nolimits_{t=0}^{100}\left\{ \hat{m}_{\func{SBK},{\Greekmath 010B} ,r}(x_{t})-m_{{\Greekmath 010B} }(x_{t})\right\} ^{2}, \\ \limfunc{MASE}(\hat{m}_{\func{SBK},{\Greekmath 010B} }) &=&R^{-1}\mathop{\textstyle \sum }\nolimits_{r=1}^{R}\limfunc{ASE}(\hat{m}_{\func{SBK},{\Greekmath 010B} ,r}). \end{eqnarray*} In order to examine the efficiency of $\hat{m}_{\func{SBK},{\Greekmath 010B} }$ relative to the oracle estimator\ $\tilde{m}_{K,{\Greekmath 010B} }\left( x_{{\Greekmath 010B} }\right) $, both are computed using the same data-driven bandwidth $\hat{h} _{{\Greekmath 010B} ,\limfunc{opt}},$ described in Section 5 of [11]. Define the empirical relative efficiency of $\hat{m}_{\func{SBK},{\Greekmath 010B} }$ with respect to $\tilde{m}_{K,{\Greekmath 010B} }$ as \begin{equation*} \func{EFF}_{r}\left( \hat{m}_{\func{SBK},{\Greekmath 010B} }\right) =\left[ \frac{ \sum\nolimits_{t=0}^{100}\left\{ \tilde{m}_{K,{\Greekmath 010B} }\left( x_{t}\right) -m_{{\Greekmath 010B} }(x_{t})\right\} ^{2}}{\sum\nolimits_{t=0}^{100}\left\{ \hat{m}_{ \func{SBK},{\Greekmath 010B} ,r}(x_{t})-m_{{\Greekmath 010B} }(x_{t})\right\} ^{2}}\right] ^{1/2}. \end{equation*} EFF measures the relative efficiency of the SBK estimator to the oracle estimator. For increasing sample size, it should increase to 1 by {Theorem \ref{THM:mhat-mtilde}.} Table \ref{Tablesimu2} shows the MASEs of $\tilde{m} _{K,1}$, $\hat{m}_{\func{SBK},1}$and the average of EFFs from $1000$ replications for $n=500$, $1000$, $2000$, $4000$. It is clear that the MASEs of both SBK estimator and the oracle estimator decrease when sample sizes increase, and the SBK estimator performs as well asymptotically as the oracle estimator, see Theorem \ref{THM:mhat-mtilde}. 
\begin{table}[th] \begin{center} \begin{tabular}{ccccc} \hline\hline $r$ & $n$ & $100\times \limfunc{MASE}\left( \tilde{m}_{\limfunc{K},{\Greekmath 010B} }\right) $ & $100\times \limfunc{MASE}\left( \hat{m}_{\func{SBK},{\Greekmath 010B} }\right) $ & $\overline{\func{EFF}}\left( \hat{m}_{\func{SBK},1}\right) $ \\ \hline \multicolumn{1}{l}{$ \begin{array}{c} r_{1}=0 \\ r_{2}=0 \end{array} $} & $ \begin{array}{c} 500 \\ 1000 \\ 2000 \\ 4000 \end{array} $ & $ \begin{array}{c} 4.482 \\ 2.418 \\ 1.582 \\ 1.212 \end{array} $ & $ \begin{array}{c} 4.603 \\ 2.503 \\ 1.613 \\ 1.247 \end{array} $ & $ \begin{array}{c} 0.9501 \\ 0.9809 \\ 0.9854 \\ 0.9923 \end{array} $ \\ \hline \multicolumn{1}{l}{$ \begin{array}{c} r_{1}=0.5 \\ \multicolumn{1}{l}{r_{2}=0} \end{array} $} & $ \begin{array}{c} 500 \\ 1000 \\ 2000 \\ 4000 \end{array} $ & $ \begin{array}{c} 4.060 \\ 2.592 \\ 1.746 \\ 1.194 \end{array} $ & $ \begin{array}{c} 4.322 \\ 2.649 \\ 1.714 \\ 1.218 \end{array} $ & $ \begin{array}{c} 0.9445 \\ 0.9767 \\ 0.9832 \\ 0.9936 \end{array} $ \\ \hline \multicolumn{1}{l}{$ \begin{array}{l} r_{1}=0 \\ \multicolumn{1}{c}{r_{2}=0.5} \end{array} $} & $ \begin{array}{c} 500 \\ 1000 \\ 2000 \\ 4000 \end{array} $ & $ \begin{array}{c} 4.845 \\ 2.935 \\ 1.951 \\ 1.515 \end{array} $ & $ \begin{array}{c} 6.348 \\ 3.559 \\ 2.177 \\ 1.648 \end{array} $ & $ \begin{array}{c} 0.8827 \\ 0.8755 \\ 0.9494 \\ 0.9795 \end{array} $ \\ \hline \multicolumn{1}{l}{$ \begin{array}{c} r_{1}=0.5 \\ r_{2}=0.5 \end{array} $} & $ \begin{array}{c} 500 \\ 1000 \\ 2000 \\ 4000 \end{array} $ & $ \begin{array}{c} 5.656 \\ 2.804 \\ 1.886 \\ 1.525 \end{array} $ & $ \begin{array}{c} 7.114 \\ 3.570 \\ 2.089 \\ 1.634 \end{array} $ & $ \begin{array}{c} 0.8722 \\ 0.8951 \\ 0.9478 \\ 0.9744 \end{array} $ \\ \hline \end{tabular} \end{center} \caption{The $100\times $MASEs of $\tilde{m}_{\limfunc{K},1}$, $\hat{m}_{ \func{SBK},1}$and $\overline{\func{EFF}}$s for $n=500$, $1000$, $2000$, $4000 $ from $1000$ replications.} \label{Tablesimu2} \end{table}
To have an impression of the actual function estimates, for $r_{1}=0$, $ r_{2}=0.5$ with sample size $n=500$, $1000$, $2000$, $4000$, we have plotted the SBK estimators and their 95\% asymptotic SCCs (red solid lines), pointwise confidence intervals (red dashed lines), oracle estimators (blue dashed lines) for the true functions $m_{1}$ (thick black lines) in Figure \ref{Figuresimu2}. Here we use $r_{1}=0$ because we want to give the 95\% asymptotic SCCs, which need the observations be i.i.d to satisfy Assumption (A5'). As expected by theoretical results, the estimation is closer to the real function and the confidence band is narrower as sample size increasing. \begin{figure}\label{Figuresimu2}
\end{figure}
To compare the prediction performance of GAM and GAPLM, we first introduce the CAP and AR. For any score function $S$, define its alarm rate $F\left( s\right) =\Pr \left( S\leq s\right) $ and its hit rate $F_{\limfunc{D}}\left( s\right) =\Pr \left( S\leq s\left\vert \limfunc{D}\right. \right) $, where $\limfunc{D}$ denotes the conditioning event \textquotedblleft default". The Cumulative Accuracy Profile ($\limfunc{CAP}$) curve is defined as \begin{equation} \limfunc{CAP}\left( u\right) =F_{\limfunc{D}}\left\{ F^{-1}\left( u\right) \right\} ,u\in \left( 0,1\right) , \label{DEF:CAP} \end{equation} which is the percentage of defaulters found among the first (according to their scores) $100u\%$ of all obligors. A perfect rating method assigns the lowest scores exactly to the defaulters, so its CAP curve increases linearly and then stays at $1$; in other words, $\limfunc{CAP}_{\limfunc{P}}\left( u\right) =\min \left( u/p,1\right) ,u\in \left( 0,1\right) $, where $p$ denotes the unconditional default probability. In contrast, a noninformative rating method with zero discriminatory power yields the diagonal line $\limfunc{CAP}_{\limfunc{N}}\left( u\right) =u,u\in \left( 0,1\right) $. The CAP curve of any given scoring method $S$ lies between these two extremes and provides information about its performance.
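For concreteness, the empirical CAP curve can be computed directly from a vector of scores and the corresponding default indicators. The following R sketch uses hypothetical inputs \texttt{score} and \texttt{default} (with low scores indicating higher default risk, as in the definition above) and is not part of the supplementary package.
\begin{verbatim}
# Empirical CAP curve: fraction of all defaulters found among the
# 100u% of obligors with the lowest scores
cap_curve <- function(score, default, u = seq(0.01, 1, by = 0.01)) {
  n <- length(score)
  ord <- order(score)                 # lowest (riskiest) scores first
  d_sorted <- default[ord]
  sapply(u, function(v) sum(d_sorted[seq_len(ceiling(v * n))]) / sum(default))
}
\end{verbatim}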
The area between the CAP curve and the noninformative diagonal $\limfunc{CAP}_{\limfunc{N}}\left( u\right) \equiv u$ is denoted $a_{R}$, whereas $a_{P}$ is the area between the perfect CAP curve $\limfunc{CAP}_{\limfunc{P}}\left( u\right) $ and the noninformative diagonal $\limfunc{CAP}_{\limfunc{N}}\left( u\right) $. The quality of a CAP curve can therefore be summarized by the Accuracy Ratio (AR), the ratio of $a_{R}$ to $a_{P}$: \begin{equation*} \limfunc{AR}=\frac{a_{R}}{a_{P}}=\frac{2\int_{0}^{1}\limfunc{CAP}\left( u\right) du-1}{1-p}, \end{equation*} where $\limfunc{CAP}\left( u\right) $ is given in (\ref{DEF:CAP}). The AR takes values in $\left[ 0,1\right] $, with $0$ corresponding to noninformative scoring and $1$ to the perfect scoring method; a higher AR indicates an overall higher discriminatory power. Table \ref{Tablesimu3} shows the means and standard deviations of the ARs from $1000$ replications using $k$-fold cross-validation with $k=2,10,100$ for $r_{1}=0$, $r_{2}=0$ and $n=500$, $1000$, $2000$, $4000$. In each replication, we randomly divide the observations into $k$ equal-sized folds and, for each fold, use the remaining $k-1$ folds as the training set to predict that fold. After predictions have been obtained for all observations, we compute the CAP and AR from the formulas above; a schematic cross-validated AR computation is sketched after Table \ref{Tablesimu3}. It is clear that GAPLM has the best prediction accuracy. \begin{table}[th]
\begin{center}
\begin{tabular}{ccccc}
\hline\hline
$n$ & Method & $k=2$ & $k=10$ & $k=100$ \\ \hline
$500$ & GLM & $0.6287\left( 0.0436\right) $ & $0.6412\left( 0.0397\right) $ & $0.6438\left( 0.0390\right) $ \\
 & GAM & $0.6222\left( 0.0732\right) $ & $0.6706\left( 0.0393\right) $ & $0.6756\left( 0.0400\right) $ \\
 & GAPLM & $0.6511\left( 0.0479\right) $ & $0.6828\left( 0.0377\right) $ & $0.6861\left( 0.0391\right) $ \\ \hline
$1000$ & GLM & $0.6429\left( 0.0282\right) $ & $0.6476\left( 0.0268\right) $ & $0.6488\left( 0.0268\right) $ \\
 & GAM & $0.6735\left( 0.0438\right) $ & $0.6863\left( 0.0326\right) $ & $0.6929\left( 0.0261\right) $ \\
 & GAPLM & $0.6861\left( 0.0298\right) $ & $0.6968\left( 0.0254\right) $ & $0.7001\left( 0.0258\right) $ \\ \hline
$2000$ & GLM & $0.6474\left( 0.0204\right) $ & $0.6513\left( 0.0195\right) $ & $0.6519\left( 0.0188\right) $ \\
 & GAM & $0.6842\left( 0.0615\right) $ & $0.6984\left( 0.0286\right) $ & $0.7000\left( 0.0185\right) $ \\
 & GAPLM & $0.6984\left( 0.0204\right) $ & $0.7067\left( 0.0178\right) $ & $0.7057\left( 0.0178\right) $ \\ \hline
$4000$ & GLM & $0.6507\left( 0.0134\right) $ & $0.6522\left( 0.0136\right) $ & $0.6529\left( 0.0132\right) $ \\
 & GAM & $0.6889\left( 0.0243\right) $ & $0.6968\left( 0.0403\right) $ & $0.7079\left( 0.0164\right) $ \\
 & GAPLM & $0.7056\left( 0.0130\right) $ & $0.7110\left( 0.0124\right) $ & $0.7119\left( 0.0119\right) $ \\ \hline
\end{tabular}
\end{center}
\caption{The mean and standard deviation (in parentheses) of Accuracy Ratio (AR) values for GLM, GAM and GAPLM for $r_{1}=0$, $r_{2}=0$ from 1000 replications.
} \label{Tablesimu3} \end{table}
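As referenced above, the cross-validated AR is obtained by holding out each fold in turn, fitting the model on the remaining folds, and scoring the held-out observations. The following R sketch is a minimal illustration, not the code used for the reported results: \texttt{fit\_model} and \texttt{predict\_score} are hypothetical stand-ins for fitting and scoring with GLM, GAM or GAPLM, and \texttt{cap\_curve} is the helper sketched earlier.
\begin{verbatim}
cv_ar <- function(data, default, k, fit_model, predict_score) {
  n <- nrow(data)
  fold <- sample(rep(seq_len(k), length.out = n))   # random equal-sized folds
  score <- numeric(n)
  for (j in seq_len(k)) {
    fit <- fit_model(data[fold != j, , drop = FALSE])          # train on k-1 folds
    score[fold == j] <- predict_score(fit, data[fold == j, , drop = FALSE])
  }
  u <- seq(0.01, 1, by = 0.01)
  cap <- cap_curve(score, default, u)   # empirical CAP on a grid
  p <- mean(default)                    # unconditional default rate
  # mean(cap) approximates the integral of CAP(u) over (0,1) on the grid
  (2 * mean(cap) - 1) / (1 - p)         # AR = (2*integral(CAP) - 1)/(1 - p)
}
\end{verbatim}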
Last, to assess the estimation performance when $\mathbf{T}$ contains categorical variables, we generate data from the same model as above but add one categorical variable, i.e., $d_{1}=3$, $\mathbf{\beta }=\left( \beta _{0},\beta _{1},\beta _{2},\beta _{3}\right) ^{\top}=\left( 1,1,1,1\right) ^{\top}$, and $T_{3}\in \left\{ 0,1\right\} $ with $\Pr \left( T_{3}=1\right) =0.5$, independent of the other variables $\mathbf{T}$ and $\mathbf{X}$. Table \ref{Tablesimu4} shows the biases, variances, MSEs and EFFs of $\hat{\beta}_{3}$ for $R=1000$ replications with sample sizes $n=500,1000,2000,4000$. The results show that the estimator behaves as the asymptotic theory indicates. \begin{table}[th]
\begin{center}
\begin{tabular}{cccccc}
\hline\hline
$r$ & $n$ & $10\times \overline{\text{BIAS}}$ & $100\times \overline{\text{VARIANCE}}$ & $100\times \overline{\text{MSE}}$ & $\overline{\func{EFF}}\left( \hat{\beta}_{3}\right) $ \\ \hline
$r_{1}=0,\ r_{2}=0$ & $500$ & $1.476$ & $10.129$ & $12.309$ & $0.7634$ \\
 & $1000$ & $0.770$ & $4.437$ & $5.031$ & $0.8343$ \\
 & $2000$ & $0.448$ & $1.846$ & $2.047$ & $0.8929$ \\
 & $4000$ & $0.315$ & $0.937$ & $1.037$ & $0.9572$ \\ \hline
$r_{1}=0.5,\ r_{2}=0$ & $500$ & $1.336$ & $10.329$ & $12.115$ & $0.7445$ \\
 & $1000$ & $0.833$ & $4.221$ & $4.916$ & $0.8267$ \\
 & $2000$ & $0.423$ & $1.952$ & $2.132$ & $0.8832$ \\
 & $4000$ & $0.302$ & $0.944$ & $1.036$ & $0.9436$ \\ \hline
$r_{1}=0,\ r_{2}=0.5$ & $500$ & $1.441$ & $10.154$ & $12.231$ & $0.7556$ \\
 & $1000$ & $0.803$ & $4.446$ & $5.114$ & $0.8430$ \\
 & $2000$ & $0.489$ & $2.136$ & $2.376$ & $0.8785$ \\
 & $4000$ & $0.328$ & $0.924$ & $1.032$ & $0.9572$ \\ \hline
$r_{1}=0.5,\ r_{2}=0.5$ & $500$ & $1.475$ & $11.014$ & $13.190$ & $0.7794$ \\
 & $1000$ & $0.812$ & $4.464$ & $5.124$ & $0.8314$ \\
 & $2000$ & $0.524$ & $1.970$ & $2.245$ & $0.8852$ \\
 & $4000$ & $0.302$ & $0.966$ & $1.058$ & $0.9529$ \\ \hline
\end{tabular}
\end{center}
\caption{The means of $10\times $Bias, $100\times $Variance, $100\times $MSE and the EFFs of $\hat{\protect\beta}_{3}$ from 1000 replications.}
\label{Tablesimu4}
\end{table}
\subsection{\textbf{Example 2}}
The credit reform database, provided by the Research Data Center (RDC) of the Humboldt-Universit\"{a}t zu Berlin, was analyzed with a GAM in [11]. The data set contains $d=8$ financial ratios, listed in Table \ref{Tablevariable} (for example, Operating\_Income/Total\_Assets and log(Total\_Assets)), for 18610 solvent ($Y=0$) and 1000 insolvent ($Y=1$) German companies. The time period ranges from 1997 to 2002, and for the insolvent companies the information was gathered two years before the insolvency took place. The last annual report of a company before it went bankrupt receives the indicator $Y=1$, and the remaining (solvent) companies receive $Y=0$. In the original data set the variables are labeled $Z_{\alpha }$. To satisfy Assumption (A4) in [11], we apply the transformation $X_{i\alpha }=F_{n\alpha }\left( Z_{i\alpha }\right) $, $\alpha =1,\ldots,8$, where $F_{n\alpha }$ is the empirical cdf of the data $\left\{ Z_{i\alpha }\right\} _{i=1}^{n}$; a one-line implementation of this transform is sketched after Table \ref{Tablevariable}. See [4, 11] for more details on this data set. \begin{table}[th]
\begin{center}
\begin{tabular}{cccc}
\hline\hline
Ratio No. & Definition & Ratio No. & Definition \\ \hline
$Z_{1}$ & Net\_Income/Sales & $Z_{5}$ & Cash/Total\_Assets \\ \hline
$Z_{2}$ & Operating\_Income/Total\_Assets & $Z_{6}$ & Inventories/Sales \\ \hline
$Z_{3}$ & Ebit/Total\_Assets & $Z_{7}$ & Accounts\_Payable/Sales \\ \hline
$Z_{4}$ & Total\_Liabilities/Total\_Assets & $Z_{8}$ & log(Total\_Assets) \\ \hline\hline
\end{tabular}
\end{center}
\caption{Definitions of financial ratios.}
\label{Tablevariable}
\end{table}
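The probability-integral-type transform referenced above is straightforward: each covariate is replaced by its empirical distribution function evaluated at its own observations. A minimal R sketch, assuming a numeric matrix \texttt{Z} holding the $n\times 8$ raw ratios (hypothetical name, not part of the supplementary package):
\begin{verbatim}
# Transform each column to X = F_n(Z), the empirical cdf of that column
# evaluated at its own observations, so that X takes values in (0, 1]
X <- apply(Z, 2, function(z) ecdf(z)(z))
\end{verbatim}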
Using the GAM and the SBK method, the SCCs clearly indicate that the shape of $m_{2}\left( x_{2}\right) $ is linear: Figure \ref{creditestimation}(a) shows that a straight line is covered by the SCCs of $\hat{m}_{2}$. Figure \ref{creditestimation}(b) additionally shows the SCCs for the component function of log(Total\_Assets); these SCCs do not cover a straight line. In fact, among all $8$ financial ratios considered, only $x_{2}$ has a linear influence. To improve precision in statistical calibration and interpretability, we can therefore use the GAPLM with the parametric component $m_{2}\left( x_{2}\right) =\beta _{2}x_{2}$. \begin{figure}\label{creditestimation}
\end{figure}
For the RDC data, the in-sample $\limfunc{AR}$ value obtained from the GAPLM is $62.89\%$, which is very close to the $\limfunc{AR}$ value of $63.05\%$ obtained from the GAM in [11] and higher than the $\limfunc{AR}$ value of $60.51\%$ obtained from the SVM in [4]. To compare prediction performance, we use the AR introduced in Example 1: we randomly divide the data set into $k=2$ and $k=10$ folds and obtain the prediction for each observation using the remaining $k-1$ folds as the training set. Based on the predictions for all observations, we compute the prediction AR value. Table \ref{Tableprediction} shows the mean and standard deviation of the prediction AR values from $100$ replications. The GAPLM yields a higher prediction AR value than the GAM in $99$ of the replications when $k=2$ and in all $100$ when $k=10$. It is clear that the GAPLM has the best prediction accuracy owing to its better statistical calibration. \begin{table}[th]
\begin{center}
\begin{tabular}{ccc}
\hline\hline
 & $k=2$ & $k=10$ \\ \hline
GLM & $0.5627\left( 0.0271\right) $ & $0.5751\left( 0.00162\right) $ \\
GAM & $0.5888\left( 0.0405\right) $ & $0.6123\left( 0.00219\right) $ \\
GAPLM & $0.5928\left( 0.0408\right) $ & $0.6164\left( 0.00196\right) $ \\ \hline
\end{tabular}
\end{center}
\caption{The mean and standard deviation (in parentheses) of AR values for GLM, GAM and GAPLM for $k$-fold cross-validation with $k=2$ and $10$ from 100 replications.}
\label{Tableprediction}
\end{table}
\section{Appendix}
\label{app}
\renewcommand{\theequation}{A.\arabic{equation}} \renewcommand{\thesubsection}{A.\arabic{subsection}} \renewcommand{\thelemma}{A.\arabic{lemma}} \setcounter{equation}{0} \setcounter{subsection}{0} \setcounter{lemma}{0}
\subsection{Preliminaries}
In the proofs that follow, we use \textquotedblleft ${\mathcal{U}}$\textquotedblright\ and \textquotedblleft ${\scriptstyle \mathcal{U}}$\textquotedblright\ to denote sequences of random variables that are uniformly \textquotedblleft ${\mathcal{O}}$\textquotedblright\ and \textquotedblleft ${\scriptstyle \mathcal{O}}$\textquotedblright\ of a certain order. Denote the theoretical inner product of $b_{J}$ and $1$ with respect to the $\alpha $-th marginal density $f_{\alpha }\left( x_{\alpha }\right) $ as $c_{J,\alpha }=\left\langle b_{J}\left( X_{\alpha }\right) ,1\right\rangle =\int b_{J}\left( x_{\alpha }\right) f_{\alpha }\left( x_{\alpha }\right) dx_{\alpha }$ and define the centered B-spline basis $b_{J,\alpha }\left( x_{\alpha }\right) $ and the standardized B-spline basis $B_{J,\alpha }\left( x_{\alpha }\right) $ as \begin{equation*} b_{J,\alpha }\left( x_{\alpha }\right) =b_{J}\left( x_{\alpha }\right) -\frac{c_{J,\alpha }}{c_{J-1,\alpha }}b_{J-1}\left( x_{\alpha }\right) ,\quad B_{J,\alpha }\left( x_{\alpha }\right) =\frac{b_{J,\alpha }\left( x_{\alpha }\right) }{\left\Vert b_{J,\alpha }\right\Vert _{2}},\quad 1\leq J\leq N+1, \end{equation*} so that ${\mbox{\rm E}} B_{J,\alpha }\left( X_{\alpha }\right) =0$, ${\mbox{\rm E}} B_{J,\alpha }^{2}\left( X_{\alpha }\right) =1$. Theorem A.2 in [20] shows that under Assumptions (A1)-(A5) and (A7), there exist constants $c_{0}\left( f\right) $, $C_{0}(f)$, $c_{1}\left( f\right) $ and $C_{1}(f)$, depending on the marginal densities $f_{\alpha }\left( x_{\alpha }\right) ,1\leq \alpha \leq d$, such that $c_{0}\left( f\right) H\leq c_{J,\alpha }\leq C_{0}\left( f\right) H$ and \begin{equation} c_{1}\left( f\right) H\leq \left\Vert b_{J,\alpha }\right\Vert _{2}^{2}\leq C_{1}(f)H. \label{EQ: bJalpha} \end{equation}
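As a computational illustration of the centered and standardized basis above, the following R sketch builds a linear (order-2) B-spline basis on $[0,1]$ with the \texttt{splines} package and then centers and standardizes it empirically; the theoretical inner products $c_{J,\alpha }$ and norms $\left\Vert b_{J,\alpha }\right\Vert _{2}$ are replaced by sample analogues, and the spline order and knot placement are illustrative assumptions rather than the exact settings used in the paper.
\begin{verbatim}
library(splines)

spline_basis <- function(x, N) {
  # order-2 (linear) B-splines with N equally spaced interior knots on [0,1]
  knots <- c(rep(0, 2), seq(0, 1, length.out = N + 2)[-c(1, N + 2)], rep(1, 2))
  b <- splineDesign(knots, x, ord = 2)   # n x (N + 2) raw basis functions b_J
  cJ <- colMeans(b)                      # empirical analogue of c_J
  # centered basis: b_J - (c_J / c_{J-1}) * b_{J-1}, giving N + 1 columns
  bc <- b[, -1] - sweep(b[, -ncol(b), drop = FALSE], 2,
                        cJ[-1] / cJ[-length(cJ)], "*")
  # standardized basis: divide each column by its empirical L2 norm
  sweep(bc, 2, sqrt(colMeans(bc^2)), "/")
}

B <- spline_basis(runif(500), N = 5)     # 500 x (N + 1) design matrix
\end{verbatim}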
\begin{lemma} \label{LEM:splineapprox}([1], p.149) For any $m\in C^{1}\left[ 0,1\right] $ with $m^{\prime }\in \limfunc{Lip}\left( \left[ 0,1\right] ,C_{\infty }\right) $, there exist a constant $C_{\infty }>0$ and a function $g\in G_{n}^{\left( 0\right) }\left[ 0,1\right] $ such that $\left\Vert g-m\right\Vert _{\infty }\leq C_{\infty }H^{2}.$ \end{lemma}
\subsection{Oracle estimators}
\textsc{Proof of Theorem \ref{THM:betatilde-beta}. }(i) According to the Mean Value Theorem, there exists a vector $\mathbf{\bar{\beta}}$ between $\mathbf{\beta }$ and $\mathbf{\tilde{\beta}}$ such that $\left( \mathbf{\tilde{\beta}}-\mathbf{\beta }\right) \nabla ^{2}\tilde{\ell}_{\mathbf{\beta }}\left( \mathbf{\bar{\beta}}\right) =\nabla \tilde{\ell}_{\mathbf{\beta }}\left( \mathbf{\tilde{\beta}}\right) -\nabla \tilde{\ell}_{\mathbf{\beta }}\left( \mathbf{\beta }\right) =-\nabla \tilde{\ell}_{\mathbf{\beta }}\left( \mathbf{\beta }\right) $, since $\nabla \tilde{\ell}_{\mathbf{\beta }}\left( \mathbf{\tilde{\beta}}\right) =\mathbf{0}$ for the infeasible estimator $\mathbf{\tilde{\beta}}=\func{argmax}_{\mathbf{a}\in \mathbb{R}^{1+d_{1}}}\tilde{\ell}_{\mathbf{\beta }}\left( \mathbf{a}\right) $, where \begin{equation*} -\nabla ^{2}\tilde{\ell}_{\mathbf{\beta }}\left( \mathbf{\bar{\beta}}\right) =n^{-1}\sum\nolimits_{i=1}^{n}b^{\prime \prime }\left\{ \mathbf{\bar{\beta}}^{\top}\mathbf{T}_{i}+m\left( \mathbf{X}_{i}\right) \right\} \mathbf{T}_{i}\mathbf{T}_{i}^{\top }>c_{b}c_{\mathbf{Q}}\mathbf{I}_{d_{1}\times d_{1}} \end{equation*} with $c_{b}>0$ according to (A2). Next, \begin{equation*} \nabla \tilde{\ell}_{\mathbf{\beta }}\left( \mathbf{\beta }\right) =n^{-1}\sum\nolimits_{i=1}^{n}\left[ Y_{i}\mathbf{T}_{i}-b^{\prime }\left\{ \mathbf{\beta }^{\top }\mathbf{T}_{i}+m\left( \mathbf{X}_{i}\right) \right\} \mathbf{T}_{i}\right] =n^{-1}\sum\nolimits_{i=1}^{n}\sigma \left( \mathbf{T}_{i},\mathbf{X}_{i}\right) \varepsilon _{i}\mathbf{T}_{i}. \end{equation*} We have $\left\vert n^{-1}\sum\nolimits_{i=1}^{n}\sigma \left( \mathbf{T}_{i},\mathbf{X}_{i}\right) \varepsilon _{i}\mathbf{T}_{i}\right\vert ={\mathcal{O}}_{a.s}\left( n^{-1/2}\log n\right) $ by Bernstein's inequality, as in Lemma A.2 of [11], so \begin{equation*} \left\vert \mathbf{\tilde{\beta}}-\mathbf{\beta }\right\vert ={\mathcal{O}}_{a.s.}\left( n^{-1/2}\log n\right) \end{equation*} according to $\mathbf{\tilde{\beta}}-\mathbf{\beta }=-\left\{ \nabla ^{2}\tilde{\ell}_{\mathbf{\beta }}\left( \mathbf{\bar{\beta}}\right) \right\} ^{-1}\nabla \tilde{\ell}_{\mathbf{\beta }}\left( \mathbf{\beta }\right) $. Moreover, \begin{equation*} \nabla ^{2}\tilde{\ell}_{\mathbf{\beta }}\left( \mathbf{\bar{\beta}}\right) \overset{a.s.}{\rightarrow }\nabla ^{2}\tilde{\ell}_{\mathbf{\beta }}\left( \mathbf{\beta }\right) =-n^{-1}\sum_{i=1}^{n}b^{\prime \prime }\left\{ \mathbf{\beta }^{\top }\mathbf{T}_{i}+m\left( \mathbf{X}_{i}\right) \right\} \mathbf{T}_{i}\mathbf{T}_{i}^{\top }, \end{equation*} which converges to $-{\mbox{\rm E}} b^{\prime \prime }\left\{ m\left( \mathbf{T},\mathbf{X}\right) \right\} \mathbf{TT}^{\top }$ almost surely at the rate $n^{-1/2}\log n$.
So \begin{equation*} \left\vert \mathbf{\tilde{\beta}}-\mathbf{\beta }-\left[ {\mbox{\rm E}} b^{\prime \prime }\left\{ m\left( \mathbf{T},\mathbf{X}\right) \right\} \mathbf{TT}^{\top }\right] ^{-1}n^{-1}\sum\nolimits_{i=1}^{n}\sigma \left( \mathbf{T}_{i},\mathbf{X}_{i}\right) \varepsilon _{i}\mathbf{T}_{i}\right\vert ={\mathcal{O}}_{a.s.}\left( n^{-1}\left( \log n\right) ^{2}\right) . \end{equation*} Since $n^{-1/2}\sum\nolimits_{i=1}^{n}\sigma \left( \mathbf{T}_{i},\mathbf{X}_{i}\right) \varepsilon _{i}\mathbf{T}_{i}$ is asymptotically normal by the central limit theorem, Theorem \ref{THM:betatilde-beta} (i) follows from the above expansion and Slutsky's theorem.
(ii) The proof follows directly from the properties of the empirical likelihood ratio for generalized linear models; see Theorem 3.2 in [15] and Corollary 1 in [7].
$\square $
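As a computational aside to the oracle estimator used in this proof: maximizing $\tilde{\ell}_{\mathbf{\beta }}$ with the additive component $m\left( \mathbf{X}\right) $ treated as known is simply a generalized linear model fit with an offset. A minimal R sketch for a Bernoulli response with the canonical logit link (hypothetical data frame \texttt{dat} with response \texttt{Y}, linear covariates \texttt{T1}, \texttt{T2}, and a vector \texttt{m\_known} holding $m\left( \mathbf{X}_{i}\right) $; an illustration only, not the code behind the reported results):
\begin{verbatim}
# Infeasible ("oracle") parametric part: beta maximizes the likelihood
# with the nonparametric additive component entered as a known offset
oracle_fit <- glm(Y ~ T1 + T2, family = binomial(link = "logit"),
                  data = dat, offset = m_known)
coef(oracle_fit)   # oracle estimate of (beta_0, beta_1, beta_2)
\end{verbatim}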
\subsection{Spline-backfitted kernel estimators}
In this section, we present the proofs of Theorems \ref{THM:mhat-mtilde}, \ref{THM:bands} and \ref{THM:betahat-beta}. We write any $g\in G_{n}^{0}$ as $g=\mathbf{{\Greekmath 0115} }^{\top}\mathbf{B}\left( \mathbf{X}_{i}\right) $ with vector $\mathbf{{\Greekmath 0115} }_{g}\mathbf{=}\left( {\Greekmath 0115} _{J,{\Greekmath 010B} }\right) _{1\leq J\leq N+1,1\leq {\Greekmath 010B} \leq d_{2}}^{\top}\in \mathbb{R}^{\left( N+1\right) d_{2}}$ is the dimension of the additive spline space $G_{n}^{0}$ , and \begin{equation*} \mathbf{B}\left( \mathbf{x}\right) =\left\{ B_{1,1}\left( x_{1}\right) ,\ldots,B_{N+1,1}\left( x_{1}\right) ,\ldots,B_{1,d_{2}}\left( x_{d_{2}}\right), \ldots,B_{N+1,d_{2}}\left( x_{d_{2}}\right) \right\} ^{\top}. \end{equation*} Denote $\mathbf{B}\left( \mathbf{t},\mathbf{x}\right) =\left\{ 1,t_{1},\ldots,t_{d_{1}},B_{1,1}\left( x_{1}\right) ,\ldots,B_{N+1,1}\left( x_{1}\right) ,\ldots,B_{1,d_{2}}\left( x_{d_{2}}\right),\ldots,B_{N+1,d_{2}}\left( x_{d_{2}}\right) \right\} ^{\top} $, \newline $\mathbf{{\Greekmath 0115} =}\left( \mathbf{\mathbf{{\Greekmath 0115} }_{{\Greekmath 010C} }^{\top}},\mathbf{ \mathbf{{\Greekmath 0115} }_{g}^{\top}}\right) ^{\top}\mathbf{=}\left( {\Greekmath 0115} _{0},{\Greekmath 0115} _{k},{\Greekmath 0115} _{J,{\Greekmath 010B} }\right) _{1\leq J\leq N+1,1\leq {\Greekmath 010B} \leq d_{2},1\leq k\leq d_{1}}^{\top}\in \mathbb{R}^{N_{d}}$ with $ N_{d}=1+d_{1}+\left( N+1\right) d_{2}$ and \begin{equation*} \hat{L}\left( \mathbf{\mathbf{{\Greekmath 0115} }_{{\Greekmath 010C} },}g\right) =\hat{L}\left( \mathbf{{\Greekmath 0115} }\right) =n^{-1}\sum_{i=1}^{n}\left[ Y_{i}\left\{ \mathbf{ {\Greekmath 0115} }^{\top}\mathbf{B}\left( \mathbf{T}_{i},\mathbf{X}_{i}\right) \right\} -b\left\{ \mathbf{{\Greekmath 0115} }^{\top}\mathbf{B}\left( \mathbf{T}_{i}, \mathbf{X}_{i}\right) \right\} \right] , \end{equation*} which yields the gradient and Hessian formulae \begin{eqnarray*} {\Greekmath 0272} \hat{L}\left( \mathbf{{\Greekmath 0115} }\right) &=&n^{-1}\mathop{\textstyle \sum }\nolimits_{i=1}^{n}\left[ Y_{i}\mathbf{B}\left( \mathbf{T}_{i}, \mathbf{X}_{i}\right) -b^{\prime }\left\{ \mathbf{{\Greekmath 0115} }^{\top}\mathbf{B} \left( \mathbf{T}_{i},\mathbf{X}_{i}\right) \right\} \mathbf{B}\left( \mathbf{T}_{i},\mathbf{X}_{i}\right) \right] , \\ {\Greekmath 0272} ^{2}\hat{L}\left( \mathbf{{\Greekmath 0115} }\right) &=&-n^{-1}\mathop{\textstyle \sum }\nolimits_{i=1}^{n}b^{\prime \prime }\left\{ \mathbf{{\Greekmath 0115} } ^{\top}\mathbf{B}\left( \mathbf{T}_{i},\mathbf{X}_{i}\right) \right\} \mathbf{B}\left( \mathbf{T}_{i},\mathbf{X}_{i}\right) \mathbf{B}\left( \mathbf{T}_{i},\mathbf{X}_{i}\right) ^{\top}. 
\end{eqnarray*} The multivariate function $m\left( \mathbf{t,x}\right) $ is estimated by \begin{eqnarray*} \hat{m}\left( \mathbf{t,x}\right) &=&\hat{{\Greekmath 010C}}_{0}+\mathop{\textstyle \sum } \nolimits_{k=1}^{d_{1}}\hat{{\Greekmath 010C}}_{k}t_{k}+\mathop{\textstyle \sum }\nolimits_{{\Greekmath 010B} =1}^{d_{2}}\hat{m}_{{\Greekmath 010B} }\left( x_{{\Greekmath 010B} }\right) =\mathbf{\hat{{\Greekmath 0115}}} ^{\top}\mathbf{B}\left( \mathbf{t},\mathbf{x}\right) , \\ \mathbf{\hat{{\Greekmath 0115}}} &\mathbf{=}&\mathbf{\left( \mathbf{\mathbf{\hat{ {\Greekmath 0115}}}_{{\Greekmath 010C} }^{\top}},\mathbf{\mathbf{\hat{{\Greekmath 0115}}}_{g}^{\top}}\right) }^{^{\top}}\mathbf{=}\left( \mathbf{\hat{{\Greekmath 010C}}}^{\top},\mathbf{\mathbf{\hat{ {\Greekmath 0115}}}_{g}^{\top}}\right) ^{\top}\mathbf{=}\left( \hat{{\Greekmath 010C}}_{k},\hat{ {\Greekmath 0115}}_{J,{\Greekmath 010B} }\right) _{0\leq k\leq d_{1},1\leq {\Greekmath 010B} \leq d_{2},1\leq J\leq N+1}^{\top}=\func{argmax}_{\mathbf{{\Greekmath 0115} }}\hat{L}\left( \mathbf{{\Greekmath 0115} }\right) . \end{eqnarray*} Lemma 14 of Stone (1986) ensures that with probability approaching $1$, $ \mathbf{\hat{{\Greekmath 0115}}}$ exists uniquely and that ${\Greekmath 0272} \hat{L}\left( \mathbf{\hat{{\Greekmath 0115}}}\right) \mathbf{=0}$. In addition, Lemma \ref {LEM:splineapprox} and (A1) provide a vector $\mathbf{\bar{{\Greekmath 0115}}=}\left( \mathbf{{\Greekmath 010C} }^{\top},\mathbf{\mathbf{\bar{{\Greekmath 0115}}}_{g}^{\top}}\right) ^{\top}$ and an additive spline function $\bar{m}$ such that \begin{equation} \bar{m}\left( \mathbf{x}\right) =\mathbf{\mathbf{\bar{{\Greekmath 0115}}}_{g}^{\top}B} \left( \mathbf{x}\right) ,\left\Vert \bar{m}-m\right\Vert _{\infty }\leq C_{\infty }H^{2}. \label{DEF: mbar lamdabar} \end{equation} We first establish technical lemmas before proving Theorems \ref {THM:mhat-mtilde} and \ref{THM:betahat-beta}.
\begin{lemma} \label{LEM:Ltildederiv}Under Assumptions (A1)-(A6) and (A8), as $n\rightarrow \infty $, \begin{eqnarray*} \left\vert \nabla \hat{L}\left( \mathbf{\bar{\lambda}}\right) \right\vert &=&{\mathcal{O}}_{a.s.}\left( H^{2}+n^{-1/2}\log n\right) , \\ \left\Vert \nabla \hat{L}\left( \mathbf{\bar{\lambda}}\right) \right\Vert &=&{\mathcal{O}}_{a.s.}\left( H^{3/2}+H^{-1/2}n^{-1/2}\log n\right) . \end{eqnarray*} \end{lemma}
\noindent \textsc{Proof. }See supplement. $\square $
Define the following matrices: \begin{eqnarray*} \mathbf{V} &=&{\mbox{\rm E}}\mathbf{B}\left( \mathbf{T,X}\right) \mathbf{B}\left( \mathbf{T,X}\right) ^{\top},\mathbf{S}=\mathbf{V}^{-1}, \\ \mathbf{V}_{n} &=&n^{-1}\mathop{\textstyle \sum }\nolimits_{i=1}^{n}\mathbf{B}\left( \mathbf{T} _{i},\mathbf{X}_{i}\right) \mathbf{B}\left( \mathbf{T}_{i},\mathbf{X} _{i}\right) ^{\top},\mathbf{S}_{n}=\mathbf{V}_{n}^{-1}, \end{eqnarray*} \begin{equation*} \mathbf{V}_{b}={\mbox{\rm E}} b^{\prime \prime }\left\{ m\left( \mathbf{T,X}\right) \right\} \mathbf{B}\left( \mathbf{T,X}\right) \mathbf{B}\left( \mathbf{T,X} \right) ^{\top}=\left[ \begin{array}{ccc} v_{b,00} & v_{b,0,k} & v_{b,0,J,{\Greekmath 010B} } \\ v_{b,0,k^{\prime }} & v_{b,k,k^{\prime }} & v_{b,J,{\Greekmath 010B} ,k^{\prime }} \\ v_{b,0,J^{\prime },{\Greekmath 010B} ^{\prime }} & v_{b,J^{\prime },{\Greekmath 010B} ^{\prime },k} & v_{b,J,{\Greekmath 010B} ,J^{\prime },{\Greekmath 010B} ^{\prime }} \end{array} \right] _{N_{d}\times N_{d}} \end{equation*} where $N_{d}=\left( N+1\right) d_{2}+1+d_{1}$, and \begin{equation} \mathbf{S}_{b}=\mathbf{V}_{b}^{-1}=\left[ \begin{array}{ccc} s_{b,00} & s_{b,0,k} & s_{b,0,J,{\Greekmath 010B} } \\ s_{b,0,k^{\prime }} & s_{b,k,k^{\prime }} & s_{b,J,{\Greekmath 010B} ,k^{\prime }} \\ s_{b,0,J^{\prime },{\Greekmath 010B} ^{\prime }} & s_{b,J^{\prime },{\Greekmath 010B} ^{\prime },k} & _{b,J,{\Greekmath 010B} ,J^{\prime },{\Greekmath 010B} ^{\prime }} \end{array} \right] _{N_{d}\times N_{d}}, \label{DEF:VbSb} \end{equation} For any vector $\mathbf{{\Greekmath 0115} }\in \mathbb{R}^{N_{d}}$, denote \begin{equation*} \mathbf{V}_{b}\left( \mathbf{{\Greekmath 0115} }\right) ={\mbox{\rm E}} b^{\prime \prime }\left\{ \mathbf{{\Greekmath 0115} }^{\top}\mathbf{B}\left( \mathbf{T,X}\right) \right\} \mathbf{B}\left( \mathbf{T,X}\right) \mathbf{B}\left( \mathbf{T,X}\right) ^{\top},\mathbf{S}_{b}\left( \mathbf{{\Greekmath 0115} }\right) =\mathbf{V} _{b}^{-1}\left( \mathbf{{\Greekmath 0115} }\right) \end{equation*} \begin{equation} \mathbf{V}_{n,b}\left( \mathbf{{\Greekmath 0115} }\right) =-{\Greekmath 0272} ^{2}\hat{L}\left( \mathbf{{\Greekmath 0115} }\right) ,\mathbf{S}_{n,b}\left( \mathbf{{\Greekmath 0115} }\right) = \mathbf{V}_{n,b}^{-1}\left( \mathbf{{\Greekmath 0115} }\right) . \label{DEF:VbSblamda} \end{equation}
\begin{lemma} \label{LEM:gamsinv}Under Assumptions (A2) and (A4), \begin{eqnarray*} c_{\mathbf{V}}\mathbf{I}_{N_{d}} &\leq &\mathbf{V}\leq C_{\mathbf{V}}\mathbf{I}_{N_{d}},\quad c_{\mathbf{S}}\mathbf{I}_{N_{d}}\leq \mathbf{S}\leq C_{\mathbf{S}}\mathbf{I}_{N_{d}}, \\ c_{\mathbf{V,}b}\mathbf{I}_{N_{d}} &\leq &\mathbf{V}_{b}\leq C_{\mathbf{V,}b}\mathbf{I}_{N_{d}},\quad c_{\mathbf{S,}b}\mathbf{I}_{N_{d}}\leq \mathbf{S}_{b}\leq C_{\mathbf{S,}b}\mathbf{I}_{N_{d}}. \end{eqnarray*} Under Assumptions (A2), (A4), (A5) and (A8), as $n\rightarrow \infty $, with probability increasing to $1$, \begin{eqnarray*} c_{\mathbf{V}}\mathbf{I}_{N_{d}} &\leq &\mathbf{V}_{n}\left( \mathbf{\lambda }\right) \leq C_{\mathbf{V}}\mathbf{I}_{N_{d}},\quad c_{\mathbf{S}}\mathbf{I}_{N_{d}}\leq \mathbf{S}_{n}\left( \mathbf{\lambda }\right) \leq C_{\mathbf{S}}\mathbf{I}_{N_{d}}, \\ c_{\mathbf{V,}b}\mathbf{I}_{N_{d}} &\leq &\mathbf{V}_{n,b}\left( \mathbf{\lambda }\right) \leq C_{\mathbf{V,}b}\mathbf{I}_{N_{d}},\quad c_{\mathbf{S,}b}\mathbf{I}_{N_{d}}\leq \mathbf{S}_{n,b}\left( \mathbf{\lambda }\right) \leq C_{\mathbf{S,}b}\mathbf{I}_{N_{d}}. \end{eqnarray*} \end{lemma}
\noindent \textsc{Proof.} This follows from Lemma A.7 in [12] and the boundedness of the function $b^{\prime }$.
$\square $
Define three vectors $\mathbf{\Phi }_{b},\mathbf{\Phi }_{v},\mathbf{\Phi } _{r}$ as \begin{eqnarray*} \mathbf{\Phi }_{b} &=&\left( \Phi _{b,J,{\Greekmath 010B} }\right) _{0\leq k\leq d_{1},1\leq {\Greekmath 010B} \leq d_{2},1\leq J\leq N+1}^{\top} \\ &=&-\mathbf{S}_{b}n^{-1}\mathop{\textstyle \sum }\nolimits_{i=1}^{n}\left[ b^{\prime }\left\{ m\left( \mathbf{T}_{i},\mathbf{X}_{i}\right) \right\} -b^{\prime }\left\{ \bar{m}\left( \mathbf{T}_{i},\mathbf{X}_{i}\right) \right\} \right] \mathbf{B }\left( \mathbf{T}_{i},\mathbf{X}_{i}\right) , \end{eqnarray*} \begin{eqnarray*} \mathbf{\Phi }_{v} &=&\left( \Phi _{v,J,{\Greekmath 010B} }\right) _{0\leq k\leq d_{1},1\leq {\Greekmath 010B} \leq d_{2},1\leq J\leq N+1}^{\top} \\ &=&-\mathbf{S}_{b}n^{-1}\mathop{\textstyle \sum }\nolimits_{i=1}^{n}\left[ {\Greekmath 011B} \left( \mathbf{ T}_{i},\mathbf{X}_{i}\right) {\Greekmath 0122} _{i}\right] \mathbf{B}\left( \mathbf{T}_{i},\mathbf{X}_{i}\right) , \\ \mathbf{\Phi }_{r} &=&\left( \Phi _{r,J,{\Greekmath 010B} }\right) _{0\leq k\leq d_{1},1\leq {\Greekmath 010B} \leq d_{2},1\leq J\leq N+1}^{\top} \\ &=&\mathbf{\mathbf{\hat{{\Greekmath 0115}}}-\mathbf{\bar{{\Greekmath 0115}}}-\Phi }_{b}-\mathbf{ \Phi }_{v}. \end{eqnarray*}
\begin{lemma} \label{LEM: lamda phai}Under Assumptions (A1)-(A6) and (A8), as $n\rightarrow \infty $, \begin{eqnarray} \left\Vert \mathbf{\hat{\lambda}}-\mathbf{\bar{\lambda}}\right\Vert &=&{\mathcal{O}}_{a.s.}\left( H^{3/2}+H^{-1/2}n^{-1/2}\log n\right) , \label{EQ:lamdahat-lamdabar} \\ \left\Vert \mathbf{\Phi }_{r}\right\Vert &=&{\mathcal{O}}_{p}\left( H^{-3/2}n^{-1}\log n\right) , \label{EQ:Phi_rbd} \\ \left\Vert \mathbf{\Phi }_{b}\right\Vert &=&{\mathcal{O}}_{a.s.}\left( H^{2}\right) ,\quad \left\Vert \mathbf{\Phi }_{v}\right\Vert ={\mathcal{O}}_{a.s.}\left( H^{-1/2}n^{-1/2}\log n\right) . \notag \end{eqnarray} \end{lemma}
\noindent \textsc{Proof. }See supplement. $\square $
\begin{lemma} \label{LEM:mhat-m}Under Assumptions (A1)-(A6) and (A8), as $n\rightarrow \infty $ \begin{eqnarray*} \left\Vert \hat{m}-\bar{m}\right\Vert _{2,n}+\left\Vert \hat{m}-\bar{m} \right\Vert _{2} &=&{\mathcal{O}}_{a.s.}\left( H^{3/2}+H^{-1/2}n^{-1/2}\log n\right) , \\ \left\Vert \hat{m}-m\right\Vert _{2,n}+\left\Vert \hat{m}-m\right\Vert _{2} &=&{\mathcal{O}}_{a.s.}\left( H^{3/2}+H^{-1/2}n^{-1/2}\log n\right) . \end{eqnarray*} \end{lemma}
\noindent \textsc{Proof.} Lemma \ref{LEM:gamsinv} implies \begin{eqnarray*} \left\Vert \hat{m}-\bar{m}\right\Vert _{2,n}+\left\Vert \hat{m}-\bar{m}\right\Vert _{2} &\leq &2C_{\mathbf{V}}\left\Vert \mathbf{\hat{\lambda}}_{g}-\mathbf{\bar{\lambda}}_{g}\right\Vert \\ &=&{\mathcal{O}}_{a.s.}\left( H^{3/2}+H^{-1/2}n^{-1/2}\log n\right) . \end{eqnarray*} The lemma then follows from $\left\Vert \bar{m}-m\right\Vert _{\infty }+\left\Vert \bar{m}-m\right\Vert _{2}+\left\Vert \bar{m}-m\right\Vert _{2,n}={\mathcal{O}}\left( H^{2}\right) $, which holds by (\ref{DEF: mbar lamdabar}).
$\square $
\noindent \textsc{Proof of Theorem \ref{THM:mhat-mtilde}.} According to (\ref{DEF:lhat}) and the Mean Value Theorem, there exists $\bar{m}_{\limfunc{K},1}\left( x_{1}\right) $ between $\hat{m}_{\func{SBK},1}\left( x_{1}\right) $ and $\tilde{m}_{\limfunc{K},1}\left( x_{1}\right) $ such that \begin{equation*} \hat{\ell}_{m_{1}}^{\prime }\left\{ \hat{m}_{\func{SBK},1}\left( x_{1}\right) ,x_{1}\right\} -\hat{\ell}_{m_{1}}^{\prime }\left\{ \tilde{m}_{\limfunc{K},1}\left( x_{1}\right) ,x_{1}\right\} =\hat{\ell}_{m_{1}}^{\prime \prime }\left\{ \bar{m}_{\limfunc{K},1}\left( x_{1}\right) ,x_{1}\right\} \left\{ \hat{m}_{\func{SBK},1}\left( x_{1}\right) -\tilde{m}_{\limfunc{K},1}\left( x_{1}\right) \right\} . \end{equation*} Since $\hat{\ell}_{m_{1}}^{\prime }\left\{ \hat{m}_{\func{SBK},1}\left( x_{1}\right) ,x_{1}\right\} =0$, one has \begin{equation*} \hat{m}_{\func{SBK},1}\left( x_{1}\right) -\tilde{m}_{\limfunc{K},1}\left( x_{1}\right) =-\frac{\hat{\ell}_{m_{1}}^{\prime }\left\{ \tilde{m}_{\limfunc{K},1}\left( x_{1}\right) ,x_{1}\right\} }{\hat{\ell}_{m_{1}}^{\prime \prime }\left\{ \bar{m}_{\limfunc{K},1}\left( x_{1}\right) ,x_{1}\right\} }. \end{equation*} The theorem then follows from Lemmas A.15 and A.16 in [11] with minor modifications to account for the variable $\mathbf{T}$. $\square $
\noindent \textsc{Proof of Theorem \ref{THM:bands}.} It follows from Theorem \ref{THM:mhat-mtilde} and the same arguments as in the proof of Theorem 1 in [25].
$\square $
\noindent \textsc{Proof of Theorem \ref{THM:betahat-beta}. }See supplement. $\square $
\section{SUPPLEMENTARY MATERIALS}
\noindent \textbf{Supplement to \textquotedblleft Statistical Inference for Generalized Additive Partially Linear Model\textquotedblright }: The supplement contains the proofs of Lemmas A.2 and A.4 and of Theorem 4 referenced in the main article.
\noindent \textbf{gaplmsbk.R}: R package containing code to perform SBK estimation of the component functions in the generalized additive partially linear model, available at https://github.com.
\begin{thebibliography}{99} \bibitem{D01} de Boor, C. (2001), \textit{A Practical Guide to Splines}, Springer-Verlag, New York.
\bibitem{H1989} H\"{a}rdle, W. (1989), \textquotedblleft Asymptotic Maximal Deviation of M-smoothers,\textquotedblright\ \textit{Journal of Multivariate Analysis}, 29, 163--179.
\bibitem{HMM1998} H\"{a}rdle, W., Mammen, E., and M\"{u}ller, M. (1998), \textquotedblleft Testing Parametric versus Semiparametric Modelling in Generalized Linear Models,\textquotedblright\ \textit{Journal of the American Statistical Association}, 93, 1461--1474.
\bibitem{HMM2010} H\"{a}rdle, W., Hoffmann, L., and\ Moro, R. (2011), \textquotedblleft Learning Machines Supporting Bankruptcy Prediction,\textquotedblright\ \textit{Statistical Tools in Finance and Insurance} (2nd ed.), Cizek, H\"{a}rdle, Weron, Springer Verlag.
\bibitem{HH2016} H\"{a}rdle, W. and Huang, L. (2013), \textquotedblleft Analysis of Deviance in Generalized Partially Linear Models,\textquotedblright\ \textit{Journal of Business and Economic Statistics, }resubmit, available at https://sfb649.wiwi.hu-berlin.de/papers/pdf/SFB649DP2013-028.pdf.
\bibitem{HT90} Hastie, T. J., and Tibshirani, R. J. (1990), \textit{ Generalized Additive Models}. Chapman and Hall, London.
\bibitem{K1994} Kolaczyk, E. (1994), \textquotedblleft Empirical Likelihood for Generalized Linear Models,\textquotedblright\ \textit{Statistica Sinica} , 4, 199--218.
\bibitem{L2009} Liang, H., Qin, Y., Zhang, X. and Ruppert, D. (2009), \textquotedblleft Empirical-Likelihood-Based Inferences for Generalized Partially Linear Models,\textquotedblright\ \textit{Scandinavian Journal of Statistics}, 36, 433--443.
\bibitem{LN95} Linton, O. B., and Nielsen, J. P. (1995), \textquotedblleft A Kernel Method of Estimating Structured Nonparametric Regression based on Marginal Integration,\textquotedblright\ \textit{Biometrika}, 82, 93--100.
\bibitem{LY10} Liu, R., and Yang, L. (2010), \textquotedblleft Spline-Backfitted Kernel Smoothing of Additive Coefficient Model,\textquotedblright\ \textit{Econometric Theory}, 26, 29--59.
\bibitem{LYH13} Liu, R., Yang, L., and H\"{a}rdle, W. (2013), \textquotedblleft Oracally Efficient Two-step Estimation of Generalized Additive Model,\textquotedblright\ \textit{Journal of the American Statistical Association}, 108, 619--631.
\bibitem{MY11} Ma, S., and Yang, L. (2011), \textquotedblleft Spline-Backfitted Kernel Smoothing of Partially Linear Additive Model,\textquotedblright\ \textit{Journal of Statistical Planning and Inference}, 141, 204--219.
\bibitem{MCLX2015} Ma, S., Carroll, R. J., Liang, H. and Xu, S. (2015a), \textquotedblleft Estimation and inference in generalized additive coefficient models for nonlinear interactions with high-dimensional covariates,\textquotedblright\ \textit{The Annals of Statistics}, 43, 2102--2131.
\bibitem{MR15} Ma, S., Racine, S. and Yang, L. (2015b), \textquotedblleft Spline Regression in the Presence of Categorical Predictors,\textquotedblright\ \textit{Journal of Applied Econometrics}, 30, 705--717.
\bibitem{OWEN2001} Owen, A. (2001), \textit{Empirical likelihood}. Chapman \& Hall/Crc, London.
\bibitem{PMHB09} Park, B., Mammen, E., H\"{a}rdle, W., and Borak, S. (2009), \textquotedblleft Time Series Modelling with Semiparametric Factor Dynamics,\textquotedblright\ \textit{Journal of the American Statistical Association}, 104, 284--298.
\bibitem{SS1994} Severini, T., and Staniswalis, J. (1994), \textquotedblleft Quasi-Likelihood Estimation in Semiparametric Models,\textquotedblright\ \textit{Journal of the American Statistical Association}, 89, 501--511.
\bibitem{S85} Stone, C. J. (1985), \textquotedblleft Additive Regression and Other Nonparametric Models,\textquotedblright\ \textit{The Annals of Statistics}, 13, 689--705.
\bibitem{S86} Stone, C. J. (1986), \textquotedblleft The Dimensionality Reduction Principle for Generalized Additive Models,\textquotedblright\ \textit{The Annals of Statistics}, 14, 590--606.
\bibitem{WY07} Wang, L., and Yang, L. (2007), \textquotedblleft Spline-Backfitted Kernel Smoothing of Nonlinear Additive Autoregression Model,\textquotedblright\ \textit{The Annals of Statistics}, 35, 2474--2503.
\bibitem{WL11} Wang, L., Liu X., Liang, H., and Carroll, R. J. (2011), \textquotedblleft Estimation and Variable Selection for Generalized Additive Partial Linear Models,\textquotedblright\ \textit{The Annals of Statistics}, 39, 1827--1851.
\bibitem{XY06a} Xue, L., and Yang, L. (2006), \textquotedblleft Additive Coefficient Modeling via Polynomial Spline,\textquotedblright\ \textit{ Statistica Sinica}, 16, 1423--1446.
\bibitem{XL10} Xue, L., and Liang, H. (2010), \textquotedblleft Polynomial Spline Estimation for a Generalized Additive Coefficient Model,\textquotedblright\ \textit{Scandinavian Journal of Statistics}, 37, 26--46.
\bibitem{YSH03} Yang, L., Sperlich, S., and H\"{a}rdle, W. (2003), \textquotedblleft Derivative Estimation and Testing in Generalized Additive Models,\textquotedblright\ \textit{Journal of Statistical Planning and Inference}, 115, 521--542.
\bibitem{ZLYW16} Zheng, S., Liu, R., Yang, L. and H\"{a}rdle, W. (2016), \textquotedblleft Statistical Inference for Generalized Additive Models: Simultaneous Confidence Corridors and Variable Selection,\textquotedblright\ \textit{TEST}, 25, 607--626. \end{thebibliography}
\setcounter{page}{1}
\begin{center} {\large Supplement to \textquotedblleft Statistical Inference for Generalized Additive Partially Linear Model\textquotedblright}
\end{center}
\noindent \textbf{Proof of Lemma A.2 }\textsc{\ } \begin{eqnarray*} {\Greekmath 0272} \hat{L}\left( \mathbf{\bar{{\Greekmath 0115}}}\right) &=&n^{-1}\mathop{\textstyle \sum }\nolimits_{i=1}^{n}\left[ Y_{i}\mathbf{B}\left( \mathbf{T}_{i}, \mathbf{X}_{i}\right) -b^{\prime }\left\{ \mathbf{\bar{{\Greekmath 0115}}}^{\top } \mathbf{B}\left( \mathbf{T}_{i},\mathbf{X}_{i}\right) \right\} \mathbf{B} \left( \mathbf{T}_{i},\mathbf{X}_{i}\right) \right] \\ &=&n^{-1}\mathop{\textstyle \sum }\nolimits_{i=1}^{n}\left[ b^{\prime }\left\{ m\left( \mathbf{T} _{i},\mathbf{X}_{i}\right) \right\} -b^{\prime }\left\{ \bar{m}\left( \mathbf{T}_{i},\mathbf{X}_{i}\right) \right\} +{\Greekmath 011B} \left( \mathbf{X} _{i}\right) {\Greekmath 0122} _{i}\right] \mathbf{B}\left( \mathbf{T}_{i},\mathbf{X }_{i}\right) \end{eqnarray*} The first $\left( 1+d_{1}\right) $ elements of the above vector is \begin{equation*} n^{-1}\sum_{i=1}^{n}\left[ \left[ b^{\prime }\left\{ m\left( \mathbf{T}_{i}, \mathbf{X}_{i}\right) \right\} -b^{\prime }\left\{ \bar{m}\left( \mathbf{T} _{i},\mathbf{X}_{i}\right) \right\} \right] +{\Greekmath 011B} \left( \mathbf{X} _{i}\right) {\Greekmath 0122} _{i}\right] T_{ik},0\leq k\leq d_{1}, \end{equation*} with $T_{i0}=1$. These elements are ${\mathcal{O}}_{a.s.}\left( H^{2}+n^{-1/2}\log n\right) $ according to (\ref{DEF: mbar lamdabar}). The other elements can be written as \begin{equation*} n^{-1}\mathop{\textstyle \sum }\nolimits_{i=1}^{n}\left[ {\Greekmath 0118} _{i,J,{\Greekmath 010B} ,n}+{\mbox{\rm E}}\left[ b^{\prime }\left\{ m\left( \mathbf{T}_{i},\mathbf{X}_{i}\right) \right\} -b^{\prime }\left\{ \bar{m}\left( \mathbf{T}_{i},\mathbf{X}_{i}\right) \right\} \right] B_{J,{\Greekmath 010B} }\left( X_{i{\Greekmath 010B} }\right) +{\Greekmath 011B} \left( \mathbf{X}_{i}\right) {\Greekmath 0122} _{i}B_{J,{\Greekmath 010B} }\left( X_{i{\Greekmath 010B} }\right) \right] , \end{equation*} where ${\Greekmath 0118} _{i,J,{\Greekmath 010B} ,n}$ is \begin{equation*} \left[ b^{\prime }\left\{ m\left( \mathbf{T}_{i},\mathbf{X}_{i}\right) \right\} -b^{\prime }\left\{ \bar{m}\left( \mathbf{T}_{i},\mathbf{X} _{i}\right) \right\} \right] B_{J,{\Greekmath 010B} }\left( X_{i{\Greekmath 010B} }\right) -{\mbox{\rm E}} \left[ \left[ b^{\prime }\left\{ m\left( \mathbf{T}_{i},\mathbf{X} _{i}\right) \right\} -b^{\prime }\left\{ \bar{m}\left( \mathbf{T}_{i}, \mathbf{X}_{i}\right) \right\} \right] B_{J,{\Greekmath 010B} }\left( X_{i{\Greekmath 010B} }\right) \right] . 
\end{equation*} According to (\ref{EQ: bJalpha}) and (\ref{DEF: mbar lamdabar}), one has \begin{align*} & \left\vert {\mbox{\rm E}}\left[ b^{\prime }\left\{ m\left( \mathbf{T}_{i},\mathbf{X} _{i}\right) \right\} -b^{\prime }\left\{ \bar{m}\left( \mathbf{T}_{i}, \mathbf{X}_{i}\right) \right\} \right] B_{J,{\Greekmath 010B} }\left( X_{i{\Greekmath 010B} }\right) \right\vert \\ & \leq {\mbox{\rm E}}\left\vert b^{\prime }\left\{ m\left( \mathbf{T}_{i},\mathbf{X} _{i}\right) \right\} -b^{\prime }\left\{ \bar{m}\left( \mathbf{T}_{i}, \mathbf{X}_{i}\right) \right\} \right\vert \frac{\left\vert b_{J,{\Greekmath 010B} }\left( X_{i{\Greekmath 010B} }\right) \right\vert }{\left\Vert b_{J,{\Greekmath 010B} }\right\Vert _{2}} \\ & \leq c\left\Vert m-\bar{m}\right\Vert _{\infty }\mathrm{max}_{1\leq J\leq N+1,1\leq {\Greekmath 010B} \leq d_{2}}\left\Vert b_{J,{\Greekmath 010B} }\right\Vert _{2}^{-1} \mathrm{max}_{1\leq J\leq N+1,1\leq {\Greekmath 010B} \leq d_{2}}{\mbox{\rm E}}\left\vert b_{J,{\Greekmath 010B} }\left( X_{i{\Greekmath 010B} }\right) \right\vert \\ & ={\mathcal{O}}\left( H^{2}\times H^{-1/2}\times H\right) ={\mathcal{O}}\left( H^{5/2}\right) , \end{align*} for some constant $c$ and likewise for any $p\geq 2$ \begin{align*} & {\mbox{\rm E}}\left\vert b^{\prime }\left\{ m\left( \mathbf{T}_{i},\mathbf{X} _{i}\right) \right\} -b^{\prime }\left\{ \bar{m}\left( \mathbf{T}_{i}, \mathbf{X}_{i}\right) \right\} \right\vert ^{p}\left\vert B_{J,{\Greekmath 010B} }\left( X_{i{\Greekmath 010B} }\right) \right\vert ^{p} \\ & \leq \left( cH^{5/2}\right) ^{p-2}{\mbox{\rm E}}\left\vert b^{\prime }\left\{ m\left( \mathbf{T}_{i},\mathbf{X}_{i}\right) \right\} -b^{\prime }\left\{ \bar{m} \left( \mathbf{T}_{i},\mathbf{X}_{i}\right) \right\} \right\vert ^{2}\frac{ b_{J,{\Greekmath 010B} }^{2}\left( X_{i{\Greekmath 010B} }\right) }{\left\Vert b_{J,{\Greekmath 010B} }\right\Vert _{2}^{2}}, \end{align*} and \begin{align*} & {\mbox{\rm E}}\left[ b^{\prime }\left\{ m\left( \mathbf{T}_{i},\mathbf{X}_{i}\right) \right\} -b^{\prime }\left\{ \bar{m}\left( X_{i{\Greekmath 010B} }\right) \right\} \right] ^{2}B_{J,{\Greekmath 010B} }^{2}\left( X_{i{\Greekmath 010B} }\right) \\ & \leq c\left\Vert m-\bar{m}\right\Vert _{\infty }^{2}\mathrm{max}_{1\leq J\leq N+1,1\leq {\Greekmath 010B} \leq d_{2}}\left\Vert b_{J,{\Greekmath 010B} }\right\Vert _{2}^{-2}\mathrm{max}_{1\leq J\leq N+1,1\leq {\Greekmath 010B} \leq d_{2}}{\mbox{\rm E}}\left\vert b_{J,{\Greekmath 010B} }^{2}\left( X_{i{\Greekmath 010B} }\right) \right\vert ={\mathcal{O}}\left( H^{4}\right) . \end{align*} Using these bounds and applying Lemma A.2 in [11], one has $\left\vert n^{-1}\sum_{i=1}^{n}{\Greekmath 0118} _{i,J,{\Greekmath 010B} ,n}\right\vert ={\mathcal{O}}_{a.s.}\left( H^{2}n^{-1/2}\log n\right) $ and \begin{equation*} n^{-1}\left\vert \mathop{\textstyle \sum }\nolimits_{i=1}^{n}{\Greekmath 011B} \left( \mathbf{X}_{i}\right) {\Greekmath 0122} _{i}B_{J,{\Greekmath 010B} }\left( X_{i{\Greekmath 010B} }\right) \right\vert ={\mathcal{O}} _{a.s.}\left( n^{-1/2}\log n\right) . \end{equation*} The lemma is then proved.
$\square $
\noindent \textbf{Proof of Lemma A.4 }The Mean Value Theorem implies that an $N_{d}\times N_{d}$ diagonal matrix $\mathbf{t}$ exists whose diagonal elements are in $\left[ 0,1\right] $, such that for $\mathbf{\hat{{\Greekmath 0115}}} ^{\ast }=\mathbf{t\hat{{\Greekmath 0115}}+}\left( \mathbf{I}_{N_{d}}-\mathbf{t}\right) \mathbf{\bar{{\Greekmath 0115}}}$ \begin{equation*} {\Greekmath 0272} \hat{L}\left( \mathbf{\hat{{\Greekmath 0115}}}\right) -{\Greekmath 0272} \hat{L}\left( \mathbf{\bar{{\Greekmath 0115}}}\right) ={\Greekmath 0272} ^{2}\hat{L}\left( \mathbf{\hat{{\Greekmath 0115}} }^{\ast }\right) \left( \mathbf{\hat{{\Greekmath 0115}}-\bar{{\Greekmath 0115}}}\right) . \end{equation*} Since, as noted before, that ${\Greekmath 0272} \hat{L}\left( \mathbf{\hat{{\Greekmath 0115}}} \right) =\mathbf{0}$, the above equation becomes \begin{equation*} \mathbf{\hat{{\Greekmath 0115}}-\bar{{\Greekmath 0115}}}=-\left\{ {\Greekmath 0272} ^{2}\hat{L}\left( \mathbf{\hat{{\Greekmath 0115}}}^{\ast }\right) \right\} ^{-1}{\Greekmath 0272} \hat{L}\left( \mathbf{\bar{{\Greekmath 0115}}}\right) . \end{equation*} According to (\ref{DEF:VbSblamda}), \begin{equation*} -{\Greekmath 0272} ^{2}\hat{L}\left( \mathbf{{\Greekmath 0115} }\right) =n^{-1}\mathop{\textstyle \sum }\nolimits_{i=1}^{n}b^{\prime \prime }\left\{ \mathbf{{\Greekmath 0115} }^{ {\top}}\mathbf{B}\left( \mathbf{T}_{i},\mathbf{X}_{i}\right) \right\} \mathbf{B} \left( \mathbf{T}_{i},\mathbf{X}_{i}\right) \mathbf{B}\left( \mathbf{T}_{i}, \mathbf{X}_{i}\right) ^{\top}=\mathbf{V}_{n,b}\left( \mathbf{{\Greekmath 0115} } \right) , \end{equation*} Lemma \ref{LEM:gamsinv} implies that with probability approaching $1$ \begin{equation*} c_{\mathbf{V},b}\mathbf{I}_{N_{d}}\leq -{\Greekmath 0272} ^{2}\hat{L}\left( \mathbf{ \hat{{\Greekmath 0115}}}^{\ast }\right) \leq C_{\mathbf{V},b}\mathbf{I}_{N_{d}}\RIfM@\expandafter\text@\else\expandafter\mbox\fi{.} \end{equation*} Then (\ref{EQ:lamdahat-lamdabar}) follows Lemma \ref{LEM:Ltildederiv}. Furthermore, $\left\Vert \mathbf{\hat{{\Greekmath 0115}}}^{\ast }-\mathbf{\bar{{\Greekmath 0115}} }\right\Vert ={\mathcal{O}}_{a.s}\left( H^{3/2}+H^{-1/2}n^{-1/2}\log n\right) $ as well according to $\mathbf{\hat{{\Greekmath 0115}}}^{\ast }$'s definition. 
Note that Taylor expansion ensures that for any vector $\mathbf{a}\in \mathbb{R} ^{N_{d}}$ \begin{equation*} \mathbf{a}^{\top}\left\{ {\Greekmath 0272} ^{2}\hat{L}\left( \mathbf{\hat{{\Greekmath 0115}}} ^{\ast }\right) -{\Greekmath 0272} ^{2}\hat{L}\left( \mathbf{\bar{{\Greekmath 0115}}}\right) \right\} \mathbf{a\leq }\left\Vert b^{\prime \prime \prime }\right\Vert _{\infty }\mathrm{max}_{1\leq i\leq n}\left\vert \mathbf{\hat{{\Greekmath 0115}}} ^{\ast {\top}}\mathbf{B}\left( \mathbf{T}_{i},\mathbf{X}_{i}\right) -\mathbf{ \bar{{\Greekmath 0115}}}^{\top}\mathbf{B}\left( \mathbf{T}_{i},\mathbf{X}_{i}\right) \right\vert \mathbf{a}^{\top}\mathbf{V}_{n}\mathbf{a} \end{equation*} while by Cauchy Schwartz inequality \begin{align*} & \mathrm{max}_{1\leq i\leq n}\left\vert \mathbf{\hat{{\Greekmath 0115}}}^{\ast {\top}} \mathbf{B}\left( \mathbf{T}_{i},\mathbf{X}_{i}\right) -\mathbf{\bar{{\Greekmath 0115}}} ^{\top}\mathbf{B}\left( \mathbf{T}_{i},\mathbf{X}_{i}\right) \right\vert \leq \left\Vert \mathbf{\hat{{\Greekmath 0115}}}^{\ast }-\mathbf{\bar{{\Greekmath 0115}}} \right\Vert \mathrm{max}_{1\leq i\leq n}\left\Vert \mathbf{B}\left( \mathbf{T }_{i},\mathbf{X}_{i}\right) \right\Vert \\ & ={\mathcal{O}}_{a.s.}\left( H^{3/2}+H^{-1/2}n^{-1/2}\log n\right) \times {\mathcal{O}} _{p}\left( H^{-1/2}\right) ={\mathcal{O}}_{p}\left( H+H^{-1}n^{-1/2}\log n\right) . \end{align*} Consequently, one has the following bound on the difference of two Hessian matrices \begin{equation*} \mathrm{sup}_{\mathbf{a}\in R^{N_{d}}}\left\Vert \left( {\Greekmath 0272} ^{2}\hat{L} \left( \mathbf{\hat{{\Greekmath 0115}}}^{\ast }\right) -{\Greekmath 0272} ^{2}\hat{L}\left( \mathbf{\bar{{\Greekmath 0115}}}\right) \right) \mathbf{a}\right\Vert \left\Vert \mathbf{a}\right\Vert ^{-1}={\mathcal{O}}_{p}\left( H+H^{-1}n^{-1/2}\log n\right) . \end{equation*} Denote next \begin{eqnarray*} \mathbf{\hat{d}} &=&-\left\{ {\Greekmath 0272} ^{2}\hat{L}\left( \mathbf{\hat{{\Greekmath 0115}}} ^{\ast }\right) \right\} ^{-1}{\Greekmath 0272} \hat{L}\left( \mathbf{\bar{{\Greekmath 0115}}} \right) =\mathbf{\hat{{\Greekmath 0115}}-\bar{{\Greekmath 0115}}} \\ \mathbf{\bar{d}} &=&-\left\{ {\Greekmath 0272} ^{2}\hat{L}\left( \mathbf{\bar{{\Greekmath 0115}}} \right) \right\} ^{-1}{\Greekmath 0272} \hat{L}\left( \mathbf{\bar{{\Greekmath 0115}}}\right) \end{eqnarray*} then $\left\Vert \mathbf{\hat{d}}\right\Vert ={\mathcal{O}}_{a.s.}\left( H^{3/2}+H^{-1/2}n^{-1/2}\log n\right) $ and so is $\left\Vert \mathbf{\bar{d} }\right\Vert $ by similar arguments. Furthermore, \begin{equation*} {\Greekmath 0272} ^{2}\hat{L}\left( \mathbf{\hat{{\Greekmath 0115}}}^{\ast }\right) \left( \mathbf{\hat{d}-\bar{d}}\right) \mathbf{=}\left\{ {\Greekmath 0272} ^{2}\hat{L}\left( \mathbf{\bar{{\Greekmath 0115}}}\right) -{\Greekmath 0272} ^{2}\hat{L}\left( \mathbf{\hat{{\Greekmath 0115}} }^{\ast }\right) \right\} \mathbf{\bar{d}} \end{equation*} entails that \begin{eqnarray*} \left\Vert \mathbf{\hat{d}-\bar{d}}\right\Vert &=&{\mathcal{O}}_{a.s.}\left( H^{3/2}+H^{-1/2}n^{-1/2}\log n\right) \times {\mathcal{O}}_{p}\left( H+H^{-1}n^{-1/2}\log n\right) \\ &=&{\mathcal{O}}_{p}\left( H^{5/2}+H^{-3/2}n^{-1}\log ^{2}n\right) . 
\end{eqnarray*} Denote \begin{equation*} \mathbf{\tilde{d}}=\left[ n^{-1}\mathop{\textstyle \sum }\nolimits_{i=1}^{n}b^{\prime \prime }\left\{ m\left( \mathbf{T}_{i},\mathbf{X}_{i}\right) \right\} \mathbf{B} \left( \mathbf{T}_{i},\mathbf{X}_{i}\right) \mathbf{B}\left( \mathbf{T}_{i}, \mathbf{X}_{i}\right) ^{\top}\right] ^{-1}{\Greekmath 0272} \hat{L}\left( \mathbf{\bar{ {\Greekmath 0115}}}\right) . \end{equation*} Using similar calculations, one can show that \begin{eqnarray*} \left\Vert \mathbf{\tilde{d}-\bar{d}}\right\Vert &=&{\mathcal{O}}_{a.s.}\left( H^{3/2}+H^{-1/2}n^{-1/2}\log n\right) \times {\mathcal{O}}_{a.s}\left( H^{2}\right) \\ &=&{\mathcal{O}}_{a.s.}\left( H^{7/2}+H^{3/2}n^{-1/2}\log n\right) , \end{eqnarray*} \begin{eqnarray*} \left\Vert \mathbf{\tilde{d}-\Phi }_{b}-\mathbf{\Phi }_{v}\right\Vert &=&{\mathcal{O}} _{a.s.}\left( H^{3/2}+H^{-1/2}n^{-1/2}\log n\right) \times {\mathcal{O}}_{a.s}\left( H^{-1/2}n^{-1/2}\log n\right) \\ &=&{\mathcal{O}}_{a.s.}\left( Hn^{-1/2}\log n+H^{-1}n^{-1}\log ^{2}n\right) , \end{eqnarray*} Putting together the above proves (\ref{EQ:Phi_rbd}). Lastly, almost surely \begin{eqnarray*} \left\Vert \mathbf{\Phi }_{b}\right\Vert &=&\left\Vert \mathbf{S} _{b}n^{-1}\mathop{\textstyle \sum }\nolimits_{i=1}^{n}\left[ b^{\prime }\left\{ m\left( \mathbf{T }_{i},\mathbf{X}_{i}\right) \right\} -b^{\prime }\left\{ \bar{m}\left( \mathbf{T}_{i},\mathbf{X}_{i}\right) \right\} \right] \mathbf{B}\left( \mathbf{T}_{i},\mathbf{X}_{i}\right) \right\Vert \\ &\leq &C_{\mathbf{S,}b}\left\Vert n^{-1}\mathop{\textstyle \sum }\nolimits_{i=1}^{n}\left[ b^{\prime }\left\{ m\left( \mathbf{T}_{i},\mathbf{X}_{i}\right) \right\} -b^{\prime }\left\{ \bar{m}\left( \mathbf{T}_{i},\mathbf{X}_{i}\right) \right\} \right] \mathbf{B}\left( \mathbf{T}_{i},\mathbf{X}_{i}\right) \right\Vert ={\mathcal{O}}_{a.s.}\left( H^{2}\right) \end{eqnarray*} and \begin{eqnarray*} \left\Vert \mathbf{\Phi }_{v}\right\Vert &=&\left\Vert \mathbf{S} _{b}n^{-1}\mathop{\textstyle \sum }\nolimits_{i=1}^{n}\left[ {\Greekmath 011B} \left( \mathbf{T}_{i}, \mathbf{X}_{i}\right) {\Greekmath 0122} _{i}\right] \mathbf{B}\left( \mathbf{T} _{i},\mathbf{X}_{i}\right) \right\Vert \\ &\leq &C_{\mathbf{S},b}\left\Vert n^{-1}\mathop{\textstyle \sum }\nolimits_{i=1}^{n}\left[ {\Greekmath 011B} \left( \mathbf{T}_{i},\mathbf{X}_{i}\right) {\Greekmath 0122} _{i}\right] \mathbf{B}\left( \mathbf{T}_{i},\mathbf{X}_{i}\right) \right\Vert ={\mathcal{O}} _{a.s.}\left( H^{-1/2}n^{-1/2}\log ^{2}n\right) , \end{eqnarray*} which completes the proof of the lemma. $
\square $
\noindent \textbf{Proof of Theorem 4 }(i) The Mean Value Theorem implies the existence of $\breve{\boldsymbol{\beta}}$ between $\hat{\boldsymbol{\beta}}$ and $\tilde{\boldsymbol{\beta}}$ such that $\left( \hat{\boldsymbol{\beta}}-\tilde{\boldsymbol{\beta}}\right) =-\left\{ \nabla^{2}\hat{\ell}_{\boldsymbol{\beta}}\left( \breve{\boldsymbol{\beta}}\right) \right\}^{-1}\nabla \hat{\ell}_{\boldsymbol{\beta}}\left( \tilde{\boldsymbol{\beta}}\right)$, where
\begin{equation*}
-\nabla^{2}\hat{\ell}_{\boldsymbol{\beta}}\left( \breve{\boldsymbol{\beta}}\right) =n^{-1}\sum_{i=1}^{n}b^{\prime\prime}\left\{ \breve{\boldsymbol{\beta}}^{\top}\mathbf{T}_{i}+m\left( \mathbf{X}_{i}\right) \right\} \mathbf{T}_{i}\mathbf{T}_{i}^{\top}>c_{b}\mathbf{I}_{d_{1}\times d_{1}}
\end{equation*}
according to Assumption (A6). We have
\begin{eqnarray}
\nabla \hat{\ell}_{\boldsymbol{\beta}}\left( \tilde{\boldsymbol{\beta}}\right) &=&\left\{ \frac{\partial \hat{\ell}_{\boldsymbol{\beta}}\left( \tilde{\boldsymbol{\beta}}\right) }{\partial \beta_{k}}\right\}_{k=0}^{d_{1}}=\nabla \hat{\ell}_{\boldsymbol{\beta}}\left( \tilde{\boldsymbol{\beta}}\right) -\nabla \tilde{\ell}_{\boldsymbol{\beta}}\left( \tilde{\boldsymbol{\beta}}\right)  \label{EQ: lbeta} \\
&=&n^{-1}\sum\nolimits_{i=1}^{n}\left[ b^{\prime}\left\{ \tilde{\boldsymbol{\beta}}^{\top}\mathbf{T}_{i}+m\left( \mathbf{X}_{i}\right) \right\} -b^{\prime}\left\{ \tilde{\boldsymbol{\beta}}^{\top}\mathbf{T}_{i}+\hat{m}\left( \mathbf{X}_{i}\right) \right\} \right] \mathbf{T}_{i}.  \notag
\end{eqnarray}
So for a given $0\leq k\leq d_{1}$,
\begin{align*}
\frac{\partial \hat{\ell}_{\boldsymbol{\beta}}\left( \tilde{\boldsymbol{\beta}}\right) }{\partial \beta_{k}}& =n^{-1}\sum\nolimits_{i=1}^{n}\left[ b^{\prime}\left\{ \tilde{\boldsymbol{\beta}}^{\top}\mathbf{T}_{i}+m\left( \mathbf{X}_{i}\right) \right\} -b^{\prime}\left\{ \tilde{\boldsymbol{\beta}}^{\top}\mathbf{T}_{i}+\hat{m}\left( \mathbf{X}_{i}\right) \right\} \right] T_{ik} \\
& =n^{-1}\sum\nolimits_{i=1}^{n}b^{\prime\prime}\left\{ \tilde{\boldsymbol{\beta}}^{\top}\mathbf{T}_{i}+m\left( \mathbf{X}_{i}\right) \right\} \left\{ m\left( \mathbf{X}_{i}\right) -\hat{m}\left( \mathbf{X}_{i}\right) \right\} T_{ik} \\
&\quad +\mathcal{O}\left[ n^{-1}\sum\nolimits_{i=1}^{n}\left\{ m\left( \mathbf{X}_{i}\right) -\hat{m}\left( \mathbf{X}_{i}\right) \right\}^{2}T_{ik}\right] \\
& =I_{k}+\mathcal{O}_{a.s.}\left( H^{3}+H^{-2}n^{-1}\log n\right),
\end{align*}
by Lemma \ref{LEM:mhat-m}, where $I_{k}=I_{k1}+I_{k2}$,
\begin{equation*}
I_{k1}=n^{-1}\sum\nolimits_{i=1}^{n}b^{\prime\prime}\left\{ \tilde{\boldsymbol{\beta}}^{\top}\mathbf{T}_{i}+m\left( \mathbf{X}_{i}\right) \right\} \left\{ m\left( \mathbf{X}_{i}\right) -\bar{m}\left( \mathbf{X}_{i}\right) \right\} T_{ik},
\end{equation*}
\begin{equation*}
I_{k2}=n^{-1}\sum\nolimits_{i=1}^{n}b^{\prime\prime}\left\{ \tilde{\boldsymbol{\beta}}^{\top}\mathbf{T}_{i}+m\left( \mathbf{X}_{i}\right) \right\} \left\{ \bar{m}\left( \mathbf{X}_{i}\right) -\hat{m}\left( \mathbf{X}_{i}\right) \right\} T_{ik}.
\end{equation*}
According to Lemma \ref{LEM:splineapprox}, $I_{k1}=\mathcal{O}_{a.s.}\left( H^{2}\right)$, while
\begin{eqnarray*}
I_{k2} &=&n^{-1}\sum\nolimits_{i=1}^{n}b^{\prime\prime}\left\{ \tilde{\boldsymbol{\beta}}^{\top}\mathbf{T}_{i}+m\left( \mathbf{X}_{i}\right) \right\} \left\{ \sum\nolimits_{1\leq J\leq N+1,1\leq \alpha\leq d_{2}}\left( \hat{\lambda}_{J,\alpha}-\bar{\lambda}_{J,\alpha}\right) B_{J,\alpha}\left( X_{i\alpha}\right) \right\} T_{ik} \\
&=&I_{k2,b}+I_{k2,v}+I_{k2,r},
\end{eqnarray*}
where
\begin{eqnarray*}
I_{k2,b} &=&n^{-1}\sum\nolimits_{i=1}^{n}b^{\prime\prime}\left\{ \tilde{\boldsymbol{\beta}}^{\top}\mathbf{T}_{i}+m\left( \mathbf{X}_{i}\right) \right\} \left\{ \sum\nolimits_{1\leq J\leq N+1,1\leq \alpha\leq d_{2}}\Phi_{b,J,\alpha}B_{J,\alpha}\left( X_{i\alpha}\right) \right\} T_{ik}, \\
I_{k2,v} &=&n^{-1}\sum\nolimits_{i=1}^{n}b^{\prime\prime}\left\{ \tilde{\boldsymbol{\beta}}^{\top}\mathbf{T}_{i}+m\left( \mathbf{X}_{i}\right) \right\} \left\{ \sum\nolimits_{1\leq J\leq N+1,1\leq \alpha\leq d_{2}}\Phi_{v,J,\alpha}B_{J,\alpha}\left( X_{i\alpha}\right) \right\} T_{ik}, \\
I_{k2,r} &=&n^{-1}\sum\nolimits_{i=1}^{n}b^{\prime\prime}\left\{ \tilde{\boldsymbol{\beta}}^{\top}\mathbf{T}_{i}+m\left( \mathbf{X}_{i}\right) \right\} \left\{ \sum\nolimits_{1\leq J\leq N+1,1\leq \alpha\leq d_{2}}\Phi_{r,J,\alpha}B_{J,\alpha}\left( X_{i\alpha}\right) \right\} T_{ik}.
\end{eqnarray*}
We have
\begin{eqnarray*}
\left\vert I_{k2,b}\right\vert &\leq &C_{b}n^{-1}\sum\nolimits_{i=1}^{n}\left\{ \sum\nolimits_{1\leq J\leq N+1,1\leq \alpha\leq d_{2}}\left\vert \Phi_{b,J,\alpha}\right\vert \left\vert B_{J,\alpha}\left( X_{i\alpha}\right) \right\vert \right\} T_{ik} \\
&\leq &C_{\mathbf{Q}}C_{b}\left\{ \sum\nolimits_{1\leq J\leq N+1,1\leq \alpha\leq d_{2}}\Phi_{b,J,\alpha}^{2}\right\}^{1/2}\times \left[ 1+\sum\nolimits_{1\leq J\leq N+1,1\leq \alpha\leq d_{2}}\left\{ n^{-1}\sum\nolimits_{i=1}^{n}\left\vert B_{J,\alpha}\left( X_{i\alpha}\right) \right\vert \right\}^{2}\right]^{1/2} \\
&=&C_{\mathbf{Q}}C_{b}\times \mathcal{O}_{a.s.}\left( N_{d}^{1/2}H^{5/2}\right) \times \left\{ \mathcal{O}_{a.s.}\left( N+1\right) \times d_{2}\times \mathcal{O}_{a.s.}\left( H\right) \right\} \\
&=&\mathcal{O}_{a.s.}\left( H^{2}\right) =o_{a.s.}\left( n^{-1/2}\right)
\end{eqnarray*}
according to (\ref{EQ:lamdahat-lamdabar}). Similarly,
\begin{equation*}
\left\vert I_{k2,r}\right\vert =\mathcal{O}_{p}\left( N_{d}H^{7/2}+N_{d}H^{-1/2}n^{-1}\log n\right) =o_{p}\left( n^{-1/2}\right).
\end{equation*}
We have $I_{k2,v}=\widetilde{I}_{k2,v}+\mathcal{O}_{a.s.}\left( n^{-1/2}\right) \times \mathcal{O}_{a.s.}\left( N_{d}^{1/2}n^{-1/2}\log n\right) \times \mathcal{O}\left( N\right)$, where
\begin{eqnarray*}
\widetilde{I}_{k2,v} &=&n^{-1}\sum\nolimits_{i=1}^{n}b^{\prime\prime}\left\{ m\left( \mathbf{T}_{i},\mathbf{X}_{i}\right) \right\} \left\{ \sum\nolimits_{1\leq J\leq N+1,1\leq \alpha\leq d_{2}}\Phi_{v,J,\alpha}B_{J,\alpha}\left( X_{i\alpha}\right) \right\} T_{ik} \\
&=&-n^{-1}\sum\nolimits_{i=1}^{n}b^{\prime\prime}\left\{ m\left( \mathbf{T}_{i},\mathbf{X}_{i}\right) \right\} n^{-1}\sum\nolimits_{i^{\prime}=1}^{n}\sigma\left( \mathbf{T}_{i^{\prime}},\mathbf{X}_{i^{\prime}}\right) \varepsilon_{i^{\prime}}\mathbf{B}^{\top}\left( \mathbf{X}_{i^{\prime}}\right) \mathbf{S}_{b\_\boldsymbol{\beta}}\mathbf{B}\left( \mathbf{X}_{i}\right) T_{ik},
\end{eqnarray*}
where $\mathbf{B}\left( \mathbf{x}\right) =\left\{ B_{1,1}\left( x_{1}\right),\ldots,B_{N+1,d_{2}}\left( x_{d_{2}}\right) \right\}^{\top}$ and $\mathbf{S}_{b\_\boldsymbol{\beta}}$ consists of columns $2+d_{1}$ to $N_{d}$ of $\mathbf{S}_{b}$ defined in (\ref{DEF:VbSb}). $\widetilde{I}_{k2,v}=o_{a.s.}\left( n^{-1/2}\right)$ by calculations similar to the proof of Theorem 5 in [11]. Putting the above together, one has
\begin{equation*}
\left\vert \hat{\boldsymbol{\beta}}-\tilde{\boldsymbol{\beta}}\right\vert =o_{p}\left( n^{-1/2}\right).
\end{equation*}

\noindent (ii) According to Section 11.2 in [15],
\begin{equation}
w_{i}=\frac{1}{n}\frac{1}{1+\lambda\left( \boldsymbol{\beta}\right)^{\top}Z_{i}\left( \boldsymbol{\beta}\right) },\qquad \frac{1}{n}\sum\nolimits_{i=1}^{n}\frac{Z_{i}\left( \boldsymbol{\beta}\right) }{1+\lambda\left( \boldsymbol{\beta}\right)^{\top}Z_{i}\left( \boldsymbol{\beta}\right) }=0,  \label{EQ:elquation}
\end{equation}
where $Z_{i}\left( \boldsymbol{\beta}\right) =\left[ Y_{i}-b^{\prime}\left\{ \boldsymbol{\beta}^{\top}\mathbf{T}_{i}+m\left( \mathbf{X}_{i}\right) \right\} \right] \mathbf{T}_{i}=\sigma\left( \mathbf{T}_{i},\mathbf{X}_{i}\right) \varepsilon_{i}\mathbf{T}_{i}$.
$n^{-1}\sum\nolimits_{i=1}^{n}\sigma\left( \mathbf{T}_{i},\mathbf{X}_{i}\right) \varepsilon_{i}\mathbf{T}_{i}\overset{\mathcal{L}}{\rightarrow}N\left( \mathbf{0},a\left( \phi\right) \left[ \mathrm{E}\,b^{\prime\prime}\left\{ m\left( \mathbf{T},\mathbf{X}\right) \right\} \mathbf{TT}^{\top}\right]^{-1}\right)$ by the central limit theorem, and
\begin{equation*}
\max_{1\leq i\leq n}\left\vert \lambda\left( \boldsymbol{\beta}\right)^{\top}Z_{i}\left( \boldsymbol{\beta}\right) \right\vert =o_{p}\left( 1\right).
\end{equation*}
So
\begin{eqnarray*}
-2\log \tilde{R}\left( \boldsymbol{\beta}\right) &=&-2\Sigma_{i=1}^{n}\log\left( nw_{i}\right) \\
&=&2\Sigma_{i=1}^{n}\log\left[ 1+\lambda\left( \boldsymbol{\beta}\right)^{\top}Z_{i}\left( \boldsymbol{\beta}\right) \right] \\
&=&2\Sigma_{i=1}^{n}\left\{ \lambda\left( \boldsymbol{\beta}\right)^{\top}Z_{i}\left( \boldsymbol{\beta}\right) \right\} -\Sigma_{i=1}^{n}\left\{ \lambda\left( \boldsymbol{\beta}\right)^{\top}Z_{i}\left( \boldsymbol{\beta}\right) \right\}^{2}+2\Sigma_{i=1}^{n}\eta_{i},
\end{eqnarray*}
where $\eta_{i}=\mathcal{O}_{p}\left( \left\{ \lambda^{\top}\left( \boldsymbol{\beta}\right) Z_{i}\left( \boldsymbol{\beta}\right) \right\}^{3}\right)$ with
\begin{equation*}
\left\vert \Sigma_{i=1}^{n}\eta_{i}\right\vert \leq C\left\Vert \lambda\left( \boldsymbol{\beta}\right)^{\top}\right\Vert^{3}\Sigma_{i=1}^{n}\left\Vert Z_{i}\left( \boldsymbol{\beta}\right) \right\Vert^{3}=o_{p}\left( 1\right).
\end{equation*}
Similarly, $-2\log \hat{R}\left( \boldsymbol{\beta}\right) =2\Sigma_{i=1}^{n}\left\{ \hat{\lambda}^{\top}\left( \boldsymbol{\beta}\right) \hat{Z}_{i}\left( \boldsymbol{\beta}\right) \right\} -\Sigma_{i=1}^{n}\left\{ \hat{\lambda}\left( \boldsymbol{\beta}\right)^{\top}\hat{Z}_{i}\left( \boldsymbol{\beta}\right) \right\}^{2}$ with
\begin{equation*}
\hat{Z}_{i}\left( \boldsymbol{\beta}\right) =\left[ Y_{i}-b^{\prime}\left\{ \boldsymbol{\beta}^{\top}\mathbf{T}_{i}+\hat{m}\left( \mathbf{X}_{i}\right) \right\} \right] \mathbf{T}_{i}.
\end{equation*}
So the difference
\begin{eqnarray*}
&&-2\log \hat{R}\left( \boldsymbol{\beta}\right) +2\log \tilde{R}\left( \boldsymbol{\beta}\right) \\
&=&2\Sigma_{i=1}^{n}\left\{ \hat{\lambda}\left( \boldsymbol{\beta}\right)^{\top}\hat{Z}_{i}\left( \boldsymbol{\beta}\right) \right\} -\Sigma_{i=1}^{n}\left\{ \hat{\lambda}\left( \boldsymbol{\beta}\right)^{\top}\hat{Z}_{i}\left( \boldsymbol{\beta}\right) \right\}^{2} \\
&&-2\Sigma_{i=1}^{n}\left\{ \lambda^{\top}\left( \boldsymbol{\beta}\right) Z_{i}\left( \boldsymbol{\beta}\right) \right\} +\Sigma_{i=1}^{n}\left\{ \lambda\left( \boldsymbol{\beta}\right)^{\top}Z_{i}\left( \boldsymbol{\beta}\right) \right\}^{2}+o_{p}\left( 1\right) \\
&=&2I_{1}+I_{2}+o_{p}\left( 1\right)
\end{eqnarray*}
with
\begin{eqnarray*}
I_{1} &=&\Sigma_{i=1}^{n}\left\{ \hat{\lambda}\left( \boldsymbol{\beta}\right)^{\top}\hat{Z}_{i}\left( \boldsymbol{\beta}\right) -\lambda\left( \boldsymbol{\beta}\right)^{\top}Z_{i}\left( \boldsymbol{\beta}\right) \right\}, \\
I_{2} &=&\Sigma_{i=1}^{n}\left[ \left\{ \lambda\left( \boldsymbol{\beta}\right)^{\top}Z_{i}\left( \boldsymbol{\beta}\right) \right\}^{2}-\left\{ \hat{\lambda}\left( \boldsymbol{\beta}\right)^{\top}\hat{Z}_{i}\left( \boldsymbol{\beta}\right) \right\}^{2}\right].
\end{eqnarray*}
Rewrite
\begin{equation*}
I_{1}=\Sigma_{i=1}^{n}\hat{\lambda}\left( \boldsymbol{\beta}\right)^{\top}\left\{ \hat{Z}_{i}\left( \boldsymbol{\beta}\right) -Z_{i}\left( \boldsymbol{\beta}\right) \right\} +\Sigma_{i=1}^{n}\left\{ \hat{\lambda}\left( \boldsymbol{\beta}\right) -\lambda\left( \boldsymbol{\beta}\right) \right\}^{\top}Z_{i}\left( \boldsymbol{\beta}\right),
\end{equation*}
then
\begin{eqnarray*}
&&\Sigma_{i=1}^{n}\hat{\lambda}\left( \boldsymbol{\beta}\right)^{\top}\left\{ \hat{Z}_{i}\left( \boldsymbol{\beta}\right) -Z_{i}\left( \boldsymbol{\beta}\right) \right\} \\
&=&\hat{\lambda}\left( \boldsymbol{\beta}\right)^{\top}\Sigma_{i=1}^{n}\left\{ \hat{Z}_{i}\left( \boldsymbol{\beta}\right) -Z_{i}\left( \boldsymbol{\beta}\right) \right\} \\
&=&\hat{\lambda}\left( \boldsymbol{\beta}\right)^{\top}\Sigma_{i=1}^{n}\left[ b^{\prime}\left\{ \boldsymbol{\beta}^{\top}\mathbf{T}_{i}+m\left( \mathbf{X}_{i}\right) \right\} -b^{\prime}\left\{ \boldsymbol{\beta}^{\top}\mathbf{T}_{i}+\hat{m}\left( \mathbf{X}_{i}\right) \right\} \right] \mathbf{T}_{i} \\
&\leq &\left\Vert \hat{\lambda}\left( \boldsymbol{\beta}\right)^{\top}\right\Vert \left\Vert \Sigma_{i=1}^{n}\left[ b^{\prime}\left\{ \boldsymbol{\beta}^{\top}\mathbf{T}_{i}+m\left( \mathbf{X}_{i}\right) \right\} -b^{\prime}\left\{ \boldsymbol{\beta}^{\top}\mathbf{T}_{i}+\hat{m}\left( \mathbf{X}_{i}\right) \right\} \right] \mathbf{T}_{i}\right\Vert \\
&=&\mathcal{O}_{p}\left( n^{-1/2}\right) o_{p}\left( n^{1/2}\right) =o_{p}\left( 1\right),
\end{eqnarray*}
following $\left\Vert \hat{\lambda}\left( \boldsymbol{\beta}\right)^{\top}\right\Vert =\mathcal{O}_{p}\left( n^{-1/2}\right)$ and
\begin{equation*}
\Sigma_{i=1}^{n}\left[ b^{\prime}\left\{ \boldsymbol{\beta}^{\top}\mathbf{T}_{i}+m\left( \mathbf{X}_{i}\right) \right\} -b^{\prime}\left\{ \boldsymbol{\beta}^{\top}\mathbf{T}_{i}+\hat{m}\left( \mathbf{X}_{i}\right) \right\} \right] \mathbf{T}_{i}=o_{p}\left( n^{1/2}\right)
\end{equation*}
from the proof of (\ref{EQ: lbeta}). Denote
\begin{eqnarray*}
\hat{S}\left( \boldsymbol{\beta}\right) &=&\frac{1}{n}\Sigma_{i=1}^{n}\left[ Y_{i}-b^{\prime}\left\{ \boldsymbol{\beta}^{\top}\mathbf{T}_{i}+\hat{m}\left( \mathbf{X}_{i}\right) \right\} \right]^{2}\mathbf{T}_{i}\mathbf{T}_{i}^{\top}, \\
S\left( \boldsymbol{\beta}\right) &=&\frac{1}{n}\Sigma_{i=1}^{n}\left[ Y_{i}-b^{\prime}\left\{ \boldsymbol{\beta}^{\top}\mathbf{T}_{i}+m\left( \mathbf{X}_{i}\right) \right\} \right]^{2}\mathbf{T}_{i}\mathbf{T}_{i}^{\top}.
\end{eqnarray*}
According to Section 11.2 in [15],
\begin{eqnarray*}
\hat{\lambda}\left( \boldsymbol{\beta}\right) &=&\hat{S}^{-1}\left( \boldsymbol{\beta}\right) n^{-1}\Sigma_{i=1}^{n}\hat{Z}_{i}\left( \boldsymbol{\beta}\right) +o_{p}\left( n^{-1/2}\right), \\
\lambda\left( \boldsymbol{\beta}\right) &=&S^{-1}\left( \boldsymbol{\beta}\right) n^{-1}\Sigma_{i=1}^{n}Z_{i}\left( \boldsymbol{\beta}\right) +o_{p}\left( n^{-1/2}\right).
\end{eqnarray*}
We have
\begin{eqnarray*}
&&\hat{S}\left( \boldsymbol{\beta}\right) -S\left( \boldsymbol{\beta}\right) \\
&=&\frac{1}{n}\Sigma_{i=1}^{n}\left\{ \left[ Y_{i}-b^{\prime}\left\{ \boldsymbol{\beta}^{\top}\mathbf{T}_{i}+\hat{m}\left( \mathbf{X}_{i}\right) \right\} \right]^{2}-\left[ Y_{i}-b^{\prime}\left\{ \boldsymbol{\beta}^{\top}\mathbf{T}_{i}+m\left( \mathbf{X}_{i}\right) \right\} \right]^{2}\right\} \mathbf{T}_{i}\mathbf{T}_{i}^{\top}+o_{p}\left( n^{-1/2}\right) \\
&=&\frac{1}{n}\Sigma_{i=1}^{n}\left[ 2Y_{i}-b^{\prime}\left\{ \boldsymbol{\beta}^{\top}\mathbf{T}_{i}+\hat{m}\left( \mathbf{X}_{i}\right) \right\} -b^{\prime}\left\{ \boldsymbol{\beta}^{\top}\mathbf{T}_{i}+m\left( \mathbf{X}_{i}\right) \right\} \right] \\
&&\times\left[ b^{\prime}\left\{ \boldsymbol{\beta}^{\top}\mathbf{T}_{i}+m\left( \mathbf{X}_{i}\right) \right\} -b^{\prime}\left\{ \boldsymbol{\beta}^{\top}\mathbf{T}_{i}+\hat{m}\left( \mathbf{X}_{i}\right) \right\} \right] \mathbf{T}_{i}\mathbf{T}_{i}^{\top}+o_{p}\left( n^{-1/2}\right) \\
&=&\frac{1}{n}\Sigma_{i=1}^{n}\left[ 2Y_{i}-2b^{\prime}\left\{ \boldsymbol{\beta}^{\top}\mathbf{T}_{i}+m\left( \mathbf{X}_{i}\right) \right\} +b^{\prime}\left\{ \boldsymbol{\beta}^{\top}\mathbf{T}_{i}+m\left( \mathbf{X}_{i}\right) \right\} -b^{\prime}\left\{ \boldsymbol{\beta}^{\top}\mathbf{T}_{i}+\hat{m}\left( \mathbf{X}_{i}\right) \right\} \right] \\
&&\times\left[ b^{\prime}\left\{ \boldsymbol{\beta}^{\top}\mathbf{T}_{i}+m\left( \mathbf{X}_{i}\right) \right\} -b^{\prime}\left\{ \boldsymbol{\beta}^{\top}\mathbf{T}_{i}+\hat{m}\left( \mathbf{X}_{i}\right) \right\} \right] \mathbf{T}_{i}\mathbf{T}_{i}^{\top}+o_{p}\left( n^{-1/2}\right) \\
&=&\frac{1}{n}\Sigma_{i=1}^{n}\left[ 2\sigma\left( \mathbf{T}_{i},\mathbf{X}_{i}\right) \varepsilon_{i}+b^{\prime}\left\{ \boldsymbol{\beta}^{\top}\mathbf{T}_{i}+m\left( \mathbf{X}_{i}\right) \right\} -b^{\prime}\left\{ \boldsymbol{\beta}^{\top}\mathbf{T}_{i}+\hat{m}\left( \mathbf{X}_{i}\right) \right\} \right] \\
&&\times\left[ b^{\prime}\left\{ \boldsymbol{\beta}^{\top}\mathbf{T}_{i}+m\left( \mathbf{X}_{i}\right) \right\} -b^{\prime}\left\{ \boldsymbol{\beta}^{\top}\mathbf{T}_{i}+\hat{m}\left( \mathbf{X}_{i}\right) \right\} \right] \mathbf{T}_{i}\mathbf{T}_{i}^{\top}+o_{p}\left( n^{-1/2}\right) \\
&=&\frac{1}{n}\Sigma_{i=1}^{n}2\sigma\left( \mathbf{T}_{i},\mathbf{X}_{i}\right) \varepsilon_{i}\left[ b^{\prime}\left\{ \boldsymbol{\beta}^{\top}\mathbf{T}_{i}+m\left( \mathbf{X}_{i}\right) \right\} -b^{\prime}\left\{ \boldsymbol{\beta}^{\top}\mathbf{T}_{i}+\hat{m}\left( \mathbf{X}_{i}\right) \right\} \right] \mathbf{T}_{i}\mathbf{T}_{i}^{\top} \\
&&+\frac{1}{n}\Sigma_{i=1}^{n}\left[ b^{\prime}\left\{ \boldsymbol{\beta}^{\top}\mathbf{T}_{i}+m\left( \mathbf{X}_{i}\right) \right\} -b^{\prime}\left\{ \boldsymbol{\beta}^{\top}\mathbf{T}_{i}+\hat{m}\left( \mathbf{X}_{i}\right) \right\} \right]^{2}\mathbf{T}_{i}\mathbf{T}_{i}^{\top}+o_{p}\left( n^{-1/2}\right) \\
&=&o_{p}\left( n^{-1/2}\right) +o_{p}\left( n^{-1/2}\right) +o_{p}\left( n^{-1/2}\right) =o_{p}\left( n^{-1/2}\right).
\end{eqnarray*}
So
\begin{eqnarray*}
&&\hat{\lambda}\left( \boldsymbol{\beta}\right) -\lambda\left( \boldsymbol{\beta}\right) \\
&=&\hat{S}^{-1}\left( \boldsymbol{\beta}\right) n^{-1}\Sigma_{i=1}^{n}\hat{Z}_{i}\left( \boldsymbol{\beta}\right) -S^{-1}\left( \boldsymbol{\beta}\right) n^{-1}\Sigma_{i=1}^{n}Z_{i}\left( \boldsymbol{\beta}\right) \\
&=&\hat{S}^{-1}\left( \boldsymbol{\beta}\right) n^{-1}\Sigma_{i=1}^{n}\left\{ \hat{Z}_{i}\left( \boldsymbol{\beta}\right) -Z_{i}\left( \boldsymbol{\beta}\right) \right\} +\left\{ \hat{S}^{-1}\left( \boldsymbol{\beta}\right) -S^{-1}\left( \boldsymbol{\beta}\right) \right\} n^{-1}\Sigma_{i=1}^{n}Z_{i}\left( \boldsymbol{\beta}\right) \\
&=&o_{p}\left( n^{-1/2}\right).
\end{eqnarray*}
Then
\begin{eqnarray*}
&&\Sigma_{i=1}^{n}\left\{ \hat{\lambda}\left( \boldsymbol{\beta}\right) -\lambda\left( \boldsymbol{\beta}\right) \right\}^{\top}Z_{i}\left( \boldsymbol{\beta}\right) \\
&=&\left\{ \hat{\lambda}\left( \boldsymbol{\beta}\right) -\lambda\left( \boldsymbol{\beta}\right) \right\}^{\top}\Sigma_{i=1}^{n}Z_{i}\left( \boldsymbol{\beta}\right) \\
&=&o_{p}\left( n^{-1/2}\right) \mathcal{O}_{p}\left( n^{1/2}\right) =o_{p}\left( 1\right),
\end{eqnarray*}
and $I_{1}=o_{p}\left( 1\right)$. Moreover,
\begin{eqnarray*}
I_{2} &=&\Sigma_{i=1}^{n}\left[ \left\{ \lambda\left( \boldsymbol{\beta}\right)^{\top}Z_{i}\left( \boldsymbol{\beta}\right) \right\}^{2}-\left\{ \hat{\lambda}\left( \boldsymbol{\beta}\right)^{\top}\hat{Z}_{i}\left( \boldsymbol{\beta}\right) \right\}^{2}\right] \\
&=&\Sigma_{i=1}^{n}\left\{ \lambda\left( \boldsymbol{\beta}\right)^{\top}Z_{i}\left( \boldsymbol{\beta}\right) +\hat{\lambda}\left( \boldsymbol{\beta}\right)^{\top}\hat{Z}_{i}\left( \boldsymbol{\beta}\right) \right\} \left\{ \lambda\left( \boldsymbol{\beta}\right)^{\top}Z_{i}\left( \boldsymbol{\beta}\right) -\hat{\lambda}\left( \boldsymbol{\beta}\right)^{\top}\hat{Z}_{i}\left( \boldsymbol{\beta}\right) \right\} \\
&=&o_{p}\left( 1\right)
\end{eqnarray*}
by a similar argument. Putting these together, we have
\begin{equation*}
-2\log \hat{R}\left( \boldsymbol{\beta}\right) +2\log \tilde{R}\left( \boldsymbol{\beta}\right) =o_{p}\left( 1\right).
\end{equation*}
$\square$
\end{document}
\begin{document}
\newtheorem{lemma}{Lemma} \newtheorem{theorem}{Theorem} \newtheorem{cor}{Corollary} \newtheorem{defn}{Definition} \newtheorem{remark}{Remark}
\newcommand{\Levy}{L\'{e}vy} \newcommand{\Holder}{H\"{o}lder} \newcommand{\cadlag}{c\`{a}dl\`{a}g} \newcommand{\be}{\begin{equation}} \newcommand{\ee}{\end{equation}} \newcommand{\ba}{\begin{eqnarray}} \newcommand{\ea}{\end{eqnarray}} \newcommand{\la}{\label} \newcommand{\nn}{\nonumber} \newcommand{\Z}{\mathbb{Z}} \newcommand{\C}{\mathbb{C}} \newcommand{\E}{\mathbb{E}} \newcommand{\R}{\mathbb{R}} \newcommand{\N}{\mathbb{N}} \newcommand{\prob}{\mathbb{P}} \newcommand{\Skor}{\mathbb{D}} \newcommand{\PP}{\mathcal{P}} \newcommand{\M}{\mathcal{M}} \newcommand{\law}{\stackrel{\mathcal{L}}{\rightarrow}} \newcommand{\eqd}{\stackrel{\mathcal{L}}{=}} \newcommand{\vp}{\varphi} \newcommand{\Vp}{\Phi} \newcommand{\veps}{\varepsilon} \newcommand{\eps}{\veps} \newcommand{\qref}[1]{(\ref{#1})} \newcommand{\D}{\partial} \newcommand{\dnto}{\downarrow} \newcommand{\nsup}{^{(n)}} \newcommand{\ksup}{^{(k)}} \newcommand{\jsup}{^{(j)}} \newcommand{\nksup}{^{(n_k)}} \newcommand{\inv}{^{-1}} \newcommand{\one}{\mathbf{1}} \newcommand{\argmin}{\mathrm{arg}^+\mathrm{min}} \newcommand{\argmax}{\mathrm{arg}^+\mathrm{max}} \newcommand{\Rplus}{\R_+} \newcommand{\Rorder}{\R_<} \newcommand{\xx}{\mathbf{x}} \newcommand{\emp}{\mu} \newcommand{\empN}{F^n} \newcommand{\lossN}{L^n} \newcommand{\Lip}{\mathrm{Lip}} \newcommand{\BL}{\mathrm{BL}} \newcommand{\ddm}{d} \newcommand{\dif}{D}
\title{Concentration inequalities for the hydrodynamic limit of a two-species stochastic particle system }
\author{Joseph Klobusicky\footnote{Department of Mathematics, The University of Scranton, Scranton PA, 18510 (\texttt{[email protected]}).}} \date{} \maketitle
\begin{abstract} We study a stochastic particle system motivated by grain boundary coarsening in two-dimensional networks. Each particle lives on the positive real line and is labeled as belonging to either Species 1 or Species 2. Species 1 particles drift at unit speed toward the origin, while Species 2 particles do not move. When a particle in Species 1 hits the origin, it is removed, and a randomly selected particle mutates from Species 2 to Species 1. The process described is an example of a high-dimensional piecewise deterministic Markov process (PDMP), in which deterministic flow is punctuated by stochastic jumps. Our main result is a proof of exponential concentration inequalities for the Kolmogorov-Smirnov distance between empirical measures of the particle system and solutions of the limiting nonlinear kinetic equations. Our method of proof involves a time and space discretization of the kinetic equations, which we compare with the particle system to derive recurrence inequalities for the total numbers of particles in small intervals. To show that these recurrences hold with high probability, we appeal to a state-dependent Hoeffding-type inequality at each time increment. \end{abstract}
\textbf{Keywords}: piecewise deterministic Markov process, concentration inequality, kinetic theory, grain boundary coarsening, functional law of large numbers
\section{Introduction}
An important topic in material science is the coarsening of network microstructures such as polycrystalline metals and soap froths.
Through heating of metals or gas diffusion of foams, coarsening
is driven by the migration of network boundaries, which acts to minimize interfacial surface energy. While tracking individual boundaries remains a multifaceted and active field of research in numerical \cite{esedouglu2018algorithms} and geometric \cite{ilmanen2019short} analysis, in the 1950s von Neumann \cite{von1952discussion}
and Mullins \cite{mul56} proved a simple relation between the topology and geometry of a single cell in two-dimensional networks with isotropic surface tension evolving through curve-shortening flow.
Specifically, a cell with area $A$ and $n$ sides has a constant growth rate \begin{equation} \frac {dA}{dt} = c(n-6), \label{nminus6} \end{equation} where $c$ is a material constant. When a cell with fewer than six sides shrinks to a point, neighboring cells may change their number of sides to maintain the topological requirement that exactly three edges meet at each junction. Therefore,
a grain will typically change its number of sides, and hence its rate of
growth, several times during the coarsening process.
Several physicists \cite{fra882,flyvbjerg1993model,marder1987soap,beenakker1986evolution} used (\ref{nminus6}) as a starting point in writing kinetic equations for densities $u_n(a,t)$ of cells having $n$ sides ($n$-gons) and area $a$ at time $t$. These take the form of constant convection transport equations with intrinsic flux terms, given by\begin{equation} \label{eq:ke-big} \partial_t u_n +c (n-6) \partial_a u_n = \sum_{l=2}^5 (l-6) u_l(0,t) \left( \sum_{m=2}^M A_{lm}(t) u_m(a,t)\right), \quad n \ge 2. \end{equation} Models diverge in their choice of the matrix $A_{lm}$, which prescribes mean-field rules for how networks change topology when a grain vanishes. In \cite{klobusicky2021two}, Menon, Pego, and the author presented a stochastic particle system, the $M$-species model, as an intermediate between kinetic equations and direct simulations of grain boundary coarsening. The focus of \cite{klobusicky2021two} was on the well-posedness of the limiting kinetic equations and on simulations. It remained, however, to provide
estimates for convergence rates of the particle systems to their hydrodynamic limits. A study was conducted in \cite{klobusicky2016concentration} on a simplified model of one species, in which total loss of particles is shown to be equivalent to a diminishing urn process similar to Pittel's model of cannibalistic behavior \cite{pittel1987urn}.
In this paper, we build the groundwork for establishing concentration inequalities for the $M$-species model by restricting our attention to a model of two species. Specifically, each particle lives in $\mathbb R_+= [0, \infty)$ and is tagged as belonging to either Species 1 or Species 2. Particles in Species 2 do not drift, while particles in Species 1 drift at unit speed toward the origin. When a particle reaches the origin, it is removed, and a particle from Species 2 immediately mutates into Species 1. For a visual representation of the process in which mutations are represented as vertical jumps, see Fig. \ref{specfig}. The process just described can be interpreted as a minimal model of network coarsening, with the behavior of particles in Species 1 analogous to the constant area decrease of cells with fewer than six sides. The removal of Species 1 and mutation of Species 2 are similar to the vanishing of faces and subsequent reassignment of neighboring cell topologies.
The hydrodynamic limits for densities $f_j(x,t)$ of particles in Species $j$ at position $x>0$ and time $t\ge 0$ are transport equations with nonlinear intrinsic source terms, with\begin{align} \partial_t f_1(x,t)- \partial_x f_1(x,t) &= \frac{f_1(0,t)f_2(x,t)}{N_2(t)},\label{kineq1}\\ \partial_t f_2(x,t) &= -\frac{f_1(0,t)f_2(x,t)}{N_2(t)},\label{kineq2}\\ f_1(x,0) = \bar f_1(x), \quad &f_2(x,0) = \bar f_2(x),\label{inits} \end{align} where $N_j(t) = \int_0^\infty f_j(x,t)dx$ is the total number of Species $j$. To allow for nondifferentiable initial conditions, we will exclusively work with the integral form of (\ref{kineq1})-(\ref{inits}), written with Duhamel's formula as \begin{align} f_1(x,t) &= \bar f_1(x+t)+\int_0^t f_2(x+t-s,s)\frac{f_1(0,s)}{N_2(s)}ds, \label{kinint1}\\ f_2(x,t) &= \bar f_2(x)-\int_0^t f_2(x,s)\frac{f_1(0,s)}{N_2(s)}ds. \label{kinint2} \end{align}
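The integral form is obtained from (\ref{kineq1})-(\ref{kineq2}) by a standard characteristics computation, which we record here only for the reader's convenience: for fixed $x$ and $t$,
\begin{equation*}
\frac{d}{ds}f_1(x+t-s,s) = \left(\partial_t f_1-\partial_x f_1\right)(x+t-s,s) = \frac{f_1(0,s)\,f_2(x+t-s,s)}{N_2(s)},
\end{equation*}
and integrating in $s$ over $[0,t]$ gives (\ref{kinint1}); equation (\ref{kinint2}) follows by integrating (\ref{kineq2}) in time at fixed $x$.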
The main result for this paper is the convergence of empirical measures of the particle system to limiting kinetic equations (\ref{kinint1})-(\ref{kinint2}). In Section \ref{sec:themodel}, we give a rigorous description of the particle system as a piecewise determinstic Markov process (PDMP), lay out a deterministic discretization of the kinetic equations, and present the main results. Section \ref{disc} gives a proof for Theorem \ref{detscheme}, which shows that the discretization converges to the kinetic equations at rate $\mathcal O(\delta+\omega(\delta,0))$, where $\delta$ is both the spatial and temporal step size of the scheme, and $\omega(\delta, 0)$ is the modulus of continuity in the initial conditions. This is achieved through writing recurrence relations which compare total numbers restricted to intervals of size $\delta$. Section \ref{part} is a proof of Theorem \ref{finalh}, an exponential concentration inequality (with respect to the initial total number of particles) between the discretization and particle system. This involves similar recurrence inequalities seen in Section \ref{disc}, but now with an added task of showing that the inequalities occur under high probability. We will need to apply a generalization of Hoeffding's inequality \cite{hoeffding1994probability}, a fundamental concentration inequality for sampling without replacement, for establishing estimates on the total number of mutations occurring in intervals.
Theorems \ref{detscheme} and \ref{finalh} can be combined to produce our main result, Theorem \ref{majorthm}, which gives a concentration inequality between empirical measures and solutions of the kinetic equation. For sufficiently small $\varepsilon > 0$ and $n>n(\varepsilon)$, the inequality takes the form \begin{align} &\mathbb P_n\left(\sup_{t\le T'}d((\mu_1(t), \mu_2(t)), (\mu^n_1(t), \mu^n_2(t)) ) \ge \varepsilon\right) \le \frac {C} {\varepsilon^2}\exp(-\tilde C\varepsilon^5n). \end{align} Here, for $j = 1,2$ and time $0\le t\le T'$, $\mu^n_j(t)$ is the $n$-particle empirical measure for positions of Species $j$, and $\mu_j(t,dx) = f_j(x,t)dx$, where $f_j(x,t)$ is the solution of (\ref{kinint1})-(\ref{kinint2}). The metric $d$ is a sum of Kolmogorov-Smirnov metrics between measures of each species. The constants $C$, $\tilde C$, and $T'$ all depend on the initial conditions $\bar f$.
We conclude with Section \ref{prooftheo1}, in which we derive an explicit solution of the kinetic equations and prove the well-posedness stated in Theorem \ref{solution}, relying on several well-known facts from renewal theory.
We stress that explicit solutions are not used in the proofs of Theorems \ref{detscheme} and \ref{finalh}, as we hope to extend the methods used here to the $M$-species
model, which has no known explicit solution.
\section{Particle model and statement of results} \label{sec:themodel}
\begin{figure}
\caption{\textbf{A schematic of the particle system}. Particles in Species 1 (top line) drift toward the origin at unit speed. When a particle (shown in black) reaches the origin, it is removed. Another particle (shown in grey, with dashed outline immediately before mutation and solid outline after) is randomly selected from Species 2 (bottom line) to mutate into Species 1. }
\label{specfig}
\end{figure}
\subsection{A two-species particle system and its kinetic limit} \label{partsyskin}
We now formally define the stochastic process $\{X^n(t)\}_{t\ge 0}$ for an initial system of $n$ particles. Each particle lives in one of two ordered copies of $\mathbb R_+ = [0,\infty)$, which we refer to as Species 1 and Species 2. Since particles may be removed during the process, the state space $E^n$ consists of states \begin{equation}
\mathbf x = \{(x_i,s_i): i = 1, \dots, |\mathbf x|,\quad |\mathbf x| \le n\}, \quad \end{equation} with particle locations $x_i \in [0,\infty)$ and labels $s_i\in \{1,2\}$ denoting each particle's species. The state space can be expressed as a disjoint union of positive orthants \begin{equation} E^n = \coprod_{l+m\le n} E^n_{(l,m)}, \label{pdmpstates} \end{equation} with $E^n_{(l,m)}= \mathbb R_+^l \times \mathbb R_+^m$ denoting positions
for $l$ particles in Species 1 and $m$ particles in Species 2.
Fix an initial state $X^n(0) = \{(x_1^0,s_1^0), \dots, (x_n^0,s_n^0)\} \in E^n$, and denote by $\alpha$ an index of a particle in Species 1 closest to the origin, meaning $s_\alpha^0 = 1$ and $x_\alpha^0 \le x_i^0$ for all $i$ with $s_i^0 = 1$. Now let $\tau_1 = x_\alpha^0$ denote the time until a particle reaches the origin. Define $X^n(t) \in E^n$ for $t \in [0, \tau_1)$ deterministically by advecting particles in Species 1 toward the origin at unit speed while keeping particles in Species 2 fixed: \begin{equation}
s_i(t) = s_i^0, \quad x_i(t) = \begin{cases}x_i^0-t, & s_i^0 = 1, \\ x_i^0, & s_i^0 = 2 \\ \end{cases} , \qquad i = 1, \dots, n. \end{equation}
Randomness is introduced with a mutation at time $t = \tau_1$. At this time, the particle in Species 1 closest to the origin has reached the origin, and is removed from the system. Furthermore, a particle $(x_\iota(\tau_1^-), 2)$ selected with uniform probability from Species 2 mutates while keeping its position, meaning \begin{equation} (x_\iota(\tau_1), s_\iota(\tau_1)) = (x_\iota(\tau_1^-), 1). \end{equation} Finally, particle indices $ i > \alpha$ decrement by one so that the index set is $\{1, \dots, n-1\}$. The (now stochastic) process then repeats deterministic drift until a particle from Species 1 reaches the origin at some time $\tau_2$, again triggering a random mutation, and the process continues until there are no particles left in Species 2. For instances in which multiple particles reach the origin simultaneously, particles in Species 2 are selected to mutate by sampling without replacement. The process $\{X^n(t)\}_{t \ge 0}$ is an example of the general $M$-species model, which was shown in \cite{klobusicky2021two} to belong to the class of piecewise deterministic Markov processes (PDMPs). The stochastic process induces a filtered probability space $(\Omega, \mathcal F, \mathbb P_n, \{\mathcal F(t)\}_{t\ge 0})$, where $\mathcal F$ is the natural filtration. Davis \cite{dav84} established that PDMPs are in fact strong Markov, which we will use in Section \ref{part} when considering events before and after certain mutation times.
At time $t \ge 0$, denote the number of particles in Species $j$ as $N_j^n(t)$, and the total number as $N^n(t) = N_1^n(t)+N_2^n(t)$. Empirical densities for species with $n$ initial particles are then defined as \begin{equation} \mu^n_j(t)= \frac 1n \sum_{i= 1}^{N^n(t)} \delta(x_i)\cdot \mathbf 1(s_i = j), \quad \quad j = 1,2. \end{equation}
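For readers who prefer pseudocode, the jump-and-drift dynamics above can be summarized in a short event-driven loop. The NumPy sketch below is purely illustrative (the function name and interface are ours, and it is not the code used for the simulations in \cite{klobusicky2021two}); it advances one path of $X^n$ by alternating deterministic drift with uniform mutations, and it handles simultaneous arrivals at the origin by repeated zero-length steps, which amounts to sampling without replacement.
\begin{verbatim}
import numpy as np

def simulate_two_species(x0, s0, t_max, rng=None):
    # x0: initial positions, s0: species labels (1 drifts, 2 is frozen)
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    s = np.asarray(s0, dtype=int).copy()
    t = 0.0
    while t < t_max:
        sp1 = np.flatnonzero(s == 1)
        if sp1.size == 0:
            break
        tau = x[sp1].min()                  # time until the next arrival at the origin
        if t + tau >= t_max:                # drift up to t_max and stop
            x[s == 1] -= t_max - t
            return x, s, t_max
        x[s == 1] -= tau                    # deterministic drift between jumps
        t += tau
        i_min = np.flatnonzero((s == 1) & np.isclose(x, 0.0))[0]
        x = np.delete(x, i_min)             # remove the particle at the origin
        s = np.delete(s, i_min)
        sp2 = np.flatnonzero(s == 2)
        if sp2.size == 0:                   # cemetery state: nothing left to mutate
            return x, s, t
        s[rng.choice(sp2)] = 1              # uniform mutation: Species 2 -> Species 1
    return x, s, t
\end{verbatim}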
The differential form of the limiting equations in the infinite particle limit $n\rightarrow \infty$ is given by equations (\ref{kineq1})-(\ref{inits}), in which for each species $j$ the limiting measure $\mu_j = \lim_{n\rightarrow \infty}\mu_j^n$ is deterministic with density $f_j(x,t).$ We will require that $N_1(0)+N_2(0) = 1$ and $N_2(0)>0$. The left hand sides of (\ref{kineq1}) and (\ref{kineq2}) represent the constant drift of Species 1 toward the origin and zero drift in Species 2. The right hand sides give the intrinsic flux arising from mutations selected from the normalized density of $f_2$, occurring at a frequency of
$f_1(0,t)$. To allow for nondifferentiable initial data and solutions, we will use
the integral form (\ref{kinint1})-(\ref{kinint2}) with initial data $(\bar f_1, \bar f_2) \in Z^2$,
where $Z$ denotes the cone of positive, continuous, and locally bounded functions under the $L^1(\mathbb R_+)$ norm topology. Equations (\ref{kinint1})-(\ref{kinint2}) reach a singularity when $N_2 = 0$, corresponding to when there are no more Species 2 particles to mutate. This occurs at time \be
T(\bar f) = \sup\{t>0: N_2(t)>0\}. \ee
The derivation of an explicit solution of (\ref{kinint1})-(\ref{kinint2}) first relies on explicitly solving for the removal rate, which we write as $a(t) = f_1(0,t)$, and subsequently the total loss \begin{equation} L(t) = \int_0^ta(s)ds, \end{equation}which may be interpreted as an ``internal clock" to the system, counting normalized total visits to the origin or, equivalently, total mutations.
\begin{theorem}\label{solution} Let $\bar f = (\bar f_1, \bar f_2) \in Z^2 $ with $N_2(0)>0$. Then the following hold.
(a) The removal rate $a(t) = f_1(0,t)$ and the total number $N_2(t)$ may be written in terms of the initial conditions as \begin{align}\label{aexp} a(t) &= \sum_{j = 0}^\infty \left(\frac{\bar f_2}{N_2(0)}\right)^{*(j)}(t)*\bar f_1(t),\\ N_2(t) &= N_2(0)-L(t).\label{nform} \end{align} Here, an exponent of $*(j)$ denotes $j$-fold self convolution (with the convention $f^{*(0)}*g = g$). For $0\le t<T(\bar f)$, the solution $(f_1(x,t), f_2(x,t)) \in Z^2$ of (\ref{kinint1})-(\ref{kinint2}) is unique and has the explicit form\begin{align} f_1(x,t)&= \bar f_1(x+t)+\int_0^t\frac{\bar f_2(x+t-s)}{N_2(0)}a(s)ds,\label{f1exp}\\ f_2(x,t) &= \frac{N_2(t)}{N_2(0)}\bar f_2(x). \label{f2exp} \end{align}
(b) For $\bar f_1, \bar f_2 \in Z$ and $0\le t < T(\bar f)$, (\ref{aexp})-(\ref{f2exp}) defines a continuous dynamical system in $Z^2$, so that the map $(\bar f_1,\bar f_2,t)\mapsto (f_1(\cdot,t),f_2(\cdot ,t))$ is in $C(Z^2\times [0,T(\bar f)], Z^2)$.
\end{theorem}
The proof for Theorem \ref{solution} is postponed until Section \ref{prooftheo1}, as neither the explicit solution nor its derivation will be used in future results. The well-posedness of (\ref{kinint1})-(\ref{kinint2}) is invoked for defining constants in the convergence analysis of Sections \ref{disc} and \ref{part}. Note, however, that the particle system in this paper is a special case of the $M$-species model developed in \cite{klobusicky2021two}, in which well-posedness is derived through a Banach fixed point argument, rather than appealing to an explicit solution.
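As a quick illustration of Theorem \ref{solution} (this example is ours and is not used in the sequel), suppose the normalized Species 2 profile is a unit exponential, $\bar f_2(x)/N_2(0) = e^{-x}$. Then $(e^{-\cdot})^{*(j)}(t) = t^{j-1}e^{-t}/(j-1)!$ for $j \ge 1$, so the renewal series appearing in (\ref{aexp}) sums to one,
\begin{equation*}
\sum_{j\ge 1}\left(\frac{\bar f_2}{N_2(0)}\right)^{*(j)}(t) = e^{-t}\sum_{j \ge 1}\frac{t^{j-1}}{(j-1)!} = 1,
\end{equation*}
and the removal rate reduces to $a(t) = \bar f_1(t)+\int_0^t \bar f_1(s)ds$, with $N_2(t) = N_2(0)-\int_0^t a(s)ds$ by (\ref{nform}). (The later concentration results assume compact support, but Theorem \ref{solution} only requires $\bar f \in Z^2$.)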
\subsection{Discretization scheme of kinetic equations}
\label{discdetoutline} To enable us to write down recurrence inequalities involving total numbers restricted to an interval, we will
construct a deterministic scheme for (\ref{kinint1})-(\ref{kinint2}). We do so with measures $\tilde \mu_1(t, \cdot;\delta),\tilde \mu_2(t, \cdot;\delta) \in \mathcal{M}(\mathbb R_+)$ at time $t>0$ which are piecewise constant in $\delta>0$ sized time intervals $\Delta t_k = [\delta (k-1), \delta k)$ for $k \ge 1$. Note that while these measures depend on $\delta$, we will often suppress this argument in the notation for simplicity in presentation.
Initial measures are given by \begin{equation} \tilde \mu_j(t,\cdot) = \bar \mu_j, \quad t \in [0, \delta), \quad j = 1,2, \label{discdetinit} \end{equation} with the requirement that $(\bar \mu_1+\bar \mu_2)([0, \infty)) = 1$. For each time step $t_k = k\delta$, we define the incremental loss over a time interval as \begin{equation}\label{diffl} \Delta \tilde L(t_k) =\begin{cases}0, & k = 0, \\ \tilde \mu_1(t_{k-1},[0,\delta)), & k \ge 1. \\ \end{cases} \quad \end{equation} Total number for Species 2 then decreases by the incremental loss, with
\begin{equation} \tilde N_2(t_k) =\begin{cases} \tilde \mu_2(0, [0, \infty)), & k = 0, \\
\tilde N_2(t_{k-1})-\Delta\tilde L(t_{k}), & k \ge 1. \\ \end{cases} \quad \end{equation} Measures update by a shift of distance $\delta$ toward the origin in Species 1 followed by mutation in Species 2 of total number $\Delta\tilde L(t_{k})$. Therefore, for $t \in [t_{k},t_{k+1})$, $k \ge 1$, we update with \begin{align} \tilde \mu_1(t) &= S_{\delta}(\tilde \mu_1(t_{k-1}))+\frac{\Delta\tilde L(t_{k})}{\tilde N_2(t_{k-1})}\tilde \mu_2(t_{k-1}),\label{updatediscneg1}\\
\label{updatedisc0}
\end{align} Here $S_h$ is the left translation operator acting on measures, defined through the cumulative function $F_\mu$ of a measure $\mu \in \mathcal M(\mathbb R_+)$ by \begin{equation} S_h (F_\mu(x)) = F_\mu(x+h)-F_\mu(h), \quad h \ge 0 . \end{equation} Since Species 1 shifts before adding mutated particles from Species 2, we have a conserved quantity $\tilde N_1(t) = \tilde N_1(0)$, and thus the total number \begin{equation} \tilde N(t_k):= \tilde N_1(t_k)+\tilde N_2(t_k) = 1-\sum_{i = 1}^{k-1}\Delta\tilde
L(t_i). \end{equation}This scheme remains well-defined as long as $ \tilde N_2(t)>0$.
It is clear that \begin{equation} \tilde N_2(t) \ge \tilde N_2(0)-(\bar \mu_1([0,t])+\bar \mu_2[0,t]), \label{n2tildefinite} \end{equation} so for initial measures in $Z$ we are always able to find a nonzero length interval of existence $[0,T_1(\bar f))$, with \begin{equation} T_1(\bar f) = \sup\{t:\tilde N_2(t)>\tilde N_2(0)/2\}. \end{equation}
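On a grid of cells $I_l$ the scheme is straightforward to implement; the following NumPy sketch (ours, for illustration only) stores each measure as a vector of cell masses and applies (\ref{diffl})-(\ref{updatedisc0}) step by step. Since the time step equals the cell width $\delta$, the step size does not appear explicitly.
\begin{verbatim}
import numpy as np

def discretize(mu1_cells, mu2_cells, n_steps):
    # mu1_cells[l-1], mu2_cells[l-1]: initial masses of Species 1 and 2 in I_l
    mu1 = np.asarray(mu1_cells, dtype=float).copy()
    mu2 = np.asarray(mu2_cells, dtype=float).copy()
    N2_0 = mu2.sum()
    N2 = N2_0
    states = [(mu1.copy(), mu2.copy())]
    for _ in range(n_steps):
        dL = mu1[0]                        # incremental loss: Species 1 mass in [0, delta)
        mu1 = np.append(mu1[1:], 0.0)      # shift Species 1 one cell toward the origin
        mu1 += (dL / N2) * mu2             # mutated mass enters Species 1 ...
        mu2 *= 1.0 - dL / N2               # ... and leaves Species 2 proportionally
        N2 -= dL
        states.append((mu1.copy(), mu2.copy()))
        if N2 <= N2_0 / 2:                 # stay inside the interval of existence [0, T_1)
            break
    return states
\end{verbatim}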
\subsection{Main results: convergence rates and concentration inequalities}
Our first main result gives a comparison between the deterministic discretization and solutions of (\ref{kinint1})-(\ref{kinint2}) using the Kolmogorov-Smirnov (KS) metric. For two measures $\nu, \eta \in \mathcal M(\mathbb R_+), $ with associated cumulative functions $F_{\nu}(x) = \nu([0,x])$ and $F_{\eta}(x) = \eta([0,x])$ the KS metric is defined as \begin{equation}
d_{KS}(\nu, \eta) = \sup_{x \in \mathbb R}|F_{\nu}(x)-F_{\eta}(x)|. \end{equation} For handling convergence of both species, we define a metric on $\mathcal M(\mathbb R_+) \times \mathcal M(\mathbb R_+)$ between $\nu = (\nu_1, \nu_2)$ and $\eta = (\eta_1,\eta_2)$ by \begin{equation} d( \nu,\eta) =d_{KS}(\nu_1, \eta_1) + d_{KS}(\nu_2, \eta_2). \label{ourmetric} \end{equation} As we are often working with measures, we define measures associated to solutions of (\ref{kinint1})-(\ref{kinint2}) by \begin{equation} \mu_j(t,dx) = f_j(x,t)dx \quad j = 1,2. \end{equation} We will also need to track the modulus of continuity for solutions. For densities $(f_1(x,t), f_2(x,t))$, we let \begin{equation}\label{moc}
\omega(\delta, t) = \sup_{x \in \mathbb R_+} \sum_{j = 1}^2|f_j(x+\delta,t)-f_j(x,t)|. \end{equation} We will impose that initial conditions have compact support for the sake of clarity, as calculations relating convergence and the decay of initial conditions can become rather technical. Since initial conditions are continuous, compact support implies that $\omega(\delta, 0) \rightarrow 0$ as $\delta\rightarrow 0$.
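Since the particle measures are purely atomic while the limits have densities, it is convenient to note that the supremum in $d_{KS}$ is attained either at an atom (approached from the left or from the right) or in the right tail. A small helper of the following form (ours, illustrative only) computes the distance exactly under these assumptions.
\begin{verbatim}
import numpy as np

def ks_distance(atoms, weights, F_ref, total_mass_ref):
    # d_KS between the atomic measure sum_k weights[k]*delta_{atoms[k]} (at least
    # one atom assumed) and a measure on [0, infinity) with continuous cumulative
    # function F_ref and total mass total_mass_ref.
    order = np.argsort(atoms)
    x = np.asarray(atoms, dtype=float)[order]
    w = np.asarray(weights, dtype=float)[order]
    F_emp = np.cumsum(w)                   # empirical cumulative function at the atoms
    F_at = np.array([F_ref(xi) for xi in x])
    gap_right = np.abs(F_emp - F_at)       # gap just after each atom
    gap_left = np.abs(np.concatenate(([0.0], F_emp[:-1])) - F_at)  # gap just before
    tail = abs(F_emp[-1] - total_mass_ref)  # gap as x -> infinity
    return max(gap_right.max(), gap_left.max(), tail)
\end{verbatim}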
\begin{theorem} \label{detscheme}
Let $ \bar f = (\bar f_1, \bar f_2) \in Z^2$ have compact support with $N_2(0)>0$. For $t< T(\bar f)$, let $\mu(t) = (\mu_1(t), \mu_2(t))$ be measures for the unique solution to (\ref{kinint1})-(\ref{kinint2}). Let measures $\tilde \mu(t ) = (\tilde \mu_1(t ), \tilde \mu_2(t))$ for the discretization scheme with step size of $\delta>0$ also have initial conditions $\bar f$. Then there exist positive constants $\delta^d$ and $C^d$ such that for all $\delta \in (0, \delta^d)$ and $T' \in [0,T(\bar f))$,
\begin{equation}
\sup_{t \in [0, T']}d(\tilde \mu(t ),\mu(t))\le C^d( \delta+ \omega(\delta,0)). \label{detthm2}
\end{equation}
The constants $C^d$ and $\delta^d$ are dependent on $L^\infty$ and $L^1$ bounds of solutions in Theorem \ref{solution}, and the compact support bound $M = \sup\{x:\sum_{j = 1,2}\bar f_{j}(x) >0\}$. \end{theorem}
To set up for the next main result, we generate initial conditions $\bar \mu^n = (\bar \mu^n_1, \bar \mu^n_2)$ for the particle system with uniform spacing through the cumulative distribution functions $F_j(x) = \bar \mu_j([0,x])$ for $j = 1,2$. The explicit particle positions are \begin{align} ( x_i(0), s_i(0)) = \begin{cases}(F_1^{-1}(i/n),1) & i = 1, \dots, \lfloor nN_1(0) \rfloor, \\ (F_2^{-1}((i- nN_1(0))/n),2) & i = \lfloor nN_1(0) \rfloor+1, \dots, n, \\ \end{cases} \label{stochassign} \end{align} where $F^{-1}$ is the quantile function of a cumulative function $F$. For initial conditions $(\bar f_1, \bar f_2)\in Z^2$ with $\bar \mu_j(dx) = \bar f_jdx$, it is easy to show that $d(\bar \mu^n, \bar \mu) \rightarrow 0$ as $n\rightarrow \infty$, and that there exists a positive integer $n_{0}(\bar f)$ such that if $n> n_{0}(\bar f)$, \begin{equation} \bar \mu_1^n([0,T_1(\bar f)])+\bar \mu_2^n([0,T_1(\bar f)]) \le 2N_2^n(0)/3 \quad \hbox{ for } n>n_{0}(\bar f), \label{nfcond} \end{equation} meaning that there is always a particle available to mutate at each jumping time in $[0, T_1(\bar f)]$, and the process therefore does not reach its cemetery state.
The next theorem, which we show in Section \ref{part},
gives an exponential concentration inequality between the deterministic discretization and the particle system.
\begin{theorem} \label{finalh}
Let $ \bar f = (\bar f_1, \bar f_2) \in Z^2$ have compact support with $N_2(0)>0$. For $t< T(\bar f)$, let $\mu^{n}(t) = (\mu_1^{n}(t), \mu_2^{n}(t))$ be empirical measures for $X^n(t)$ generated from (\ref{stochassign}) with $\bar f \in Z^2$. Let measures $\tilde \mu(t ) = (\tilde \mu_1(t ), \tilde \mu_2(t))$ for the discretization scheme with step size of $\delta>0$ have initial conditions $\bar f$. Then there exist positive constants $\delta^p, C^p, C^p_2,C_3^p$ such that for all $\delta\in (0,\delta^p)$ there exists $n^p(\delta)>0$ such that for all integers $n>n^p(\delta)$ and $T'<T(\bar f),$ \begin{equation} \mathbb P_n\left(\sup_{t\le T'}d(\tilde \mu(t ),\mu^n(t))\ge C^{p}(\delta+\omega(\delta,0))\right) \le \frac {C^{p}_2} {\delta^2}\exp(-C^{p}_3\delta^5n). \label{detthm3} \end{equation} The constants $\delta^p, C^p, C^p_2,C_3^p$ are all dependent on $L^\infty$ and $L^1$ bounds of solutions in Theorem \ref{solution} with initial conditions of $\bar f = (\bar f_1, \bar f_2)$, and the compact support bound $M = \sup\{x:\sum_{j = 1,2}\bar f_{j}(x) >0\}$. \end{theorem}
From Theorems \ref{detscheme} and \ref{finalh}, it is straightforward to obtain our main concentration inequality for initial conditions which are also locally Lipschitz.
\begin{theorem} \label{majorthm} Let $\bar f = (\bar f_1, \bar f_2) \in Z^2$ with $N_2(0)>0$ be both compactly supported and locally Lipschitz, so that $\omega(\delta,0) \le C^{\omega} \delta$. For $C^p$ and $ \delta^p$ determined from Theorem \ref{finalh}, let $\varepsilon \in (0,2C^pC^{\omega} \delta^p)$. Then there exist positive constants $C, \tilde C$ and $N(\varepsilon)>0$ such that for all integers $n>N(\varepsilon)$ and $T'<T(\bar f),$ \begin{align} &\mathbb P_n\left(\sup_{t\le T'}d(\mu(t), \mu^n(t) )\ge \varepsilon\right) \le \frac {C} {\varepsilon^2}\exp(-\tilde C\varepsilon^5n). \label{mainconc} \end{align}
\end{theorem}
\begin{proof} Since initial conditions are locally Lipschitz and compactly supported, we may replace the $C^d( \delta+ \omega(\delta,0))$ and $C^p( \delta+ \omega(\delta,0))$ terms in Theorems \ref{detscheme} and \ref{finalh} with $\hat C^d \delta$ and $\hat C^p \delta$, respectively, where $\hat C^d = C^dC^\omega$ and $\hat C^p = C^pC^\omega$.
Let $\varepsilon\in (0,2\hat C^p\delta_p)$ and choose $\delta = \varepsilon /( 2\hat C^d)$. We then have \begin{align} &\mathbb P\left(\sup_{t\le T'}d(\mu(t), \mu^n(t)) \ge \varepsilon\right) \le \mathbb P\left(\sup_{t\le T'}d(\tilde \mu(t), \mu^n(t))+d(\mu(t), \tilde \mu(t)) \ge \varepsilon\right)\\ & \le \mathbb P\left(\sup_{t\le T'}d(\tilde \mu(t), \mu^n(t))\ge \varepsilon-\hat C^d\delta\right) = \mathbb P\left(\sup_{t\le T'}d(\tilde \mu(t), \mu^n(t))\ge \varepsilon/2\right) \nonumber\\ &\le \frac {4(\hat C^p)^2C_{2}^p} {\varepsilon^2}\exp\left(-\frac{C_{3}^p}{32(\hat C^p)^5}\varepsilon^5n\right).\nonumber \end{align} The second inequality uses Theorem \ref{detscheme}, and the third uses Theorem \ref{finalh}. We obtain (\ref{mainconc}) with $C = 4(\hat C^p)^2C_{2}^p$ and $\tilde C = C_{3}^p/(32(\hat C^p)^5)$. \end{proof} From Theorem \ref{majorthm}, an application of the Borel-Cantelli lemma then gives us a strong law of large numbers.
\begin{cor} Under the product measure $\mathbb Q = \prod_{n\ge 2} \mathbb P_n$, for $T' < T(\bar f)$, \begin{equation}
\lim_{n\rightarrow \infty} \sup_{t \le T'}d( \mu^{n}(t), \mu(t)) = 0 \quad \hbox{ almost surely.} \end{equation} \end{cor}
\section{Comparison of kinetic equations and deterministic scheme}\label{disc}
In this section, we present a proof of Theorem \ref{detscheme}. The discretization scheme outlined in Section \ref{discdetoutline} allows us to write recursive formulas at time steps $t_k$ related to measures restricted to size $\delta$ intervals, which we denote as \begin{equation} I_l = [(l-1)\delta, l\delta), \quad l \ge 1. \end{equation}In Section \ref{regdisc} we collect estimates related to growth of quantities for solutions of the kinetic equations and the discretization scheme, including the modulus of continuity $\omega(\delta, t)$ and total number contained in an interval. Estimates related to the comparison between solutions of the kinetic equations and the discretization are presented in Section \ref{convest}. The main quantity of interest is the difference of total number in intervals $I_l$ at times $t_k$. Through constructing a closed recurrence inequality, we show differences are of order $\delta^2+\delta \omega(\delta, 0)$. In Section \ref{poofdettheo}, these differences are then summed over $[0,M]$ to establish Theorem \ref{detscheme}.
\subsection{Growth estimates}\label{regdisc}
Our estimates for solutions of (\ref{kinint1})-(\ref{kinint2}) and iterations (\ref{updatediscneg1})-(\ref{updatedisc0}) will make frequent use of the constant bounds\begin{align}
C_{\infty}( \bar f) &= \max_{j = 1,2}\sup_{s \in [0,T_1(\bar f)]} \|f_j(x,s)\|_{\infty}, \label{cinf}\\ C_b( \bar f)&=\max \{1/N_2(T_1(\bar f)), 1/\tilde N_2(T_1(\bar f))\}, \label{cb} \end{align} which are dependent upon the initial conditions $\bar f = (\bar f_1, \bar f_2)$ and time of existence $T(\bar f)$. That $C_{\infty}( \bar f), C_b(\bar f)$ are finite follows from well-posedness of $(f_1(x,t), f_2(x,t))$ in Theorem \ref{solution} and the existence of $T_1(\bar f)$ established from (\ref{n2tildefinite}). For simplicity, in future estimates we will refer to these constants simply as $C_\infty$ and $C_b$. As we will see in Lemma \ref{mbounds}, it will be necessary to further restrict our interval of existence to $t \in [0, T_2(\bar f)])$, with \begin{equation} T_2(\bar f) = T_1(\bar f) \wedge 1/(8C_\infty C_b). \end{equation} We will work with solutions $(f_1(x,t), f_2(x,t))$ of (\ref{kinint1})-(\ref{kinint2}) having initial conditions $\bar f_1,\bar f_2 \in Z$ with compact support in some interval $[0,M] \subset \mathbb R_+$. For the deterministic scheme, measures
have identical initial conditions as the kinetic equations, with $\bar \mu_j(dx) = \bar f_j(x)dx$.
We begin with a simple estimate on the propagation of the modulus of continuity.
\begin{lemma}\label{moclem} For all $\delta>0$ and $t \le T_2(\bar f)$, \begin{align} \omega(\delta, t) \le C_1\omega(\delta, 0), \label{moc1} \end{align} where $C_1 = 2\exp(2C_b).$ \end{lemma} \begin{proof} Since $N_2(t)$ is decreasing, we use (\ref{kinint1})-(\ref{kinint2}) to show \begin{align}
&\sum_{j = 1}^2|f_j(x+\delta,t)-f_j(x,t)|\label{mocinq}\\&\le|f_1(x+\delta+t,0)-f_1(x+t,0)|+|f_2(x+\delta,0)-f_2(x,0)|\nonumber\\
&+C_b \int_0^t|f_2(x+\delta+t-s,s)-f_2(x+t-s,s)|+|f_2(x+\delta,s)-f_2(x,s)|dL(s),\nonumber \end{align} where $L(s) = \int_0^s f_1(0,r)dr$ is the total loss. By taking the supremum of the above inequality over $x\in \mathbb R_+$, from the definition of $\omega(\delta, t)$ given in (\ref{moc}),\begin{align} \omega(\delta,t) \le 2\omega(\delta,0)+ 2C_b\int_0^t\omega(\delta,s) dL(s). \end{align} From Gronwall's inequality, \begin{equation} \omega(\delta,t) \le2\exp(2C_bL(t))\omega(\delta,0). \end{equation} Since $L(t) \le 1$, we obtain (\ref{moc1}).
Next, we turn to studying the maximum total number of a measure on length $\delta$ intervals, denoted as \begin{equation}
m^j_\delta(t) = \sup_{\substack{I:|I|= \delta}} \mu_j(t,I), \quad j= 1,2, \qquad
m_\delta(t) =\sum_{j = 1}^2 m^j_\delta(t). \label{mdef} \end{equation}
We define $\tilde m_\delta(t)$ similarly.
\begin{lemma}\label{mbounds}For $\delta>0$ and $t<T_2(\bar f)$, \begin{align}
m_\delta(t)&\le C_2\delta ,\label{mlem1}\\ \tilde m_\delta(t_k)&\le \tilde C_2 \delta, \label{mlem2}\\ \tilde N(t_k)&\ge 1-\tilde C_2 \delta k . \label{mlem3} \end{align} With constants \begin{equation} C_2 =2C_\infty \exp(C_b), \quad \tilde C_2 = \frac{8C_\infty^2}{1- 4C_bC_\infty T_2(\bar f)}. \end{equation} \end{lemma} \begin{proof}
To show (\ref{mlem1}), for an interval $I$ with $|I| \le \delta$, integrate
(\ref{kinint1})-(\ref{kinint2}) over $I$ to obtain \begin{align} \mu_1(t,I)&= \mu_1(0,I+t) +\int_{0}^{t} \frac{\mu_2(s,I+t-s)}{N_2(s)} dL(s),\\ \mu_2(t,I)&= \mu_2(0,I)-\int_{0}^{t} \frac{\mu_2(s,I)}{N_2(s)} dL(s) \end{align} By taking the supremum over all length $\delta$ intervals, we find\begin{align} m_\delta^1(t) &\le m_\delta^1(0)+C_b\int_0^t m_\delta^2(s) dL(s), \label{m1less}\\ m_\delta^2(t) &\le m_\delta^2(0).\label{m2less} \end{align} We then obtain (\ref{mlem1}) by summing (\ref{m1less})-(\ref{m2less}), applying Gronwall's lemma, and observing from initial conditions that $m_\delta(0)
\le 2C_\infty \delta$.
To show (\ref{mlem2}), we will work with \begin{equation} \hat m^j_\delta(t) = \sup_{l\ge 1} \tilde \mu_j(t,I_l), \quad j= 1,2, \qquad
\hat m_\delta(t) =\sum_{j = 1}^2 \hat m^j_\delta(t). \end{equation} Note that \begin{equation} \tilde m^j_\delta(t) \le 2\hat m^j_\delta(t). \label{hatvstilde} \end{equation}
From evaluating measures on $I_l$, the recursion (\ref{updatediscneg1})-(\ref{updatedisc0}) implies that for $l \ge 1$ and $s \in [t_k, t_{k+1})$, \begin{align} \tilde\mu_1(s,I_l) &= \tilde\mu_1(t_{k-1},I_{l+1})+\frac{\Delta\tilde L(t_{k})}{\tilde N_2(t_{k-1})}\tilde\mu_2(t_{k-1},I_{l}), \label{recmu1}\\ \tilde\mu_2(s,I_l) &= \tilde\mu_2(t_{k-1},I_l)\left(1-\frac{\Delta\tilde L(t_{k})}{\tilde N_2(t_{k-1})}\right). \label{recmu2} \end{align} We now take the supremum over $l\ge 1$ in (\ref{recmu1})-(\ref{recmu2}) and sum over the two species to obtain \begin{equation}\label{miter} \hat m_\delta(t_k)\le \hat m_\delta(t_{k-1})+C_b\left(\hat m_\delta(t_{k-1})\right)^2. \end{equation}
For simplicity, we rescale by writing $\hat m_\delta(t_k) = A(t_k)\delta$. From (\ref{miter}), and noting \begin{equation}
\hat
m_\delta(0) \le 2\tilde m_\delta(0) = 2m_\delta(0) \le 4C_\infty \delta,
\end{equation}
we obtain the recurrence inequalities \begin{align}\label{eschemea}
A(t_k)&\le A(t_{k-1})+\delta C_b(A(t_{k-1}))^2 , \\
A(0) &\le 4C_\infty. \label{eschemeb} \end{align} In the case of equality, (\ref{eschemea})-(\ref{eschemeb}) is an Euler scheme for the differential equation $g'(t) = C_b g^2(t)$ with $g(0) = C_\infty.$ We may check directly that $A(t_k) \le g(t_k)$ before blowup, or \begin{equation} A(t_k) < \frac{A(0)}{1-kA(0) C_b\delta} \label{aineq} \end{equation} for $t_k \le T_2(\bar f)$. We then use (\ref{aineq}), (\ref{eschemeb}),
and (\ref{hatvstilde}) to obtain (\ref{mlem2}).
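For completeness, the inductive step behind (\ref{aineq}) can be checked directly (this verification is ours): writing $a = A(0)$ and $D_k = 1-kaC_b\delta$, if $A(t_{k-1})\le a/D_{k-1}$ then (\ref{eschemea}) gives
\begin{equation*}
A(t_k) \le \frac{a}{D_{k-1}}+\delta C_b\frac{a^2}{D_{k-1}^2} = \frac{a(D_{k-1}+aC_b\delta)}{D_{k-1}^2} \le \frac{a}{D_{k-1}-aC_b\delta} = \frac{a}{D_k},
\end{equation*}
where the last inequality follows from $(D_{k-1}+aC_b\delta)(D_{k-1}-aC_b\delta) = D_{k-1}^2-(aC_b\delta)^2 \le D_{k-1}^2$, valid as long as $D_k>0$.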
Finally, (\ref{mlem3}) follows immediately, since \begin{equation} \tilde N_2(t_k) =1- \sum_{i= 1}^{k} \Delta\tilde L(t_i)\ge 1- \sum_{i= 1}^{k} \tilde m_\delta(t_i). \end{equation} \end{proof}
We finish this subsection with one more estimate related to the incremental loss and total number in the kinetic limit. \begin{lemma} \label{nllem} For $t_k < T_2(\bar f),
$ \begin{align} &\Delta L(t_k): = L(t_k)-L(t_{k-1}) \le C_2 \delta+C_bC_{\infty}^2 \delta ^2, \label{ldiff}\\ &N(t_k)
\ge 1-C_2k\delta -C_bC_{\infty}^2k\delta^2. \label{nbound} \end{align} \end{lemma} \begin{proof} We use (\ref{kinint1}) on the removal rate $f_1(0,t)$ to obtain \begin{align} \Delta L(t_k) &= \int_{0}^\delta f_1(0,t_{k-1}+s)ds\\
&= \int_0^\delta \left(f_1(s,t_{k-1})+ \int_0^s \frac{f_1(0,t_{k-1}+r)}{N_2(t_{k-1}+r)} f_2(s-r,t_{k-1}+r)dr\right)ds\nonumber \\ &\le \mu_1(t_{k-1},[0,\delta])+C_bC_{\infty}^2\delta^2 .\nonumber \end{align} From (\ref{mlem1}), we obtain (\ref{ldiff}). Since $ N(t_k) = 1-\sum_{i = 1}^{k} \Delta L(t_i),$ (\ref{nbound}) also follows. \end{proof}
\subsection{Convergence estimates} \label{convest}
We now use estimates from the previous subsection to establish asymptotics for the differences of total number between the solution of (\ref{kineq1})-(\ref{inits}) and its discretization. We begin with a simple result which follows immediately from Lemma \ref{nllem} comparing incremental losses and total numbers of species. \begin{cor} \label{difflncor} For $t_k < T_2(\bar f), $ \begin{align}
|\Delta L(t_k)-\Delta \tilde L(t_k)| &\le |\mu_1(t_{k-1},[0,\delta])-\tilde
\mu_1(t_{k-1},[0,\delta])|+C_bC_{\infty}^2\delta^2. \\
|N(t_k)- \tilde N(t_k)| &\le \sum_{i = 1}^{k}|\mu_1(t_{i},[0,\delta])-\tilde
\mu_1(t_{i},[0,\delta])|+T(\bar f)C_bC_{\infty}^2\delta. \label{diffncorr} \end{align} \end{cor}
To compare behavior on an interval $I_j$, we will use a formula similar to
(\ref{recmu1})-(\ref{recmu2}) for evolving densities over a time step $\Delta t_k = [t_{k-1}, t_k)$. It follows directly from (\ref{kinint1})-(\ref{kinint2}) that
\begin{align} f_1(x,t_k)&= f_1(x+\delta, t_{k-1})+\int_{\Delta t_k}\frac{ f_2(x+t_k-s,s)}{N_2(s)}f_1(0,s)ds,\label{maineqn1}\\
f_2(x,t_k) &=f_2(x,t_{k-1})- \int_{\Delta t_k} \frac{f_2(x,s)}{N_2(s)} f_1(0,s)ds.\label{maineqn2} \end{align}To arrive at an estimate for the difference of total number in an interval, we use (\ref{updatediscneg1})-(\ref{updatedisc0}) and (\ref{maineqn1})-(\ref{maineqn2}) to express the difference of total number of an interval in Species 1 as \begin{align}\label{fdiff}
&|\mu_1(t_k,I_l)-\tilde \mu_1(t_k,I_l)|\le |\mu_1(t_{k-1},I_{l+1})-\tilde
\mu_1(t_{k-1},I_{l+1})|\\ &+\left|\int_{I_l}\int_{\Delta t_{k}} \frac{f_2(x+t_{k}-s,s)}{N_2(s)} dL(s)dx-\frac{\Delta \tilde L(t_{k})}{\tilde N_2(t_{k-1})} \tilde
\mu_2(t_{k-1},I_{l})\right|.\nonumber \end{align} For Species 2, \begin{align}
|\mu_2(t_k,I_l)-\tilde \mu_2(t_k,I_l)|\le |\mu_2(t_{k-1},I_{l})-\tilde \mu_2(t_{k-1},I_{l})|\label{f2diff2}\\
+\left|\int_{I_l}\int_{\Delta t_{k}} \frac{f_2(x,s)}{N_2(s)} dL(s)dx-\frac{\Delta \tilde L(t_{k})}{\tilde N_2(t_{k-1})} \tilde
\mu_2(t_{k-1},I_{l})\right|.\nonumber \end{align} The following lemma allows us to estimate the integrals in (\ref{fdiff}) and (\ref{f2diff2}) by quantities at discretized times $t_k$.
\begin{lemma} \label{intfest} For $ l\ge 1$ and $t_k < T_2(\bar f),$
\begin{align}
\left|\int_{I_l}\int_{\Delta t_{k}} \frac{f_2(x+t_{k}-s,s)}{N_2(s)} dL(s)dx-\frac{\Delta L(t_{k})}{ N_2(t_{k-1})}
\mu_2(t_{k-1},I_{l})\right|
&= \mathcal O(\delta^3+\delta^2\omega(\delta,0)). \label{tri02} \\
\left|\int_{I_l}\int_{\Delta t_{k}} \frac{f_2(x,s)}{N_2(s)} dL(s)dx-\frac{\Delta L(t_{k})}{ N_2(t_{k-1})}
\mu_2(t_{k-1},I_{l})\right| &=\mathcal O(\delta^3). \label{tri03} \end{align} \end{lemma} \begin{proof} We show (\ref{tri02}). The proof for (\ref{tri03}) is similar. Denote the left hand side of (\ref{tri02}) as $\mathcal I$. Through the triangle inequality, we have\begin{align}
&\mathcal I = \left|\int_{I_l}\int_{\Delta t_{k}} \frac{f_2(x+t_{k}-s,s)}{N_2(s)} -\frac{f_2(x,t_{k-1})}{N_2(t_{k-1})}
dL(s)dx\right| \label{tri0}\\
&\le\int_{I_l} \int_{\Delta t_{k}}\left| \frac{f_2(x+t_{k}-s,s)-f_2(x,t_{k-1})}{N_2(s)}
\right|dL(s)dx\nonumber\\
&+\int_{I_l} \int_{\Delta t_{k}}\left| f_2(x,t_{k-1})\left(\frac{1}{N_2(s)}-\frac{1}{N_2(t_{k-1})}
\right)\right|dL(s)dx\nonumber\\ &:= \mathcal I_1+\mathcal I_2.\nonumber \end{align}
We first show that \begin{equation} \mathcal I_1 \le C_b^2C_{\infty}^2C_2\delta^3+C_1C_2C_b\omega(\delta,0)\delta^2 +\mathcal O(\delta^4+\omega(\delta,0)\delta^3 ). \label{int1est} \end{equation}
This is done by another use of the triangle inequality to align arguments in time and space, \begin{align} \mathcal I_1
\le C_b \Big(\int_{\Delta t_{k}}\int_{I_l}|f_2(x+t_{k}-s,t_{k-1})-f_2(x,t_{k-1})|dxdL(s) \label{int11}\\
+\int_{I_l}\int_{\Delta t_{k}}|f_2(x+t_{k}-s,s)-f_2(x+t_{k}-s,t_{k-1})|dL(s)dx.\Big) \nonumber \end{align} From (\ref{moc1}),(\ref{mlem1}) and (\ref{ldiff}), the first integral may be bounded by \begin{align}
\int_{\Delta t_{k}}\int_{I_l}|f_2(x+t_{k}-s,t_{k-1})-f_2(x,t_{k-1})|dx dL(s)\label{flem1}\\ \le C_1C_2\omega(\delta,0)\delta^2+\mathcal O(\omega(\delta,0)\delta^3).\nonumber \end{align} For the last integrand for (\ref{int11}), we may use (\ref{kinint2}) to obtain \begin{align}
&|f_2(x+t_{k}-s,s)- f_2(x+t_{k}-s,t_{k-1})|\label{flem2}\\&= \int_{t_{k-1}}^{s} \frac{f_2(x+t_{k}-s,r)}{N_2(r)} f_1(0,r)dr\le C_bC_{\infty}^2\delta. \nonumber \end{align} Substituting (\ref{flem1}) and (\ref{flem2}) into (\ref{int11}) then gives (\ref{int1est}).
By a similar calculation we may use (\ref{ldiff}) to obtain \begin{align} \mathcal I_2 &\le C_\infty C_b^2 \delta (\Delta L(t_k))^2 \le C_\infty C_b^2 C_2^2\delta^3+\mathcal O(\delta^4). \end{align} Finally, (\ref{tri02}) then comes from collecting estimates for $\mathcal I = \mathcal I_1+\mathcal I_2$. \end{proof}
We are now ready to compare total numbers over length $\delta$ intervals by defining \begin{equation}
h_\delta^j(t_k) = \sup_{l \ge 1} |\mu_j(t_k,I_l)- \tilde
\mu_j(t_k,I_l)|, \quad h_\delta(t_k) = \sum_{j = 1}^2 h_\delta^j(t_k) . \label{hdef} \end{equation}
Our next lemma gives a closed recurrence inequality for $h_\delta(t_k)$.
\begin{lemma} There exists a $C_3(\bar f)$ dependent on initial conditions such that
for $t_k < T_2(\bar f)$, $h_\delta(t_k)$ satisfies the recurrence inequality \begin{align} h_\delta(t_k) \le h_\delta(t_{k-1})+ C_3\left(\delta^2 \sum_{i = 1}^{k-1} h_\delta(t_i) +\delta h_\delta(t_{k-1})+\omega(\delta,0)\delta^2+\delta^3 \right). \label{hrec} \end{align} \end{lemma}
\begin{proof} From Lemma \ref{intfest} and (\ref{fdiff}), for $l \ge 1,$ \begin{align}
&|\mu_1(t_k,I_l)-\tilde \mu_1(t_k,I_l)|\label{fl1diff}\\
&\le |\mu_1(t_{k-1},I_{l+1})-\tilde
\mu_1(t_{k-1},I_{l+1})|\nonumber\\&+\left|\frac{\Delta L(t_{k})}{N_2(t_{k-1})}\mu_2(t_{k-1},I_{l})-\frac{\Delta\tilde
L(t_{k})}{\tilde N_2(t_{k-1})}\tilde \mu_2(t_{k-1},I_{l})\right|\nonumber+\mathcal O(\delta^3+\delta^2\omega(\delta,0)). \nonumber \end{align} Two applications of the triangle inequality yield\begin{flalign}
&\left|\frac{\Delta L(t_{k})}{N_2(t_{k-1})}\mu_2(t_{k-1},I_{l})-\frac{\Delta\tilde L(t_{k})}{\tilde N_2(t_{k-1})}\tilde\mu_2(t_{k-1},I_{l})\right| \label{tri1}\\
&\le \left| \left(\frac 1{N_2(t_{k-1})}-\frac 1{\tilde N_2(t_{k-1})}\right)\Delta L(t_{k})\mu_2(t_{k-1},I_{l})\right|\nonumber\\
&+\left| \frac 1{\tilde N_2(t_{k-1})}\mu_2(t_{k-1},I_l)(\Delta L(t_{k})-\Delta
\tilde L(t_{k}))\right|\nonumber\\&+\left|\frac {\Delta \tilde L(t_{k})}{\tilde N_2(t_{k-1})}(\mu_2(t_{k-1},I_{l})-\tilde
\mu_2(t_{k-1},I_{l}))\right |. \nonumber \end{flalign} We use Lemmas \ref{mbounds} and \ref{nllem}, (\ref{fl1diff}), and (\ref{tri1}) to obtain \begin{align}
&|\mu_1(t_k,I_l)-\tilde \mu_1(t_k,I_l)| \label{mu1ugly}\\
&\le |\mu_1(t_{k-1},I_{l+1})-\tilde
\mu_1(t_{k-1},I_{l+1})|\nonumber\\&+C_b^2C_2^2\delta ^2|N_2(t_{k-1})-\tilde N_2(t_{k-1})|
+C_bC_2\delta|\Delta L(t_{k})-\Delta \tilde L(t_{k})|\nonumber\\&+ C_bC_2\delta|\mu_2(t_{k-1},I_{l})-\tilde
\mu_2(t_{k-1},I_{l})|\nonumber+\mathcal O(\delta^3+\delta^2\omega(\delta,0)).
\nonumber \end{align} Similar bounds hold for Species 2, with \begin{align}
&|\mu_2(t_k,I_l)-\tilde \mu_2(t_k,I_l)| \label{mu2ugly}\\
&\le |\mu_2(t_{k-1},I_{l})-\tilde
\mu_2(t_{k-1},I_{l})|\nonumber\\&+C_b^2C_2^2\delta ^2|N_2(t_{k-1})-\tilde N_2(t_{k-1})|
+C_bC_2\delta|\Delta L(t_{k})-\Delta \tilde L(t_{k})|\nonumber\\&+ C_bC_2\delta|\mu_2(t_{k-1},I_{l})-\tilde
\mu_2(t_{k-1},I_{l})|\nonumber+\mathcal O(\delta^3+\delta^2\omega(\delta,0)).
\nonumber \end{align} We complete the proof by taking the supremum over $l$ for (\ref{mu1ugly}) and (\ref{mu2ugly}), using (\ref{diffncorr}), and then adding to show that for some $C_3$ which depends on $(\bar f_1, \bar f_2)$,
\begin{align}\label{hrecur} &h_\delta(t_k) \le h_\delta(t_{k-1})+C_3\left(\delta^2 \sum_{i = 1}^{k-1} h_\delta(t_i)
+\delta h_\delta(t_{k-1}) +\omega(\delta,0)\delta^2+\delta^3\right). \end{align}
\end{proof} We now show that the recurrence inequality (\ref{hrec}) implies asymptotics
for $h_\delta(t_k)$. \begin{lemma} \label{hasympt} For $t_k < T_2(\bar f)$, \begin{equation}
h_\delta(t_k)= \mathcal O(\delta^2+\delta \omega(\delta,0)). \label{horder} \end{equation} \end{lemma} \begin{proof}
Let $b_\delta(t_k)$ satisfy the recurrence equation \begin{align} b_\delta(t_k) = b_\delta(t_{k-1})+ C_3\left(\delta^2 \sum_{i = 1}^{k-1} b_\delta(t_i) +\delta b_\delta(t_{k-1})+\delta \right),\label{brec} \end{align} with initial condition $b_\delta(0) = h_\delta(0) = 0.$ It follows immediately from induction that
$h_\delta(t_k) \le (\delta^2+ \omega(\delta,0)\delta) b_\delta(t_k)$ for all $k\ge 0$. Thus, it is sufficient to show that $b_\delta(t_k) = \mathcal O(1)$ for all $t_k < T_2(\bar f)$. To see that this holds, note that as $\delta \rightarrow 0$, (\ref{brec}) converges to the linear integro-differential equation \begin{align} \tilde b'(t) = C_3\left(\tilde b(t)+\int_0^t\tilde b(s)ds+1 \right),\quad t \in [0, T_2(\bar f)), \label{bdiffy} \end{align} with initial condition $\tilde b(0) = 0$. Differentiating once reduces (\ref{bdiffy}) to a linear second-order equation, which yields the locally bounded solution\begin{equation} \tilde b(t) = A_1e^{r_1t}+A_2e^{r_2t}, \end{equation} where $A_1= -A_2 = C_3/(r_1-r_2),$ and $r_1,r_2$ are the two real solutions of $r^2-C_3r-C_3 = 0$. \end{proof}
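As a purely illustrative aside (not part of the argument), the boundedness of $b_\delta$ can also be checked numerically: the Python sketch below iterates the recurrence (\ref{brec}) for several values of $\delta$ and compares the value at a fixed time with the closed-form solution of (\ref{bdiffy}); the choices $C_3 = 1$ and horizon $T=1$ are arbitrary.
\begin{verbatim}
import math

C3, T = 1.0, 1.0   # illustrative constants, not taken from the paper

def b_recurrence(delta):
    # iterate b(t_k) = b(t_{k-1}) + C3*(delta^2*sum_{i<k} b(t_i) + delta*b(t_{k-1}) + delta)
    K = int(round(T / delta))
    b_prev, running_sum = 0.0, 0.0
    for _ in range(K):
        b_new = b_prev + C3 * (delta**2 * running_sum + delta * b_prev + delta)
        running_sum += b_new
        b_prev = b_new
    return b_prev

def b_closed_form(t):
    # solution of b'(t) = C3*(b(t) + int_0^t b(s)ds + 1), b(0) = 0
    disc = math.sqrt(C3**2 + 4.0 * C3)
    r1, r2 = (C3 - disc) / 2.0, (C3 + disc) / 2.0
    return C3 / (r2 - r1) * (math.exp(r2 * t) - math.exp(r1 * t))

for delta in (0.1, 0.01, 0.001):
    print(delta, b_recurrence(delta), b_closed_form(T))
\end{verbatim}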
We remark that this lemma also holds when $h_\delta(0) =\mathcal O(\delta^2+\delta \omega(\delta,0))$. This is important for extending the time interval of existence. \subsection{Proof of Theorem \ref{detscheme}} \label{poofdettheo}
With bounds on differences of total numbers restricted to an interval, we are finally ready to show Theorem \ref{detscheme}. We will require two more lemmas, both of which are straightforward to show. First, we give a bound on the KS distance when we are only given information about cumulative functions on grid points and bounds on growth between grid points.
\begin{lemma} \label{ksgrid} For $j = 1,2$, suppose we have measures $\nu_j\in \mathcal M(\mathbb R_+)$ with corresponding cumulative functions $F_j(x) = \nu_j([0,x])$, each with
compact support $[0,M]$. For $\delta>0$, we can bound the KS metric by \begin{align}
&d_{KS}(\nu_1, \nu_2) \le \sum_{i = 1}^{\lceil M/\delta \rceil}|\nu_1(I_i)-\nu_2(I_i)| + \sup_{|x-y|=
\delta} (|F_1(x)-F_1(y)|+|F_2(x)-F_2(y)|). \nonumber \end{align} \end{lemma}
We will also require a lemma comparing differences of solutions of kinetic equations under small changes in time.
\begin{lemma} \label{gridalign}
If $|t_1-t_2| \le \delta$, then \begin{equation} d((\mu_1(t_1), \mu_2(t_1)),(\mu_1(t_2), \mu_2(t_2)) )= \mathcal O( \delta+ \omega(\delta,0)). \end{equation} \end{lemma}
\begin{proof} Similar to the proof of Lemma \ref{nllem}. \end{proof}
\textit{Proof of Theorem \ref{detscheme}}. Let $t \in [0,T_2(\bar f)]$, $\delta>0$, and $K = \lfloor \frac{t}{\delta}\rfloor$, which means $t_K$ is the largest discretized time which is at most $t$. From Lemmas \ref{mbounds}, \ref{hasympt}, \ref{ksgrid}, and \ref{gridalign}, and recalling that $\tilde \mu_j(t)$ is constant in intervals $t\in [t_{k-1}, t_k)$, we compute\begin{align} &d((\mu_1(t), \mu_2(t)), (\tilde \mu_1(t), \tilde \mu_2(t)))\label{distres}\\&\le d((\mu_1(t), \mu_2(t)), ( \mu_1(t_K), \mu_2(t_K)))+d((\mu_1(t_K), \mu_2(t_K)), (\tilde \mu_1(t_K), \tilde \mu_2(t_K)))\nonumber\\&\le \sum_{j = 1}^2\sum_{l
= 1}^{\lceil M/\delta \rceil} |\mu_j(t_K,I_l)-\tilde \mu_j(t_K,I_l)|+ 2\left(m_\delta(t_K)+\tilde m_\delta(t_K)\right)+\mathcal O( \delta+ \omega(\delta,0)) \nonumber\\& \le 2\lceil M/\delta \rceil h_\delta(t_K)+ 2(m_\delta(t_K)+\tilde m_\delta(t_K)) + \mathcal O( \delta+ \omega(\delta,0))\nonumber\\ &= \mathcal O( \delta+ \omega(\delta,0)). \nonumber \end{align}
Finally, let us argue for extending the time interval of existence to any $T'<T(\bar f)$. Consider initial conditions $\mu^{(2)}(0) = \mu(T_2(\bar f))$ and $\tilde \mu^{(2)}(0) = \tilde \mu(T_2(\bar f))$, and new time bounds
\begin{equation} T_1^{(2)}(\bar f) = \sup\{t: N_2^{(2)}(t)> 2N_2^{(2)}(0)/3\}, \quad T_2^{(2)} = T^{(2)}_1 \wedge 1/(8\tilde C_{\infty}\tilde C_b), \end{equation} with \begin{align}
\tilde C_{\infty} = \max_{j = 1,2}\sup_{s \in [0,T'(\bar f)]} \|f_j(x,s)\|_{\infty},\quad \tilde C_b=2/\inf_{t<T'}N_2(t). \end{align} Then for sufficiently small $\delta$, it follows that Lemmas \ref{moclem}-\ref{hasympt} hold, with possibly larger constants, and therefore Theorem \ref{detscheme} holds for the time interval $[0,T_{2}+T_2^{(2)}]$ by stitching solutions. This argument may be repeated to produce $T^{(k)}$. Each new time interval either adds the constant $ 1/(8\tilde C_{\infty}\tilde C_b)$ (which does not depend on $k$) or reduces $N_2$ to at most $2/3$ of its value at the start of the interval. After finitely many extensions we will reach a time $T^{(K)}$ at which $N_2(T^{(K)})$ is arbitrarily small, so that $T^{(K)}$ is arbitrarily close to $T(\bar f)$. This completes the proof of Theorem \ref{detscheme}.
\section{Comparison of particle system and deterministic scheme} \label{part}
\subsection{Stochastic analogues of Section \ref{disc}} We now compare the discretized measures described in Section \ref{disc} with the $n$-particle PDMP $\{X^n(t)\}_{t\ge 0}$. To do so, it will help to express the evolution of $\mu_\sigma^n$ from $t_k$ to $t_{k+1}$ in a form that is similar to the iterative formulas (\ref{updatediscneg1})-(\ref{updatedisc0}). To define notation analogous to that of the discretization scheme, we denote the number of mutations occurring in the time interval $t\in \Delta t_k = [t_{k-1}, t_k)$ as $\Delta L^n(t_k)n$. For each
$i \in \{1, \dots, \Delta L^n(t_k)n\}$, let the mutation time
$\tau_i^k$ denote the time at which a particle at position $x_i\ge 0$ in Species 2 mutates into Species 1. For particles in Species 2 which mutate in interval $I$ during $\Delta t_k $, the empirical measure of their positions at mutation times is defined by \begin{equation} \pi^n_2(t_k,I) = \frac 1n\sum_{i = 1}^{\Delta L^n(t_k)n} \mathbf 1(x_i \in I). \end{equation} We define $\pi_1^n$ as the empirical measure for the positions of the mutated particles at time $t_k$, with \begin{align} \pi^n_1(t_k,I)&= \frac 1n\sum_{i = 1}^{\Delta L^n(t_k)n} \mathbf 1(x_i \in I+t_{k}-\tau_i^k)\\ &:=\frac 1n\sum_{i = 1}^{\Delta L^n(t_k)n} Q_k^i(I).\nonumber \end{align} Updates for measures on intervals during a time step may then be succinctly written as\begin{align} \mu^n_1(t_k,I) &= \mu^n_1(t_{k-1},I+\delta)+\pi^n_1(t_{k},I) \label{pdmprec1}\\
\mu^n_2(t_k,I) &= \mu^n_2(t_{k-1},I)-\pi^n_2(t_{k},I). \label{pdmprec2} \end{align} From (\ref{nfcond}), for sufficiently many particles $n > n_0(\bar f)$, the time interval of existence before reaching the cemetery state is at least $T_1(\bar f)$; since we are comparing the particle system with the deterministic discretization, however, we will work with the smaller time interval $[0, T_2(\bar f)]$.
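For readers who wish to implement this bookkeeping, the following Python sketch (an illustration only) computes $\pi^n_1$ and $\pi^n_2$ from a list of mutation events $(\tau_i, x_i)$ and applies the updates (\ref{pdmprec1})-(\ref{pdmprec2}); it assumes grid-aligned intervals $I_l$ of width $\delta$, a time step $t_k-t_{k-1}=\delta$, and unit advection speed toward the origin for Species 1.
\begin{verbatim}
import math

def interval_updates(mu1_prev, mu2_prev, mutations, t_k, delta, M, n):
    # mu1_prev[l], mu2_prev[l]: masses of I_l = [l*delta, (l+1)*delta) at t_{k-1}
    # mutations: list of (tau_i, x_i) for mutation events during [t_{k-1}, t_k)
    bins = int(round(M / delta))
    pi1, pi2 = [0.0] * bins, [0.0] * bins
    for tau, x in mutations:
        l2 = math.floor(x / delta)          # interval containing x at the mutation time
        if 0 <= l2 < bins:
            pi2[l2] += 1.0 / n
        x_at_tk = x - (t_k - tau)           # the mutated particle advects toward the origin
        l1 = math.floor(x_at_tk / delta)
        if 0 <= l1 < bins:
            pi1[l1] += 1.0 / n
    # updates: mu_1(t_k, I_l) = mu_1(t_{k-1}, I_{l+1}) + pi_1(t_k, I_l),
    #          mu_2(t_k, I_l) = mu_2(t_{k-1}, I_l)     - pi_2(t_k, I_l)
    mu1 = [(mu1_prev[l + 1] if l + 1 < bins else 0.0) + pi1[l] for l in range(bins)]
    mu2 = [mu2_prev[l] - pi2[l] for l in range(bins)]
    return pi1, pi2, mu1, mu2
\end{verbatim}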
Let us now introduce variables for the particle system which are analogous to those found in Section \ref{disc}. We begin with an analogue of $m_\delta$ given in (\ref{mdef}), defining the maximal total number over length $\delta$ intervals as \begin{equation}
m^{j;n}_\delta(t) = \sup_{|I| \le \delta} \mu_j^n(t,I), \quad m_\delta^n(t) =\sum_{j = 1,2} m^{j;n}_\delta(t). \end{equation}From (\ref{pdmprec1})-(\ref{pdmprec2}), it follows for all realizations of $X^n(t)$ that \begin{equation}
m_\delta^n(t_k) \le m_\delta^n(t_{k-1})+\sum_{j = 1}^2\sup_{I,|I|\le \delta}\pi^n_j(t_{k},I). \label{mbound} \end{equation}
There is also a particle system analog of $h_\delta$, where we now compare total number in intervals between $X^n(t)$ and the discretization scheme as \begin{equation} h_\delta^{j;n}(t_k) = \sup_{l \le M/\delta
} |\mu_j^n(t_k,I_l)- \tilde
\mu_j(t_k,I_l)|, \quad h_\delta^n(t_k)= \sum_{j = 1}^2 h_\delta^{j;n}(t_k). \label{hndef} \end{equation} The use of the measures $\tilde \mu(t_k)$ rather than $\mu(t_k)$ comes from the ability to use the recurrence (\ref{recmu1})-(\ref{recmu2}), along with
(\ref{pdmprec1})-(\ref{pdmprec2}) and (\ref{updatediscneg1})-(\ref{updatedisc0}), to write
the recurrence inequality \begin{align} h^n_\delta(t_k) &\le h^n_\delta(t_{k-1}) +\sum _{j = 1}^2\max_{ l \le
M/\delta}
\left|\pi_j^n(t_k,I_l)-\frac{\Delta\tilde
L(t_{k})}{\tilde N_2(t_{k-1})}\tilde \mu_2(t_{k-1},I_l)\right|\label{hmainrec}\\
&:=h^n_\delta(t_{k-1}) + \Pi^n (t_k). \nn \end{align}
From (\ref{mbound}) and (\ref{hmainrec}), the major task for controlling $h^n$ and $m^n$ will clearly hinge on appropriate estimates for $\pi_j$ (and subsequently $ \Pi^n$). These key bounds are provided in the next two lemmas. \subsection{Concentration bounds for $\pi_j$}
We begin with an easy-to-establish generalization of the Hoeffding
inequality, which states that for $n\ge 1$ and $B(n,p) \sim \mathrm{Binom}(n, p)$, \begin{equation}
\mathbb P\left(\left|\frac{B(n,p)}{n}- p\right|>\varepsilon\right) \le 2e^{-2n\varepsilon^2}. \end{equation}
\begin{cor}\label{hoofthing} For $n \ge 1$, let $X^n = \sum_{i = 1}^n Z_i$, where $Z_i \sim \mathrm{Ber}(p_i)$ are independent Bernoulli random variables with parameters $0 \le \underline p \le p_i \le \bar p \le 1$. Then for $\varepsilon>0$, \begin{align} \mathbb P\left( \frac{X^{n}}{ n}-\bar p > \varepsilon\right) &\le 2e^{-2n\varepsilon^2}\quad \hbox{and} \quad \mathbb P\left( \frac{X^{n}}{ n}-\underline p <- \varepsilon\right)\le 2e^{-2n\varepsilon^2}\label{hoffgen1} . \end{align}
\end{cor}
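Corollary \ref{hoofthing} is easy to check empirically. The following Monte Carlo sketch in Python (illustrative only; the sample size, parameter range, and $\varepsilon$ below are arbitrary) draws Bernoulli sums with random parameters in $[\underline p, \bar p]$ and compares the observed frequency of the deviation events with the stated bound.
\begin{verbatim}
import random, math

def check_hoeffding_cor(n=2000, trials=2000, p_lo=0.2, p_hi=0.4, eps=0.05):
    upper, lower = 0, 0
    for _ in range(trials):
        ps = [random.uniform(p_lo, p_hi) for _ in range(n)]
        x = sum(1 for p in ps if random.random() < p)
        if x / n - p_hi > eps:
            upper += 1
        if x / n - p_lo < -eps:
            lower += 1
    bound = 2.0 * math.exp(-2.0 * n * eps**2)
    print(upper / trials, lower / trials, "bound:", bound)

check_hoeffding_cor()
\end{verbatim}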
For the next two lemmas, we will be interested in cases where parameters $p_i$ are themselves $[0,1]$-valued random variables with known lower and upper bounds. In particular, we will use the \emph{mutation probability} $P_k^1(t,I)$ defined as the state-dependent probability that if a mutation occurs at time $t\in [t_{k-1}, t_k)$, then the mutated particle would be located in $I$ at time $t_k$. We also define $P_k^2(t,I)$ as the probability that a particle would mutate in $I$ from Species 2 at time $t$.
Thus, \begin{align} P_k^1(t,I) =\frac{\mu_2^n(t^-,I+t_k-t)}{N_2^n(t^-)}, \quad P_k^2(t,I) =\frac{\mu_2^n(t^-,I)}{N_2^n(t^-)}. \end{align}
For an initial distribution $(\mu^n_1(0), \mu_2^n(0))$ with support $[0, M]$, tracking which intervals mutations occur in during a time interval $[t_{k-1}, t_k)$ can be represented through a random sum of multinomials of one draw with random selection probabilities. In particular, we perform a total of $\Delta L^n(t_k)n$ draws with $\lceil M/\delta \rceil $ bins, in which for the $i$th draw the $l$th bin is selected with the mutation probability $P_k^2((\tau_i^k)^-,I_l)$.
The following two lemmas comparing $\pi_j$ with bounds for $P_k^j(t,I)$ and $\Delta L^n(t_k)$ involve using Corollary \ref{hoofthing} along with the strong Markov property of PDMPs introduced by Davis \cite{dav84}. For the first lemma, we consider an initial
state $X^n = X^n(0)$ which has pathwise bounds during $\Delta t_1$ for $\Delta L^n(t_1)$ and $P^j_1(t,I_l)$. The second lemma assumes these bounds hold only with some probability which may be less than 1. Since $X^n(t)$ is time-homogeneous, both lemmas are readily applicable when considering transitions during $\Delta t_k$ for $k>1$.
We will use common notation for stochastic ordering: for real-valued random variables $X$ and $Y$, we write $X \le_{ST}Y$ if $\mathbb P(X>c) \le \mathbb P(Y>c)$ for all real $c$. Also note that in the next two lemmas we will suppress time arguments when no confusion can arise. Finally, for simplicity of presentation, we will assume that $M/\delta$ is an integer.
\begin{lemma} \label{lohidevroye} Let $X^n(0) = \mathbf x \in E$ be an initial state such that for $ l \in \{1,\dots, M/\delta\}$, $ j \in \{1,2\}$, and $t\in \Delta t_1$, \begin{align}
\underline L\le \Delta L^n \le \bar L \quad \hbox{ and } \label{devbnd1} \quad \underline p_l\le P_1^j(t,I_l) \le \bar p_l, \end{align}
where $\underline L, \bar L, \underline p_l, \bar p_l$ are $[0,1]$-valued constants. Then for $\varepsilon > 0$, \begin{align} \mathbb P\left(\max _{l\le M/\delta} \left(\pi^n_j(t_1,I_l)-\bar L\bar p_l\right) > \varepsilon \right) \le\frac {2M}{\delta}\exp(-2n\varepsilon^2/\bar L) \label{mainpithm1} \end{align} and \begin{align} \mathbb P\left(\min _{l\le M/\delta}\left(\pi^n_j(t_1,I_l)-\underline L\underline p_l \right)<- \varepsilon \right) \le\frac {2M}{\delta} \exp(-2n\varepsilon^2/\underline L). \end{align}
\end{lemma}
\begin{proof}
We will show (\ref{mainpithm1}) for $j = 1$. The proofs for the other cases are similar. For the parameter $q \in [0,1]$, denote $\{B_i(q)\}_{i \ge 1}$ as an iid stream of Bernoulli random variables with $B_1 \sim \mathrm{Ber}(q)$. We write \begin{equation} \bar Q_{l}^i := B_i(\bar p^n_l), \quad Q_{l}^i :=\begin{cases}Q^i_1(I_l) & i \le\Delta L^nn, \\ \bar Q_{l}^i & i >\Delta L^nn \\ \end{cases}. \end{equation} We use iterated conditioning to show that \begin{align}
\mathbb P(Q_{l}^i = 1) = \mathbb E[Q_{l}^i ] =\mathbb E[\mathbb E[Q_{l}^i |X(\tau_1^{i-})]
]= \mathbb E[P_1^1(\tau_1^{i-},I_l)] \label{argstart} \le \bar p_l =\mathbb P(\bar Q_{l}^i= 1). \end{align}
This calculation implies the stochastic ordering $Q_{l}^i \le_{ST} \bar Q_{l}^i$.
Next, we show \begin{equation} \pi^n_1(I_l)\le_{ST} \frac 1n\sum_{i = 1}^{\bar Ln}
Q_{l}^i\le_{ST} \frac 1n\sum_{i = 1}^{\bar Ln} \bar Q_{l}^i . \label{qineqs} \end{equation} The left inequality is immediate, and in fact holds for all paths in $X^n(t)$. To show the right inequality, we use induction, assuming that for $1\le j <\bar Ln$, \begin{align} \mathbb P\left(\sum_{i = 1}^{j} Q_{l}^i>c\right) &\le \mathbb P\left(\sum_{i = 1}^{j} \bar Q_{l}^i >c\right). \end{align}
The base case holds trivially. For the inductive step, we use the law of total probability to show \begin{align} \mathbb P\left(\sum_{i = 1}^{j+1} Q_{l}^i>c\right) &= \mathbb E\left[\mathbb P\left(\sum_{i = 1}^{j} Q_{l}^i
+Q_{l}^{j+1} >c \Big | X((\tau_1^{j+1})^-)\right)\right]\label{argend}\\ &\le \mathbb E\left[\mathbb P\left(\sum_{i = 1}^{j} Q_{l}^i +\bar Q^{j+1}_{l}>c
\Big | X((\tau_1^{j+1})^-)\right)\right]\nn \\
&= \mathbb P\left(\sum_{i = 1}^{j} Q_{l}^i +\bar Q^{j+1}_{l}>c \right)\le \mathbb P\left(\sum_{i = 1}^{j+1} \bar Q_{l}^i >c\right). \nn \end{align} The first inequality in (\ref{argend}) uses a well-known property for stochastic dominance when summing random variables: if $X_1$ and $X_2$ are independent, $Y_1$ and $Y_2$ are independent, and $X_i \le_{ST} Y_i$ for $i = 1,2$, then $X_1+X_2 \le_{ST} Y_1+Y_2$. From the strong Markov property of PDMPs, the $\mathcal F(\tau_1^{j})$-measurable quantity $\sum_{i = 1}^{j} Q_{l}^i $ and the $\mathcal F(\tau_1^{j+1})$-measurable quantity $ Q_{l}^{j+1}
$ are conditionally independent under $\mathbb P(\cdot|X((\tau_1^{j+1})^-))$. The last inequality uses the same property of stochastic dominance along with the induction hypothesis.
With (\ref{argend}) we then obtain our result from Corollary \ref{hoofthing}, with\begin{align}
& \mathbb P\left(\max _{l\le M/\delta} \left( \pi^n_1(I_l)
-\bar L\bar p_l\right) > \varepsilon \right) \le \sum_{l\le M/\delta}\mathbb P\left( \pi^n_1(I_l)
-\bar L\bar p_l > \varepsilon\right) \label{condo1}\\
&\le \sum_{l\le M/\delta} \mathbb P\left( \frac 1n\sum_{i = 1}^{\bar Ln} \bar Q_{l}^i- \bar L\bar p_l > \varepsilon\right) \le\frac {2 M}{\delta} \exp(-2n\varepsilon^2/\bar L). \nn \end{align} \end{proof}
While Lemma \ref{lohidevroye} is useful for computing total numbers in Species 1 and 2, we find in Section \ref{hdiffsec} that for comparing the PDMP to our deterministic discretization, it is necessary to consider another estimate of $\pi_j$ in which the bounds on $\Delta L^n(t_k)$ and $P_k^j$ may fail to hold with some small probability. This differs from Lemma \ref{lohidevroye}, in which we assume such bounds hold over all paths given an appropriate initial condition.
For the next lemma and in many other places, we will frequently rely on an elementary inequality derived from the law of total probability:\ for events $C$ and $D$, \begin{equation}
\mathbb P(C) = \mathbb P(C|D)\mathbb P(D) + \mathbb P(C|D^c)\mathbb P(D^c)\le
\mathbb P(C|D)+ \mathbb P(D^c). \label{probid} \end{equation}
\begin{lemma} \label{lohidevroye2} Let $\mathcal T $ be an event such that \begin{align} &\mathcal T \subset \{ \underline L\le \Delta L^n\le \bar L\} \cap \{P_1^j(t,I_l) \in[ \underline p_l, \bar p_l]:l \le M/\delta, j = 1,2, t \in \Delta t_1 \}, \nn \end{align} where $\underline L, \bar L,\underline p_l,$ and $\bar p_l$ are $[0,1]$-valued constants. Suppose for some $ r(\delta, n)\in [0,1)$ that \begin{equation} \mathbb P(\mathcal T^c) \le r(\delta,n). \label{tbound} \end{equation}Then for $j = 1,2$ and $\varepsilon >0$, \begin{align} &\mathbb P\left(\max _{l\le M/\delta} \left(\pi^n_j(t_1, I_l)-\bar L\bar p_l\right)
> \varepsilon\Big|\mathcal T\right) \le\frac {2M}{\delta}\exp(-2n\varepsilon^2/\bar L)+\frac M \delta\left( \frac{r}{1-r}+\frac{2r\bar Ln}{(1-r)^2}\right) \label{mainpithm4} \end{align} and\begin{align} &\mathbb P\left(\min _{l\le M/\delta}\left(\pi^n_j(t_1, I_l)-\underline
L\underline p_l
\right)<- \varepsilon \Big|\mathcal T \right)\le\frac {2M}{\delta}\exp(-2n\varepsilon^2/\underline L) +\frac M \delta( 2r\bar Ln+r). \label{mainpithm5} \end{align}
\end{lemma}
\begin{proof} We show (\ref{mainpithm4}) and (\ref{mainpithm5}) for $j = 1$. We first show by induction that \begin{align} \mathbb P\left(\sum_{i = 1}^{j} Q_{l}>c \right)\le\mathbb P\left(\sum_{i = 1}^{j} \bar Q_{l}>c \right)+ \frac{2rj}{1-r}, \quad j= 1, \dots, \bar L n. \label{qind} \end{align} We will condition on the $\mathcal F(X(\tau_1^{i-}))$-measurable event\begin{equation} \mathcal T_i= \{\underline p_l\le P_1^1(\tau_1^{i-},I_l) \le \bar p_l\} \supseteq \mathcal T. \end{equation}
The base case for (\ref{qind}) follows from (\ref{probid}) and (\ref{tbound}) since \begin{align}
\mathbb P\left( Q_{1}>c \right) &\le \mathbb P\left( Q_{1}>c|\mathcal T_1 \right)+\mathbb P(\mathcal T_{1} ^c)\le \frac{\mathbb P\left( \bar Q_{1}>c \right)}{1-r}+r\\ &\le \mathbb P\left( \bar Q_{1}>c \right)+\frac{r}{1-r}+r\le \mathbb P\left( \bar Q_{1}>c \right)+\frac{2r}{1-r}. \nn \end{align} For the inductive step, assuming (\ref{qind}) holds for $ 1\le j<\bar Ln $, we use the strong Markov property of PDMPs to show \begin{align}
&\mathbb P\left(\sum_{i = 1}^{j+1} Q_{l}>c \right) \le \mathbb P\left(\sum_{i = 1}^{j} Q_{l}+Q_{j+1}>c\Big |\mathcal T_{j+1} \right)+\mathbb P(\mathcal T_{j+1} ^c)&\\&\le \mathbb P\left(\sum_{i
= 1}^{j} Q_{l}+\bar Q_{j+1}>c\Big |\mathcal T_{j+1} \right)+r \le \mathbb P\left(\sum_{i = 1}^{j} Q_{l}+\bar Q_{j+1}>c \right)+\frac{2r}{1-r} \nn \\ &\le \mathbb P\left(\sum_{i = 1}^{j+1} \bar Q_i>c \right)+\frac{2rj}{1-r} + \frac{2r}{1-r} = \mathbb P\left(\sum_{i = 1}^{j+1} \bar Q_{i}>c \right)+\frac{2r(j+1)}{1-r}. \nn \end{align}
From calculations similar to (\ref{argstart})-(\ref{argend}), we use (\ref{qineqs}) to show \begin{align}
&\mathbb P(\pi^n_1(I_l)>c |\mathcal T) \le \mathbb P\left(\frac 1n\sum_{i
= 1}^{\bar Ln} Q_{l}>c\Big |\mathcal T\right)\\&\le \frac 1{1-r}\mathbb P\left(\frac 1n\sum_{i = 1}^{\bar Ln} Q_{l}>c \right) \le\frac 1{1-r}\mathbb P\left(\frac 1n\sum_{i = 1}^{\bar Ln} \bar Q_{l}>c \right)+ \frac{2r\bar Ln}{(1-r)^2} \nn\\ &\le \mathbb P\left(\frac 1n\sum_{i = 1}^{\bar Ln} \bar Q_{l}>c \right)+ \frac{r}{1-r}+\frac{2r\bar Ln}{(1-r)^2}. \nn \end{align}
For the lower bound, with $\underline Q_{l}^i := B_i(\underline p_l)$, we note \begin{align}
\mathbb P(Q_{l}^i = 1)\ge \mathbb P(Q_{l}^i= 1|\mathcal T_i)\mathbb P(\mathcal T_i) \ge \mathbb P(\underline Q_{l}^i = 1)-r(\delta, n). \label{baseq} \end{align} We also note, for an event $A$ and $j \le \bar Ln$, \begin{align}
\mathbb P(A)-r \le \mathbb P(A)-\mathbb P(\mathcal T_{j} ^c) \le \mathbb P(A)-\mathbb P(\mathcal T_{j} ^c)\mathbb P(A|\mathcal T_{j} ^c) =\mathbb P(\mathcal T_{j} )\mathbb P(A|\mathcal T_{j} ) \le \mathbb P(A|\mathcal T_{j} ). \end{align} We again use induction to show \begin{equation} \mathbb P\left(\sum_{i = 1}^{j} Q_{l}^i>c \right) \ge \mathbb P\left(\sum_{i = 1}^{j} \underline Q_{l}^i>c \right)-2rj, \quad j= 1, \dots, \bar L n. \label{qind2} \end{equation}
Showing the base case is similar to (\ref{baseq}). For the induction step,
\begin{align} &\mathbb P\left(\sum_{i = 1}^{j+1} Q_{l}>c \right) \ge \mathbb P\left(\sum_{i
= 1}^{j+1} Q_{l}>c\Big |\mathcal T_{j+1} \right)\mathbb P(\mathcal T_{j+1} ) \ge \mathbb P\left(\sum_{i
= 1}^{j} Q_{l}+Q_{j+1}>c\Big |\mathcal T_{j+1} \right)-r\\&\ge \mathbb P\left(\sum_{i
= 1}^{j} Q_{l}+\underline Q_{j+1}>c\Big |\mathcal T_{j+1} \right) -r\ge \mathbb P\left(\sum_{i = 1}^{j} Q_{l}+\underline Q_{j+1}>c \right)-2r \nn\\ &\ge \mathbb P\left(\sum_{i = 1}^{j+1} \underline Q_i>c \right)-2r(j+1). \nn \end{align} With (\ref{qind2}), we then can show \begin{align}
&\mathbb P(\pi^n_1(I_l)>c |\mathcal T) \ge \mathbb P\left(\frac 1n\sum_{i
= 1}^{\underline Ln} Q_{l}>c\Big |\mathcal T\right)\\&\ge \mathbb P\left(\frac 1n\sum_{i = 1}^{\underline Ln} Q_{l}>c \right) -r\ge\mathbb P\left(\frac 1n\sum_{i = 1}^{\underline Ln} \underline Q_{l}>c \right)- 2r\bar Ln-r. \nn \end{align} Upon taking complements, we arrive at\begin{equation}
\mathbb P(\pi^n_1(I_l)<c |\mathcal T) \le \mathbb P\left(\frac 1n\sum_{i = 1}^{\underline Ln} \underline Q_{l}<c \right)+ 2r\bar Ln+r. \label{pigreat} \end{equation}
The result then follows from mirroring the calculations of (\ref{condo1}), with
\begin{align} & \mathbb P\left(\max _{l\le M/\delta} \left( \pi^n_1(I_l)
-\bar L\bar p_l\right) \nn
> \varepsilon \Big|\mathcal T \right) \le \sum_{l\le M/\delta}\mathbb P\left( \pi^n_1(I_l)
-\bar L\bar p_l
> \varepsilon \Big|\mathcal T\right) \\&\le \sum_{l\le M/\delta} \mathbb P\left(\frac 1n \sum_{i = 1}^{\bar Ln} \bar Q_{l}^i-\bar L\bar p_l > \varepsilon\right)+\frac M \delta\left( \frac{r}{1-r}+\frac{2r\bar Ln}{(1-r)^2}\right) \nn \\ &\le \frac {2M}{\delta}\exp(-2 n\varepsilon^2/\bar L) +\frac M \delta\left( \frac{r}{1-r}+\frac{2r\bar Ln}{(1-r)^2}\right). \label{condo6} \end{align} A similar calculation using (\ref{pigreat}) yields (\ref{mainpithm5}). \end{proof}
\subsection{PDMP lemmas on growth}
In this section, we derive bounds for the total number in an interval. Our first estimate is a simple pathwise bound within a time step. \begin{lemma}\label{crudelem} For all realizations of $X^n(t)$, if $s \in [t_{k-1}, t_k)$, then \begin{equation} m_\delta^n(s) \le 3m_\delta^n(t_{k-1}). \label{firstmbound} \end{equation} \end{lemma} \begin{proof} For Species 2, note that $m^{2;n}_\delta(t)$ is decreasing in $t$. As for Species 1, a particle in an interval $I$ of size $\delta$ at time $s\in [ t_{k-1}, t_k)$ must have been located, at time $t_{k-1}$, in $I$ or $I+\delta$ in either Species 1 or 2, so that \begin{align} \mu_1^n(s, I)&\le\mu_1^n(t_{k-1}, I)+\mu_1^n(t_{k-1}, I+\delta )+\mu_2^n(t_{k-1}, I)+\mu_2^n(t_{k-1}, I+\delta),\\ \mu_2^n(s, I) &\le\mu_2^n(t_{k-1}, I). \end{align} Taking the supremum over length $\delta$ intervals then yields (\ref{firstmbound}).
\end{proof}
From (\ref{nfcond}), there is $n_1(\bar f) \ge n_0(\bar f)$ such that for
$n>n_1(\bar f)$, we can use the constant \begin{equation} C_4 = \sup_{n \ge n_1} 1/N^n_2(T_2(\bar f))< 2/N_2(0), \end{equation} which, since $N_2^n(t)$ is non-increasing, bounds $1/N_2^n(t)$ from above for all $t\le T_2(\bar f)$. For our next lemma, we compare $m_\delta^n(t_k)$ with its deterministic analog $\bar m_\delta(\tau)$, defined through the recurrence \begin{equation} \label{mrecur} \bar m_\delta(t_k) = \bar m_\delta(t_{k-1})+ 24C_4\bar m_\delta(t_{k-1})^2, \quad \bar m_\delta(0) = m_\delta^{n}(0). \end{equation} We may use the same reasoning as in Lemma \ref{mbounds} to show that there exists $\delta^p>0$ such that for $0 < \delta <\delta^p$, we can find $n^{p}(\delta,\bar f)>n_1(\bar f) $ such that for $n>n^{p}(\delta,\bar f)$ there exist positive constants $C_5,\hat C_5>0$ such that \begin{equation}\label{mbarbound} \hat C_5 \delta\le \bar m_\delta(t_k) \le C_5\delta. \end{equation} For the remaining lemmas in this section, we will assume that $0 < \delta <\delta^p$ and $n> n^p(\delta)$.
Our interest is in whether interval growth in the particle system exceeds that of $\bar m_\delta$. Whether this occurs is expressed in the sequence of events \begin{equation} \mathcal A_k = \{m_\delta^n(t_k) > \bar m_\delta(t_k)\}, \quad 0 \le t_k \le T_2. \end{equation} Our next lemma shows that conditioned on $\mathcal A_{k-1}^c$, we can use Lemma \ref{lohidevroye} to obtain a concentration inequality for $\pi_j$ and consequently $m_\delta^n(t_k)$.
\begin{lemma}\label{pilem} Let $j = 1,2.$ For $0< t_k \le T_{2}$,
\begin{equation}\label{pibound}
\mathbb P\left(\sup_{|I|\le \delta}\pi^n_j(t_k,I)> 12C_4
\bar m_\delta(t_{k-1})^2|\mathcal A_{k-1}^c\right) \le \frac{2M}{\delta}\exp(-\tilde C_6\delta^3 n),\end{equation} where $ \tilde C_6 = 72C_4^2 \hat C_5^3$. \end{lemma}
\begin{proof}
We give a proof for $j = 1$, with the proof for $j = 2$ being nearly identical. First, observe that under $\mathcal A_{k-1}^c$ there will be at most $\Delta L^n(t_k) n\le \bar m_\delta(t_{k-1})n$ mutations during $\Delta t_k = [t_{k-1},t_k)$, since only particles contained in $[0,\delta)$ at $t_{k-1}$ in Species 1 and 2 may possibly reach the origin in Species 1. Under the event $\mathcal A_{k-1}^c$, we use Lemma \ref{crudelem} to show that the mutation probabilities are uniformly bounded from above by the deterministic quantity \begin{equation}\label{onemutation} P_k^1(\tau,I) = \frac{\mu_2^n(\tau^-,I+t_k-\tau)}{N_2^n(\tau^-)}\le 3C_4 \bar m_\delta(t_{k-1}). \end{equation}
Thus, using (\ref{mbarbound}), we can then apply Lemma \ref{lohidevroye} with $\bar L =\bar m_\delta(t_{k-1})$, $\bar p_l = 3C_4
\bar m_\delta(t_{k-1})$, and $\varepsilon =6C_4 (\bar m_\delta(t_{k-1}))^2$, which gives \begin{align}\label{intpibound}
\mathbb P\left(\max_{l\le M/\delta}\pi^n_1(t_k, I_l)>6C_4 (\bar m_\delta(t_{k-1}))^2|\mathcal A_{k-1}^c\right) \le\frac{2M}{\delta}\exp(-\tilde C_6\delta^3 n). \end{align} Note that to apply Lemma \ref{lohidevroye} we used the fact that $X^n(t)$ is homogeneous. Indeed, we can write the left hand side of (\ref{intpibound}) as \begin{equation}
\mathbb P\left(\max_{l\le M/\delta}\pi^n_1(t_k, I_l)>6C_4 (\bar m_\delta(t_{k-1}))^2|\mathcal A_{k-1}^c\right) = \mathbb P\left(\max_{l\le M/\delta}\pi^n_1(t_1, I_l)>6C_4 (\bar m_\delta(t_{k-1}))^2|\tilde{\mathcal A}^c\right), \end{equation} where $\tilde {\mathcal A} = \{m_\delta^n(0) > \bar m_\delta(t_{k-1})\}$ gives requirements on the initial condition so that $\Delta L(t_1)\le \bar L$, and $P_k^1(\tau,I_l) \le \bar p_l$ pathwise in $\Delta t_1$.
We can extend (\ref{intpibound}) to hold over all intervals of size less than $\delta$, not just those on a grid. This is done by noting that for any measure $\mu$, if $\mu(I_k) \le a$ on a uniform grid $I_k$ of size $\delta$, then for any $I$ with $|I| \le \delta$, $\mu(I) \le 2a$. Thus, (\ref{pibound}) follows from (\ref{intpibound}) and (\ref{mbarbound}), since \begin{align}
&\mathbb P\left(\sup_{I,|I|\le \delta}\pi^n_1(t_k, I)>12C_4
\bar m_\delta(t_{k-1})^2|\mathcal A_{k-1}^c\right)\le\mathbb P\left(\max_{l\le M/\delta}\pi^n_1(t_k, I_l)>6C_4
\bar m_\delta(t_{k-1})^2|\mathcal A_{k-1}^c\right). \nn \end{align} \end{proof}
We can now derive a concentration inequality which shows that $m^n_\delta (t_k) = \mathcal O(\delta) $ with high probability.
\begin{lemma}\label{mlem} For $0 \le t_k \le T_{2},$ \begin{align}\label{probintbnd} \mathbb P(m_\delta^n(t_k)>C_5\delta ) \le\mathbb P\left(\mathcal A_k\right)\le \frac{4 Mk}{\delta}\exp(-\tilde C_6\delta ^3n). \end{align}
\end{lemma}
\begin{proof}
The proof follows from induction, in which we assume \begin{equation} \mathbb P\left(\mathcal A_l\right)\le \frac{4Ml}{\delta}\exp(-\tilde C_6\delta ^3n) \label{abound} \end{equation} holds for $0\le l<k$. The base case for when $k = 0$ follows since $\mathbb P(\mathcal A_0 ) = 0$. To show the inductive step, we can use Lemma \ref{pilem} and the recurrence inequalities (\ref{mbound}) and (\ref{mrecur}) to show \begin{align}
&\mathbb P(\mathcal A_{k}|\mathcal A_{k-1}^c) \label{aifnota}\\
& \le \mathbb P\left(m_\delta^n(t_{k-1})+ \sum_{j = 1}^2\sup_{I,|I|\le \delta}\pi^n_j(t_{k},I)> \bar m_\delta(t_{k-1})+ 24C_4\bar m_\delta(t_{k-1})^2\Big|\mathcal A_{k-1}^c\right) \nn \\&\le \sum_{j = 1}^2\mathbb P\left(\sup_{I,|I|\le \delta}\pi^n_j(t_{k},I)
> 12C_4\bar m_\delta(t_{k-1})^2|\mathcal A_{k-1}^c\right)\le\frac{4M}{\delta}\exp(-\tilde C_6\delta^3 n). \nn \end{align}
We may then apply (\ref{probid}) with $C = \mathcal A_{k}$ and $D = \mathcal A_{k-1}^c$, and then apply (\ref{abound}) for $l = k-1$, along with (\ref{aifnota}) and (\ref{mbarbound}) to obtain the right inequality of (\ref{probintbnd}) (the left inequality follows immediately from (\ref{mbarbound})). \end{proof}
We finish this subsection with an estimate for total mutations during $\Delta t_k$ analogous to Lemma \ref{nllem}.
\begin{lemma}
\label{Lvscdf}For $0< t_k \le T_{2}$,
\begin{equation}
\mathbb P(|\Delta L^n(t_k)-\mu_1^n(t_{k-1},[0,\delta])|>C_6\delta^2) \le \frac{4Mk}{\delta}\exp(-\tilde C_7\delta ^3n), \label{llemma} \end{equation} with $C_6 = 6C_4C_5^2$ and $\tilde C_7 = \tilde C_6 \wedge (18C_4 ^2C_5^3)$.
\end{lemma}
\begin{proof}
Denote $\mathcal M_{\delta}^n(t_k)$ as the normalized total number of mutations that affect Species 2 particles in the interval $[0,\delta)$ during time $[t_{k-1},t_k)$. This may be written as
\begin{equation} \mathcal M_{\delta}^n(t_k) = \frac 1n\sum_{i = 1}^{\Delta L^n(t_k)n} M_i^k, \end{equation} where $M_i^k$ is an indicator random variable for the event that the $i$th mutation during $\Delta t_k$ occurs within $[0, \delta)$.
All particles of Species 1 located in $[0,\delta)$ at $t_{k-1}$ will reach the origin and trigger a mutation by time $t_k$. The only other particles in the system which can potentially hit the origin during $\Delta t_k$ are those located in $[0,\delta)$ in Species 2 at $t_{k-1}$ which mutate during $\Delta t_k$. It follows that for all paths in $X^n(t)$, \begin{equation}
|\Delta L^n(t_k)-\mu_1^n(t_{k-1},[0,\delta])|\le \mathcal M^n_{\delta}(t_k). \label{mainmut} \end{equation} Thus proving (\ref{llemma}) follows from showing an equivalent estimate on $\mathcal M_{\delta}^n(t_k)$. Toward that end, note that under $\mathcal A_{k-1}^c$
the number of mutations during time $\Delta t_k$ is at most $\bar L n= C_5\delta n $. Also, the probability of selecting a Species 2 particle in $[0,\delta]$ to mutate at each mutation time during $\Delta t_k$ is bounded by $\bar p = 3C_4 C_5\delta$. Let $B_i(q)$ denote an iid stream of Bernoulli random variables with parameter
$q \in [0,1]$. It follows, under $\mathbb P(\cdot|\mathcal A_{k-1}^c)$ and from arguments similar to Lemma \ref{lohidevroye}, that\begin{equation} \mathcal M_{\delta}^n(t_k) \le_{ST} \frac 1n\sum_{i = 1}^{\bar Ln} B_i(\bar p). \end{equation}
For $C_6 = 6C_4C_5^2$, from the Hoeffding inequality, \begin{align}\label{mutbound}
\mathbb P(\mathcal M^n_{\delta}(t_k)>C_6\delta^2|\mathcal A_{k-1}^c) \le \mathbb P\left(\sum_{i = 1}^{\bar Ln} B_i(\bar p)/(\bar L n)-\bar p>\bar p\right) &\le 2\exp(-2\bar L\bar p^2 n) \\ &= 2\exp(-18C_4^2C_5^3\delta^3 n). \nn \end{align} The lemma then follows from applying Lemma \ref{mlem}, (\ref{probid}), (\ref{abound}), and (\ref{mutbound}). \end{proof}
\subsection{Difference of total number on an interval } \label{hdiffsec}
We now begin our estimates comparing the deterministic discretization $\tilde \mu_j$ and empirical measures $\mu^n_j$. As in Section \ref{convest}, our focus is on differences of measures restricted to length $\delta$ intervals.
For inequalities related to bounding $P^j_k$ with $h_\delta^{j;n}$, we will need to consider a modulus of continuity for the deterministic discretization, defined by \begin{equation}
\tilde \omega(s ,t_k) = \sup_{I:|I| = \delta} \sum_{j = 1}^2 |\tilde
\mu_j(t_k, I+s)- \tilde \mu_j(t_k, I)|. \end{equation}
Fortunately, we can compare $\tilde \omega(\delta, t_k)$ with $\omega(\delta, t_k)$, the modulus of continuity for the solutions to the kinetic equations, through the following lemma.
\begin{lemma} \label{omegalem2} There exists $C_8>0$ such that for $0<s\le \delta$, \begin{equation} \tilde \omega(s, t_k) \le C_8(\omega(\delta, 0) \delta+\delta^2). \end{equation} \end{lemma}
\begin{proof} For $j = 1,2$,
\begin{align}&|\tilde \mu_j(t_k, I+s)- \tilde \mu_j(t_k, I)|\\&\le | \tilde \mu_j(t_k, I+s)- \mu_j(t_k, I+s)|+| \mu_j(t_k, I+s)- \mu_j(t_k, I)|+| \mu_j(t_k, I)- \tilde \mu_j(t_k, I)|. \nn \end{align}
Summing over $j$ and taking suprema gives\begin{align} \tilde \omega(s, t_k) \le \omega(s, t_k)\delta +2h_{ \delta}(t_k). \end{align} The result then follows from Lemmas \ref{moclem} and \ref{hasympt}. \end{proof}
We now give a pathwise inequality over $\Delta t_k$ for comparing mutation probabilities. \begin{lemma} \label{pnattk} For $\tau \in \Delta t_k$,
there exists $C_9>0$ such that for all paths in $X^n(t),$
\begin{align}
|P^j_k(\tau,I)- P^j_k(t_{k-1},I)| \le C_9(h^{n}(t_{k-1})+\pi_2^n(t_k)+(m^n(t_{k-1}))^2+\omega(\delta, 0) \delta+\delta^2).\label{bigplem} \end{align} \end{lemma}
\begin{proof}
For $j = 1$ (the proof for $j = 2$ is similar), we write
\begin{align}
&|P^1_k(\tau,I)- P^1_k(t_{k-1},I)| = \left| \frac{ \mu_2^n(\tau, I+t_k-\tau)}{N_2^n(\tau)}-\frac{
\mu_2^n(t_{k-1}, I+\delta)}{N_2^n(t_{k-1})}\right| \\
&\le C_4\left(|\mu_2^n(\tau, I+t_k-\tau)-\mu_2^n(t_{k-1}, I+\delta)|+C_4\mu_2^n(t_{k-1}, I+\delta)|N_2^n(t_{k-1})-N_2^n(\tau)|\right) \nn\\
&\le C_4\left(|\mu_2^n(\tau, I+t_k-\tau)-\mu_2^n(t_{k-1}, I+\delta)|+C_4(m^n(t_{k-1}))^2\right). \nn \end{align} We may then break up terms further, with\begin{align}
&|\mu_2^n(\tau, I+t_k-\tau)-\mu_2^n(t_{k-1}, I+\delta)|\\
&\le |\mu_2^n(\tau, I+t_k-\tau)- \mu_2^n(t_{k-1}, I+t_k-\tau)|\nn\\&+| \mu_2^n(t_{k-1}, I+t_k-\tau)- \tilde \mu_2(t_{k-1}, I+t_k-\tau)|\nn \\
&+|\tilde \mu_2(t_{k-1}, I+t_k-\tau)- \tilde \mu_2(t_{k-1}, I+\delta
)|+ |\tilde \mu_2(t_{k-1}, I+\delta)-\mu_2^n(t_{k-1}, I+\delta)|\nn\\ &\le \pi_2^n(t_k)+ 2h^n(t_{k-1})+ \tilde \omega(\delta-(t_k-\tau), t_{k-1}).\nn \end{align} The result then follows from applying Lemma \ref{omegalem2}. \end{proof}
Here we collect previous results to form a high-probability event under which we can bound $h^n_\delta(t_k)$. \begin{lemma}
There exist positive constants $C_{10}, C_{11}, C_{12} $ such that the events \begin{align} \mathcal C(t_k) &= \left\{\sup_l \pi_2^n(t_k,I_l) > C_{10}\delta^2\right\},\\
\quad \mathcal L(t_k) &= \cup _{t_k \le T_{2}}\{|\Delta L^n(t_k)-\mu_1^n(t_{k-1},[0,\delta])|>C_{10}\delta^2\},
\\\mathcal D(t_{k+1})&= \mathcal C(t_{k+1})^c\cap \mathcal A(t_k)^c\cap \mathcal L(t_{k+1})^c \nn \end{align} satisfy \begin{align}
\mathbb P(\mathcal D(t_k)^c|\mathcal A(t_{k-1})^c)
\le \frac{C_{12}}{\delta} \exp(-C_{11}\delta ^3n). \label{newrbound} \end{align}
\end{lemma}
\proof This follows immediately from applying (\ref{pibound}) with (\ref{mainmut}) and (\ref{mutbound}) to the estimate\be
\mathbb P(\mathcal D(t_k)^c|\mathcal A(t_{k-1})^c)
\le \mathbb P( \mathcal C(t_k)|\mathcal A(t_{k-1})^c)+ \sum_{k\le \lceil T_{2}/\delta\rceil } \mathbb P (\mathcal L(t_{k})|\mathcal A(t_{k-1})^c). \nn \qed \ee
The upshot of using $\mathcal D(t_k)$ is that we may bound the selection probabilities and total losses by quantities which are $\mathcal F(t_{k-1})$-measurable. Furthermore, we may use $\mathcal D(t_k)$ as the event $\mathcal T$ in Lemma \ref{lohidevroye2} to produce concentration bounds for $h^n(t_k)$.
\begin{lemma} \label{pandlbounds} Under $\mathcal D(t_k)$, there exists $C_{13}$ such that for $t\in \Delta t_k$, \begin{equation} P_k^j(t,I_l) \in [\underline p_l(t_{k-1}),\bar p_l(t_{k-1})], \qquad \Delta L^n(t_k) \in [ \underline L^n(t_{k-1}), \bar L^n(t_{k-1})], \end{equation} where \begin{align} \bar p_l(t_{k-1})&= P^2_k(t_{k-1},I_l)+C_{13}(h^n(t_{k-1})+\omega(\delta, 0) \delta +\delta^2), \label{pstart}\\ \underline p_l(t_{k-1})&= P^2_k(t_{k-1},I_l)-C_{13}(h^n(t_{k-1})+\omega(\delta, 0) \delta +\delta^2), \label{pend}\\
\bar L^n(t_{k-1}) &=\mu_1^n(t_{k-1},[0,\delta])+C_{10}\delta^2, \label{lstart}\\
\underline L^n(t_{k-1}) &=\mu_1^n(t_{k-1},[0,\delta]).\label{lend} \end{align} \end{lemma}
\begin{proof}
We obtain (\ref{pstart})-(\ref{pend}) from Lemma \ref{pnattk}, and (\ref{lstart})-(\ref{lend}) follow from Lemma \ref{Lvscdf}. \end{proof}
\begin{lemma} Let \be H^n(t_{k}): =\delta^3+\omega(\delta, 0) \delta^2 +\delta h^n(t_{k})+\delta^2\sum_{i < k}h^n(t_i), \ee
and let $\Pi^n$ be defined as in (\ref{hmainrec}). Then there are positive constants $C_{16},\tilde C_{16},C_{17}$ such that \be
\mathbb P( \Pi^n (t_k) >C_{17}H^n(t_{k-1})|\mathcal D(t_k)) \le \frac{ C_{16}}{\delta} \exp(-\tilde C_{16}\delta ^5n).\label{tildepineq} \ee \end{lemma} \begin{proof}
We begin by breaking up $\Pi^n$ as \begin{align}
\Pi^n (t_k) &\le2\max_{l \le M/\delta}\left|\frac{\tilde \mu_2(t_{k-1},I_l)}{\tilde N_2(t_{k-1})}\Delta\tilde
L(t_{k})- \mu^n_1(t_{k-1}, [0,\delta])P^2_k(t_{k-1},I_l)\right| \\
&+\sum_{j = 1}^2\max_{l \le M/\delta}\left|\pi_j^n(t_k,I_l)- \mu^n_1(t_{k-1}, [0,\delta])
P^2_k(t_{k-1},I_l)\right| \nn\\ &:=G^n(t_k)+ \tilde \Pi^n(t_k). \nn \end{align}
From Lemma \ref{pandlbounds}, we can rewrite $\mu_1^n(t_{k-1},[0,\delta])$ in terms of $\underline L^n(t_{k-1})$ and $\bar L^n(t_{k-1})$, and also $P^2_k(t_{k-1},I_l)$ in terms of $\underline p_l(t_{k-1})$ and $\bar p_l(t_{k-1})$, from which we can then bound $\tilde \Pi^n(t_k)$, for some $C_{14}>0$, by
\begin{align}
&\mathbb P(\tilde \Pi^n(t_k)>\varepsilon|\mathcal D(t_k)) \label{justpi}\\
&\le \sum_{j = 1}^2 \mathbb P(\max_{l}\left|\pi_j^n(t_k,I_l)- \mu^n_1(t_{k-1}, [0,\delta])
P^2_k(t_{k-1},I_l)\right|>\varepsilon/2|\mathcal D(t_k)) \nn\\ &\le \sum_{j = 1}^2 \Big[ \mathbb P(\max_{l }(\pi_j^n(t_k,I_l)- \mu^n_1(t_{k-1}, [0,\delta])
P^2_k(t_{k-1},I_l))>\varepsilon/2|\mathcal D(t_k))\nn\\ &+ \mathbb P(\min_{l }(\pi_j^n(t_k,I_l)- \mu^n_1(t_{k-1}, [0,\delta])
P^2_k(t_{k-1},I_l))<-\varepsilon/2|\mathcal D(t_k))\Big]\nn \\ & \le \sum_{j = 1}^2 \Big[ \mathbb P(\max_{l }(\pi_j^n(t_k,I_l)-
\bar L^n(t_{k-1})\bar p^n_l(t_{k-1}))+C_{14}H^n(t_{k-1})>\varepsilon/2|\mathcal D(t_k))\nn\\
&+ \mathbb P(\min_{l }(\pi_j^n(t_k,I_l)-\underline L^n(t_{k-1})\underline p^n_l(t_{k-1}))-C_{14}H^n(t_{k-1})<-\varepsilon/2|\mathcal D(t_k))\Big]\nn. \end{align}
For all paths in $\mathcal D(t_{k}),$ by calculations similar to (\ref{tri1})-(\ref{hrecur}), there exists a positive constant $C_{15}>C_{14}$ such that
\begin{align} & G^n(t_k) \le C_{15}H^n(t_{k-1}). \end{align} and thus \begin{align}
\mathbb P( \Pi^n (t_k) >8C_{15}H^n(t_{k-1})|\mathcal D(t_{k}))
\le \mathbb P( \tilde \Pi^n(t_k)>4C_{15}H^n(t_{k-1})|\mathcal D(t_{k})). \label{pitilde} \end{align}
Consider a path $\omega:[0,t_{k-1}]\rightarrow E$ such that $\omega \in \mathcal A(t_{k-1})^c$. We now invoke (\ref{pitilde}), (\ref{justpi}), and Lemma \ref{lohidevroye2} with $\varepsilon(t_{k-1}) = C_{15}H^n(t_{k-1})$, $C_{17} = 8C_{15}$, and $r(\delta, n) = \frac{C_{12}}{\delta} \exp(-C_{11}\delta ^3n)$ from (\ref{newrbound}) to obtain
\begin{align}
& \mathbb P( \Pi^n (t_k)>C_{17}H^n(t_{k-1};\omega)|\mathcal D(t_k)) \\ &\le \sum_{j = 1}^2\mathbb P\left(\max _{l \le M/\delta}\left( \pi^n_j(t_k, I_l)- \bar L^n(t_{k-1};\omega)\bar p^n_l(t_{k-1};\omega)
\right)> \varepsilon(t_{k-1};\omega)|\mathcal D(t_k) \right)\nn \\ &+ \sum_{j = 1}^2 \mathbb P\left(\min _{l \le M/\delta}\left( \pi^n_j(t_k, I_l)- \underline L^n(t_{k-1};\omega)\underline p^n_l(t_{k-1};\omega)
\right)<- \varepsilon(t_{k-1};\omega)|\mathcal D(t_k) \right)\nn \\ &\le \frac{8M}{\delta} \exp(-2C_{15}^2 H^n(t_{k-1};\omega)^2n/\bar L^n(t_{k-1};\omega))\\ &+\frac{2M}{\delta}\left(\frac{r}{1-r}+\frac{2rn\bar L^n(t_{k-1};\omega)}{1-r}+2rn \bar L^n(t_{k-1};\omega)+r\right)\\ &:= J_1(t_{k-1},n;\omega) +J_2(t_{k-1},n;\omega). \end{align} By increasing the $n^p$ used in obtaining (\ref{mbarbound}), if necessary, elementary calculations show that for $n>n^{p}$, \begin{equation} J_2(t_{k-1},n;\omega)\le \frac{18M}{\delta^2} \exp(-C_{11}\delta ^3n/2) \cdot \bar L^n(t_{k-1};\omega). \end{equation} On the other hand, since $\mathcal D(t_k) \subset \mathcal A(t_{k-1})^c$, for paths $\omega' \in \mathcal A(t_{k-1})$ we have
\begin{equation}
\mathbb P( \Pi^n (t_k)>C_{17}H^n(t_{k-1};\omega')|\mathcal D(t_k)) = 0. \end{equation} From the law of total probability, \begin{align}
&\mathbb P( \Pi^n (t_k)>C_{17}H^n(t_{k-1})|\mathcal D(t_k))\\
&\le \mathbb E[J_1(t_{k-1},n) |\mathcal A(t_{k-1})^c]+\mathbb E[J_2(t_{k-1},n)|\mathcal A(t_{k-1})^c] \nn\\ &\le \frac{ C_{16}}{\delta}\exp(-\tilde C_{16}\delta ^5n)\nn \end{align} for a sufficiently small $\tilde C_{16}>0$ and sufficiently large $C_{16}>0$. In the last inequality, we used the simple pathwise bound of $H^n(t_k) \ge \delta^3$ and that $\bar L = \mathcal O(\delta) $ under $\mathcal A(t_{k-1})^c$. \end{proof}
\subsubsection{Proof of Theorem \ref{finalh}}
We consider the events \begin{align} \mathcal H(t_k) = \cup_{l\le k} \{h^n(t_l) > h^n(t_{l-1})+C_{17} H^n(t_{l-1})\},\\ \mathcal B(t_k;C) =\{ d(\tilde \mu(t_k), \mu^n(t_k)) > C(\delta +\omega(\delta, 0))\}. \end{align} Using the same argument given in Lemma \ref{hasympt}, under $\cap_{t_k\le T_2} \mathcal H(t_k)^c$ and for a sufficiently large $C_{18}$,
\begin{equation}
h^n(t_k) \le C_{18} ( \delta ^2+ \delta \omega(\delta,0)), \quad t_k \le T_2. \label{hnbound}
\end{equation}We may then use (\ref{hnbound}) with Lemma \ref{ksgrid} to show that for a sufficiently large $C_{19}$, \begin{equation} \cap_{t_k\le T_2}\mathcal H(t_k)^c \subseteq \cap_{t_k\le T_2} \mathcal B(t_k;C_{19})^c \end{equation} and that for sufficiently large $C_{20}$ and small $\tilde C_{20}$, \begin{align} &\mathbb P\left( \max_{t_k\le T_2}d(\tilde \mu(t_k), \mu^n(t_k)) > C_{19} ( \delta + \omega(\delta,0))\right) = \mathbb P(\cup_{t_k \le T_2}
\mathcal B(t_k;C_{19})) \label{almost}\\&\le \sum_{t_k \le T_2} \mathbb P(\mathcal H(t_k)|\mathcal D(t_k)) +\mathbb P(\mathcal D(t_k)^{c}) \nn\\
&\le \sum_{t_k \le T_2} \mathbb P( \Pi^n (t_k) >C_{17}H^n(t_{k-1})|\mathcal D(t_k)) +\mathbb P(\mathcal D(t_k)^{c})\nn \\ &\le\frac{ C_{20}}{\delta^2} \exp(-\tilde C_{20}\delta ^5n). \nn \end{align}
To conclude, we may replace the maximum in (\ref{almost}) with a supremum. Indeed, since $\tilde \mu(t)$ is constant during time intervals $\Delta t_k$, for $t \le T_2$ and $K = \lfloor \frac{t}{\delta}\rfloor$, \begin{equation} d(\tilde \mu(t), \mu^n(t)) \le d(\tilde \mu(t_{K}), \mu^n(t_{K}))+d( \mu^{n}(t), \mu^n(t_{K})). \end{equation} During $\Delta t_k$, an interval can change its total number by at most $\sum_{j = 1,2} \pi_j(t_k,I)$, and thus
\begin{equation} d(\mu^{n}(t), \mu^n(t_{K})) \le \sum_{\substack{l \le M/\delta\\j = 1,2}} \pi^n_j(t_k, I_{l}) \le \frac{ M}{\delta} \sup_{\substack{l \le M/\delta\\j = 1,2}} \pi^n_j(t_k, I_{l}). \end{equation} Then for sufficiently large $C^p> 2C_{19}$ and $C_{2}^p$, and sufficiently small $C_3^p$, \begin{align} &\mathbb P\left( \sup_{t\le T_{2}}d(\tilde \mu(t), \mu^n(t)) > C^p ( \delta + \omega(\delta,0))\right)\label{probdistres}\\ &\le \mathbb P\left(\max_{t_{k}\le T_{2}}d(\tilde \mu(t_{k}), \mu^n(t_{k}))> C _{19}( \delta + \omega(\delta,0))\right)+\sum_{j = 1}^2 \mathbb P\left(\sup_{\substack{l \le M/\delta\\t_k \le T_2}} \pi^n_j(t_k, I_{l})> \frac{C^{p}\delta^{2}}{2M}\right)\nn\\ &\le \frac{ C_{2}^p}{\delta^2} \exp(- C_{3}^p\delta ^5n). \nn \end{align}
Finally, through a stitching argument similar to the one appealed to in the proof of Theorem \ref{detscheme} at the end of Section \ref{disc}, we may extend (\ref{probdistres}) to any $T'<T(\bar f)$, which yields Theorem \ref{finalh}.
\section{Proof of Theorem \ref{solution}} \label{prooftheo1}
To derive an explicit solution for (\ref{kinint1})-(\ref{kinint2}), we assume $(f_1(x,t), f_2(x,t))\in Z^2$ for $t \in [0,T(\bar f))$, show that such a solution must be given by the explicit expressions (\ref{aexp})-(\ref{f2exp}), and later verify that this solution is in fact in $Z^2$.
We begin by integrating (\ref{kinint2}) over space, giving \begin{equation} N_2(t) = N_2(0)-L(t). \label{formn2} \end{equation} This implies that $N_2(t)$ is differentiable, with \begin{equation} \dot N_2(t) = -a(t). \label{ndotis} \end{equation} Substituting (\ref{ndotis}) into (\ref{kinint2}) and solving the resulting linear ODE in $t$ yields the simple form \begin{equation} f_2(x,t) = \frac{N_2(t)}{N_2(0)}\bar f_2(x). \label{f2expr} \end{equation} Another substitution of (\ref{f2expr}) into (\ref{kinint1}) then gives \begin{align} f_1(x,t) = \bar f_1(x+t)+\int_0^t \frac{\bar f_2(x+t-s)}{N_2(0)} a(s)ds. \end{align}
It remains to express $a(t)$ and $N_2(t)$ in terms of initial conditions. Setting $x = 0$, we arrive at the closed equation \begin{equation}\label{fluxeqn} a(t) =\bar f_1(t)+\int_0^t \frac{\bar f_2(t-s)}{N_2(0)} a(s)ds. \end{equation} Denoting the probability density $\hat f_2(s) = \frac{\bar f_2(s)}{N_2(0)}$, we may rewrite (\ref{fluxeqn}) as the integral equation \begin{equation}\label{renewaleqn} a(t) = \bar f_1(t)+\int_0^t a(t-s)\hat f_2(s)ds. \end{equation} Equation (\ref{renewaleqn}) is a renewal equation, which has been studied extensively in probability theory (see \cite[Chapter XI]{feller1974introduction} for an introduction). It is well-known that there exists a unique solution for (\ref{renewaleqn}) given by \begin{align} a(t) &= \sum_{j = 0}^\infty \hat f_2^{*(j)}(t)*\bar f_1(t) := Q_{\hat f_2}(t)*\bar f_1(t), \end{align} where the exponent $*(k)$ denotes $k$-fold self-convolution. Then (\ref{nform}) follows directly from (\ref{formn2}). For a locally bounded density $p$, the function $Q_p(t)$ is also locally bounded (see Thm. 3.18
of~\cite{liao2013applied}). Thus, it is clear that $a(t)$ and $N_2(t)$ are both positive, locally bounded, and continuous for $0\le t < T(\bar f)$, and subsequently that $(f_1(x,t), f_2(x,t))\in Z^2$. This completes the derivation of part (a) of Theorem \ref{solution}.
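As a purely illustrative numerical check (not part of the proof), the renewal equation (\ref{renewaleqn}) can be discretized on a uniform grid and solved by forward substitution, after which $N_2$ is recovered from (\ref{formn2}); the initial densities and step size in the Python sketch below are arbitrary choices.
\begin{verbatim}
M, dt, T = 1.0, 1e-3, 0.5                # illustrative support, step size, horizon

def bar_f1(x): return 2.0 * (1.0 - x) if 0.0 <= x <= M else 0.0
def bar_f2(x): return 1.0 if 0.0 <= x <= M else 0.0

N2_0 = 1.0                               # N_2(0) = integral of bar_f2
def f2_hat(s): return bar_f2(s) / N2_0

# a(t) = bar_f1(t) + int_0^t a(t-s) f2_hat(s) ds, via left-endpoint quadrature
steps = int(T / dt)
a = [bar_f1(0.0)]
for k in range(1, steps + 1):
    conv = sum(a[k - i] * f2_hat(i * dt) for i in range(1, k + 1)) * dt
    a.append(bar_f1(k * dt) + conv)

L_T = sum(a) * dt                        # L(T) = int_0^T a(s) ds
print(a[-1], N2_0 - L_T)                 # a(T) and N_2(T) = N_2(0) - L(T)
\end{verbatim}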
For showing part (b), the only ambiguity is in establishing that, for a fixed $t \in [0,T(\bar f))$, the map $p \mapsto Q_p(t)$ is in $C(L^1(\mathbb R_+), \mathbb R_+)$. We will use a probabilistic argument. Let $X_i^{(p)}$, $i = 1,2,\dots$, denote a sequence of $[0,\infty)$-valued iid random variables, each with probability density $p \in L^1(\mathbb R_+)$. The number of renewals up to time $t <\infty$ is given by \begin{equation} N_p(t) = \sup\left\{k:\sum_{i = 1}^k X_i^{(p)} \le t\right\}. \end{equation}
In renewal theory, $Q_p(t)$ is the well-known \textit{renewal density} (augmented here by the $j=0$ term), satisfying \begin{equation} \int_0^tQ_p(s)ds = \mathbb E[N_p(t)]+1. \end{equation} Each term in the sum defining $Q_p(t)$ also has a probabilistic interpretation, with \begin{equation}\label{probsumless} c_k^{(p)}(t):=\int_0^t p^{*(k)}(s)ds = \mathbb P\left(\sum_{i = 1}^k X_i^{(p)} \le t\right), \quad k \ge 1. \end{equation} Estimates for the decay of $c_k^{(p)}(t)$ as $k \rightarrow \infty$ can be obtained from Markov's inequality, with \begin{align}
c_k^{(p)}(t) &= \mathbb P\left(\exp\left(-\sum_{i = 1}^k X_i^{(p)}\right) \ge e^{-t}\right) \label{sumjust1}\\ &\le e^t \mathbb E[\exp(-X_1^{(p)})]^k. \label{sumjust2} \end{align} As $X_1^{(p)}$ has a density on $[0,\infty)$,
$\mathbb P(X_1^{(p)} = 0) < 1.$ Then $\mathbb E[\exp(-X_1^{(p)})]<1$, and thus $c_k^{(p)}(t)$ decays exponentially as $k\rightarrow \infty$.
To show continuity, fix $p \in L^1(\mathbb R_+)$, $t \in [0, T(\bar f))$, and let $\varepsilon >0$. From (\ref{sumjust1})-(\ref{sumjust2}), $c_k^{(p)}(t)$ is summable in $k$, so we may choose a $K>0$ such that
\begin{equation} \sum_{k = K}^\infty c_k^{(p)}(t)< \varepsilon/6. \end{equation} Since $\mathbb E[\exp(-X_1^{(p)})]$ varies continuously with respect to $p$ in $L^1(\mathbb R_+)$, a similar calculation to (\ref{sumjust1})-(\ref{sumjust2}) implies that the map $p \mapsto c_k^{(p)}(t)$ is also in $ C( L^1(\mathbb R_+), \mathbb R_+)$ for all $k \ge 1$. Furthermore, tail sums of $c_k^{(p)}(t)$ also vary continuously in the $p$ variable. Thus, there exists $\delta>0$
such that if $\tilde p\in L^1(\mathbb R_+)$ satisfies $\|p-\tilde p\| _{L^1(\mathbb R_+)} < \delta$, then both \begin{equation} \sum_{k = K}^\infty c_k^{(\tilde p)}(t) < \varepsilon/3 \quad \hbox{and}
\quad \sum_{k = 1}^{K-1} |c^{(p)}_k(t)- c^{(\tilde p)}_k(t)| < \varepsilon/2 \end{equation}
hold. It then follows that \begin{align}
|Q_p(t)-Q_{\tilde p}(t)| &\le \sum_{k = 1}^{K-1} |c^{(p)}_k(t)- c^{(\tilde p)}_k(t)|+\sum_{k = K}^\infty c_k^{( p)}(t)+\sum_{k = K}^\infty c_k^{(\tilde p)}(t)<\varepsilon.
\end{align}
\textbf{Acknowledgments:}{ The author wishes to thank Anthony Kearsley and Paul Patrone for providing guidance during his time as a National Research Council Postdoctoral Fellow at the National Institute of Standards and Technology, and also Govind Menon for several discussions regarding the manuscript.}
\end{document} | arXiv |
Two quadrilaterals are considered the same if one can be obtained from the other by a rotation and a translation. How many different convex cyclic quadrilaterals are there with integer sides and perimeter equal to 32?
$\textbf{(A)}\ 560 \qquad \textbf{(B)}\ 564 \qquad \textbf{(C)}\ 568 \qquad \textbf{(D)}\ 1498 \qquad \textbf{(E)}\ 2255$
As with Solution 1, we note that given any quadrilateral with fixed side lengths, we can adjust its angles to make it cyclic.
Let $a, b, c, d$ be the sides of the quadrilateral, listed in order around the perimeter.
There are $\binom{31}{3}$ ways to write $32$ as an ordered sum of four positive integers $(a,b,c,d)$. However, some of these will not give quadrilaterals, since one side would be at least as large as the sum of the other three; this occurs exactly when some side is at least $16$. For $a=16$, $b+c+d=16$, and there are $\binom{15}{2}$ ways to choose $(b,c,d)$. Since the offending side could be any of the four sides (and at most one side can be $\ge 16$, so nothing is counted twice), we have counted $4\binom{15}{2}$ degenerate quadrilaterals. Similarly, there are $4\binom{14}{2}$, $4\binom{13}{2}, \cdots, 4\binom{2}{2}$ for $a = 17, 18, \dots, 29$. Thus, there are $\binom{31}{3} - 4\left(\binom{15}{2}+\binom{14}{2}+\cdots+\binom{2}{2}\right) = \binom{31}{3} - 4\binom{16}{3} = 2255$ non-degenerate side tuples by the hockey stick identity. We then account for symmetry. If all sides are equal (meaning the quadrilateral is a square), the tuple is counted once. If the quadrilateral is a rectangle (and not a square), it is counted twice. In all other cases, it is counted $4$ times. Since there is $1$ square case and $7$ rectangle cases, there are $2255-1-2\cdot7=2240$ tuples counted $4$ times. Thus there are $1+7+\frac{2240}{4} = \boxed{568}$ total quadrilaterals.
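A short brute-force check (not part of the solution above) confirms the count by enumerating ordered side tuples, discarding degenerate ones, and identifying tuples that differ by a cyclic rotation:

def count_quadrilaterals(perimeter=32):
    seen = set()
    for a in range(1, perimeter):
        for b in range(1, perimeter - a):
            for c in range(1, perimeter - a - b):
                d = perimeter - a - b - c
                sides = (a, b, c, d)
                # non-degenerate: every side strictly less than the sum of the others
                if max(sides) >= perimeter - max(sides):
                    continue
                rotations = [sides[i:] + sides[:i] for i in range(4)]
                seen.add(min(rotations))
    return len(seen)

print(count_quadrilaterals())  # 568
| Math Dataset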
Graded vector space
In mathematics, a graded vector space is a vector space that has the extra structure of a grading or gradation, which is a decomposition of the vector space into a direct sum of vector subspaces, generally indexed by the integers.
For "pure" vector spaces, the concept has been introduced in homological algebra, and it is widely used for graded algebras, which are graded vector spaces with additional structures.
Integer gradation
Let $\mathbb {N} $ be the set of non-negative integers. An $ \mathbb {N} $-graded vector space, often called simply a graded vector space without the prefix $\mathbb {N} $, is a vector space V together with a decomposition into a direct sum of the form
$V=\bigoplus _{n\in \mathbb {N} }V_{n}$
where each $V_{n}$ is a vector space. For a given n the elements of $V_{n}$ are then called homogeneous elements of degree n.
Graded vector spaces are common. For example the set of all polynomials in one or several variables forms a graded vector space, where the homogeneous elements of degree n are exactly the linear combinations of monomials of degree n.
General gradation
The subspaces of a graded vector space need not be indexed by the set of natural numbers, and may be indexed by the elements of any set I. An I-graded vector space V is a vector space together with a decomposition into a direct sum of subspaces indexed by elements i of the set I:
$V=\bigoplus _{i\in I}V_{i}.$
Therefore, an $\mathbb {N} $-graded vector space, as defined above, is just an I-graded vector space where the set I is $\mathbb {N} $ (the set of natural numbers).
The case where I is the ring $\mathbb {Z} /2\mathbb {Z} $ (the elements 0 and 1) is particularly important in physics. A $(\mathbb {Z} /2\mathbb {Z} )$-graded vector space is also known as a supervector space.
Homomorphisms
For general index sets I, a linear map between two I-graded vector spaces f : V → W is called a graded linear map if it preserves the grading of homogeneous elements. A graded linear map is also called a homomorphism (or morphism) of graded vector spaces, or homogeneous linear map:
$f(V_{i})\subseteq W_{i}$ for all i in I.
For a fixed field and a fixed index set, the graded vector spaces form a category whose morphisms are the graded linear maps.
When I is a commutative monoid (such as the natural numbers), then one may more generally define linear maps that are homogeneous of any degree i in I by the property
$f(V_{j})\subseteq W_{i+j}$ for all j in I,
where "+" denotes the monoid operation. If moreover I satisfies the cancellation property so that it can be embedded into an abelian group A that it generates (for instance the integers if I is the natural numbers), then one may also define linear maps that are homogeneous of degree i in A by the same property (but now "+" denotes the group operation in A). Specifically, for i in I a linear map will be homogeneous of degree −i if
$f(V_{i+j})\subseteq W_{j}$ for all j in I, while
$f(V_{j})=0\,$ if j − i is not in I.
Just as the set of linear maps from a vector space to itself forms an associative algebra (the algebra of endomorphisms of the vector space), the sets of homogeneous linear maps from a space to itself – either restricting degrees to I or allowing any degrees in the group A – form associative graded algebras over those index sets.
Operations on graded vector spaces
Some operations on vector spaces can be defined for graded vector spaces as well.
Given two I-graded vector spaces V and W, their direct sum has underlying vector space V ⊕ W with gradation
$(V\oplus W)_{i}=V_{i}\oplus W_{i}.$
If I is a semigroup, then the tensor product of two I-graded vector spaces V and W is another I-graded vector space, $V\otimes W$, with gradation
$(V\otimes W)_{i}=\bigoplus _{\left\{\left(j,k\right)\,:\;j+k=i\right\}}V_{j}\otimes W_{k}.$
Hilbert–Poincaré series
Given an $\mathbb {N} $-graded vector space $V=\bigoplus _{n\in \mathbb {N} }V_{n}$ over a field $K$, with each $V_{n}$ finite-dimensional, its Hilbert–Poincaré series is the formal power series
$\sum _{n\in \mathbb {N} }\dim _{K}(V_{n})\,t^{n}.$
From the formulas above, the Hilbert–Poincaré series of a direct sum and of a tensor product of graded vector spaces (finite dimensional in each degree) are respectively the sum and the product of the corresponding Hilbert–Poincaré series.
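For example, the Hilbert–Poincaré series of the polynomial ring $K[x]$, graded by degree, is $\sum _{n\in \mathbb {N} }t^{n}=1/(1-t)$, since each graded piece is one-dimensional. As a graded vector space, $K[x,y]$ is isomorphic to the tensor product $K[x]\otimes K[y]$, so its Hilbert–Poincaré series is the product $1/(1-t)^{2}=\sum _{n\in \mathbb {N} }(n+1)\,t^{n}$, matching the dimensions of the spaces of homogeneous polynomials of degree n in two variables.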
See also
• Graded (mathematics)
• Graded algebra
• Comodule
• Graded module
• Littlewood–Richardson rule
\begin{document}
\begin{abstract} This paper is devoted to the study of ergodic optimisation problems for almost-additive sequences of functions (rather than a fixed potential) defined over countable Markov shifts (that is, over a non-compact space). Under certain assumptions we prove that any accumulation point of a family of Gibbs equilibrium states is a maximising measure. Applications are given to the study of the joint spectral radius and to the multifractal analysis of Lyapunov exponents of non-conformal maps.\end{abstract}
\maketitle
\section{Introduction} In statistical mechanics a very important problem is that of describing how Gibbs states vary as the temperature changes. Of particular importance is the case when the temperature decreases to zero. Indeed, this case is related to ground states, that is, measures supported on configurations of minimal energy \cite[Appendix B.2]{efs}. It turns out that materials at low temperature tend to be highly ordered; they might even reach crystal or quasi-crystal configurations. Ground states are the measures that account for this phenomenon (see \cite[Chapter 3]{bll} for details). A similar problem, in the context of dynamical systems, has been the subject of great interest over recent years. Indeed, given a dynamical system $(\Sigma, \sigma)$ and an observable $\phi:\Sigma \to {\mathbb R}$ we say that a $\sigma$-invariant measure $\mu$ is a \emph{maximising} measure for $\phi$ if \begin{equation*} \int \phi \ d \mu = \sup \left\{ \int \phi \ d \nu : \nu \in {\mathcal M} \right\}, \end{equation*} where ${\mathcal M}$ denotes the set of $\sigma$-invariant probability measures. In certain cases, some maximising measures can be described as the limit of Gibbs states as the temperature goes to zero. Indeed, assume that $(\Sigma, \sigma)$ is a transitive sub-shift of finite type defined over a finite alphabet and that $\phi$ is a H\"older potential. It is well known that for every $t\in {\mathbb R}$ there exists a unique Gibbs state $\mu_t$ (which is also an equilibrium measure) for the potential $t \phi$ (see \cite[Theorem 1.2 and 1.22]{bow}). It turns out that if $\mu$ is any weak-star accumulation point of $\{\mu_t\}_{t> 0}$, then $\mu$ is a maximising measure (see, for example, \cite[Section 4]{j1}). Note that the value $t$ can be thought of as the inverse of the temperature; hence, as the temperature decreases to zero, the value of $t$ tends to infinity. The theory that studies maximising measures is usually called \emph{ergodic optimisation}. See \cite{bo,j1,j2} for more details.
The purpose of the present paper is to study a similar problem in the context of countable Markov shifts and for sequences of potentials. To be more precise, let $(\Sigma, \sigma)$ be a countable Markov shift and let ${\mathcal F}=\{\log f_n\}_{n=1}^{\infty}$ be an almost-additive sequence of continuous functions $\log f_n : \Sigma \to {\mathbb R}$ (see Section \ref{sec:backg} for precise definitions). We say that a $\sigma$-invariant measure $\mu$ is a \emph{maximising} measure for ${\mathcal F}$ if \begin{equation*}
\alpha({\mathcal F}):= \sup \left\{\lim_{n\rightarrow \infty}\frac{1}{n}\int \log f_n \ d\nu : \nu \in {\mathcal M} \right\}= \lim_{n\rightarrow \infty}\frac{1}{n}\int \log f_n \ d\mu. \end{equation*} In our main result, Theorem \ref{main4}, we prove that under certain assumptions there exists a family of Gibbs states $\{\mu_t\}_{t\geq 1}$ corresponding to $t {\mathcal F}$ and that this family has an accumulation point $\mu$. Moreover, the measure $\mu$ is maximising for ${\mathcal F}$. We stress that since the space $\Sigma$ is not compact, the space $\mathcal{M}$ is not compact either. Therefore, the existence of an accumulation point is far from trivial. A similar problem, in the case of a single function instead of a sequence ${\mathcal F}$, was first studied by Jenkinson, Mauldin and Urba\'nski \cite{jmu} (see also \cite{big,bf,i,ke}).
Note that even though we prove the existence of an accumulation point for the family $\{\mu_t\}_{t\geq 1}$, this does not imply that the limit $\lim_{t \to \infty} \mu_t$ exists. Actually, even in the simpler setting of compact sub-shifts of finite type and H\"older potentials, Hochman and Chazottes \cite{hc} constructed an example where there is no convergence. However, under certain finite-range assumptions convergence has been proved in \cite{br,l,chgu}.
The thermodynamic formalism needed in the context of almost-additive sequences for countable Markov shifts was developed in \cite{iy}. In particular, the existence of Gibbs states was established in \cite[Theorem 4.1]{iy}. New results in this direction are obtained in Section \ref{sec:gibbs}, where we establish conditions that ensure that certain Gibbs states are actually equilibrium measures (recall that in this non-compact setting this is not always the case, see \cite[p.1757]{s3}).
We stress that our formalism allows us to deal with products of matrices, and it is well suited for applications. In particular, in Section \ref{sec:joint}, applications of our results are given to the study of the joint spectral radius of a set of matrices. We construct an invariant measure that realises the joint spectral radius. Moreover, we construct a sequence of Gibbs states that can be used to approximate the value of the joint spectral radius. It should be pointed out that these results are new even in the case of finitely many matrices. In the same spirit, Section \ref{sec:singular} is devoted to another application of our results in the setting of multifractal analysis. For non-conformal dynamical systems defined on the plane, we obtain an upper bound for the Lyapunov spectrum. Moreover, we construct a measure supported on the maximal level set.
\section{Preliminaries} \label{sec:backg} In this section, we give a brief overview of recent results of thermodynamic formalism for almost-additive sequences on countable Markov shifts. We collect results mostly from \cite{iy}.
Let $(\Sigma, \sigma)$ be a one-sided Markov shift over a countable alphabet $S$. This means that there exists a matrix $(t_{ij})_{S \times S}$ of zeros and ones (with no row and no column made entirely of zeros) such that \begin{equation*} \Sigma=\left\{ x\in S^{{\mathbb N}} : t_{x_{i} x_{i+1}}=1 \ \text{for every $i \in {\mathbb N}$}\right\}. \end{equation*} The \emph{shift map} $\sigma:\Sigma \to \Sigma$ is defined by $\sigma(x_1 x_2 \dots)=(x_2 x_3 \dots)$. Sometimes we simply say that $(\Sigma, \sigma)$ is a \emph{countable Markov shift}. The set \begin{equation*}
C_{i_1 \cdots i_{n}}= \left\{x \in \Sigma : x_j=i_j \text{ for } 1 \le j \le n \right\}
\end{equation*}
is called a \emph{cylinder} of length $n$. The space $\Sigma$ endowed with the topology generated by cylinder sets is a non-compact space. We denote by ${\mathcal M}$ the set of $\sigma$-invariant Borel probability measures on $\Sigma$. We will always assume $(\Sigma, \sigma)$ to be topologically mixing, that is, for every $a,b \in S$ there exists $N_{ab} \in {\mathbb N}$ such that for every $n > N_{ab}$ we have $C_a \cap \sigma^{-n} C_b \neq \emptyset$.
\begin{defi} \label{aaa} Let $(\Sigma,\sigma)$ be a one-sided countable state Markov shift. For each $n\in {\mathbb N}$, let $f_{n}: \Sigma \to {\mathbb R}^{+}$ be a continuous function. A sequence $\mathcal{F}= \{ \log f_n \}_{n=1}^{\infty}$ on $\Sigma$ is called \emph{almost-additive} if there exists a constant $C\geq0$ such that for every $n,m\in {\mathbb N}, x\in \Sigma$, we have \begin{equation} \label{A1}
f_n(x) f_{m}(\sigma^n x) e^{-C} \leq f_{n+m}(x),
\end{equation} and \begin{equation} \label{A2}
f_{n+m}(x) \leq f_n(x) f_{m}(\sigma^n x) e^{C}.
\end{equation}
\end{defi}
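A standard example, recorded here only for illustration: let $A_1,\dots,A_k$ be $d\times d$ matrices all of whose entries are strictly positive, let $(\Sigma,\sigma)$ be the full shift on $\{1,\dots,k\}$, and set $f_n(x)=\|A_{x_n}\cdots A_{x_1}\|$ for a fixed sub-multiplicative matrix norm $\|\cdot\|$. Sub-multiplicativity of the norm gives (\ref{A2}), while the strict positivity of the entries yields a constant $c\in(0,1]$, depending only on the matrices and on the norm, such that $\|AB\|\geq c\,\|A\|\,\|B\|$ for any two products $A,B$ of matrices from the family; hence (\ref{A1}) holds as well and $\{\log f_n\}_{n=1}^{\infty}$ is almost-additive with $C=-\log c$.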
Throughout this paper, we will assume the sequence ${\mathcal F}$ to be almost-additive. We also assume the following regularity condition.
\begin{defi}\label{bowen} Let $(\Sigma, \sigma)$ be a one-sided countable Markov shift. For each $n\in {\mathbb N}$, let $f_{n}: \Sigma \rightarrow {\mathbb R}^{+}$ be continuous. A sequence $\mathcal{F}= \{ \log f_n \}_{n=1}^{\infty}$ on $\Sigma$ is called a \emph{Bowen} sequence if there exists $M \in {\mathbb R}^{+}$ such that \begin{equation}\label{bowenbound}
\sup \{ A_n : n \in {\mathbb N} \} \leq M, \end{equation} where \[A_n= \sup \left\{ \frac{f_n(x)}{f_n(y)} : x,y \in \Sigma, x_i=y_i \textrm{ for } 1 \leq i \leq n\right\}.\] \end{defi}
In \cite{iy} thermodynamic formalism was developed for almost-additive Bowen sequences. The following definition of pressure is a generalisation of the one given by Sarig \cite{s1} to the case of almost-additive sequences.
\begin{defi} \label{def:pre} Let ${\mathcal F}=\{\log f_n\}_{n \in {\mathbb N}}$ be an almost-additive Bowen sequence, the \emph{Gurevich pressure} of ${\mathcal F}$, denoted by $P({\mathcal F})$, is defined by $$P({\mathcal F})=\lim_{n\rightarrow \infty} \frac{1}{n} \log \left(\sum_{\sigma^{n}x=x}f_n(x)\chi_{C_a}(x)\right),$$ where $\chi_{C_a}(x)$ is the characteristic function of the cylinder $C_a$. \end{defi} Let us stress that the limit always exists and does not depend on the choice of the cylinder. Note that if $f:\Sigma \rightarrow {\mathbb R}$ is a continuous function, the sequence of Birkhoff sums of $f$ forms an almost additive sequence (in this case the constant $C$ in definition \ref{aaa} is equal to zero). For every $n \in {\mathbb N}, x\in \Sigma$, define $f_n:\Sigma\rightarrow {\mathbb R}^{+}$ by $f_n(x)=e^{f(x)+f(\sigma x) +\dots+f (\sigma^{n-1}x)}$. Then the sequence $\{\log f_n\}_{n=1}^{\infty}$ is additive. This remark is the link that ties up the thermodynamic formalism for a continuous function with that of sequences of continuous functions. Therefore, the definition of pressure given in definition \ref{def:pre} generalises that of Gurevich pressure given by Sarig \cite{s1}. Also, we note that $\lim_{n \rightarrow \infty}\frac{1}{n} \int \log f_n \ d\mu= \int f \ d\mu$ for any $\mu\in {\mathcal M}$. Therefore, the next theorem is a generalisation of the variational principle for continuous functions to the setting of almost-additive sequence of continuous functions. It was proved in \cite[Theorem 3.1]{iy}. In order to state it we need the following definition, given $f: \Sigma \to {\mathbb R}$ a continuous function, the \emph{transfer operator} $L_{f}$ applied to function $g: \Sigma \rightarrow {\mathbb R}$ is formally defined by \begin{equation*} \label{transfer} \left( L_{f} g \right) (x) := \sum_{\sigma z=x} f(z) g(z) \quad \text{ for every } x\in \Sigma. \end{equation*}
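As a simple illustration of Definition \ref{def:pre}, suppose that $(\Sigma,\sigma)$ is the full shift on ${\mathbb N}$ and that $f_n(x)=e^{\phi(x_1)+\dots+\phi(x_n)}$ for a function $\phi$ depending only on the first coordinate and satisfying $\sum_{i\in{\mathbb N}}e^{\phi(i)}<\infty$. Summing over the periodic points of period $n$ lying in the cylinder $C_a$ gives $\sum_{\sigma^{n}x=x}f_n(x)\chi_{C_a}(x)=e^{\phi(a)}\big(\sum_{i\in{\mathbb N}}e^{\phi(i)}\big)^{n-1}$, and therefore $P({\mathcal F})=\log\sum_{i\in{\mathbb N}}e^{\phi(i)}$.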
\begin{teo}\label{main1} Let $(\Sigma, \sigma)$ be a topologically mixing countable state Markov shift and ${\mathcal F}=\{ \log f_n \}_{n=1}^{\infty}$ be an almost-additive Bowen sequence on $\Sigma$ with $\vert \vert L_{f_1}1 \vert \vert _{\infty}<\infty$. Then $-\infty<P({\mathcal F})<\infty$ and \begin{align*} P({\mathcal F})&=\sup \left\{ h(\mu) + \lim_{n\to \infty} \frac{1}{n} \int \log f_n \ d\mu : \mu \in \mathcal{M} \textrm{ and } \lim_{n \rightarrow \infty}\frac{1}{n} \int \log f_n \ d\mu \neq -\infty \right\}\\ &=\sup \left\{ h(\mu) + \int \lim_{n\to \infty}\frac{1}{n}\log f_n \ d\mu : \mu \in \mathcal{M} \textrm{ and } \int \lim_{n \rightarrow \infty}\frac{1}{n}\log f_n \ d\mu \neq -\infty \right\}. \end{align*} \end{teo} A measure $\mu \in {\mathcal M}$ is said to be an \emph{equilibrium measure} for ${\mathcal F}$ if \begin{equation*} P({\mathcal F})= h(\mu) + \lim_{n \to \infty} \frac{1}{n} \int \log f_n \ d \mu. \end{equation*}
In \cite{b2,m}, the notion of Gibbs state for continuous functions was extended to the case of almost-additive sequences.
\begin{defi} \label{def-gibbs} Let $(\Sigma, \sigma)$ be a topologically mixing countable state Markov shift and ${\mathcal F}=\{\log f_n\}_{n=1}^{\infty}$ be an almost-additive sequence on $\Sigma$. A measure $\mu \in {\mathcal M}$ is said to be a \emph{Gibbs} state for ${\mathcal F}$ if there exist constants $C_{0}>0$ and $P \in {\mathbb R}$ such that for every $n \in {\mathbb N}$ and every $x \in C_{i_1 \dots i_{n}}$ we have \begin{equation}\label{gibbs} \frac{1}{C_{0}} \leq \frac{\mu(C_{i_1 \dots i_{n}})}{\exp(-nP)f_n(x)} \leq C_{0}. \end{equation}
\end{defi} In the case of a single continuous function defined over a countable Markov shift, there may be a combinatorial obstruction on $\Sigma$ that prevents the existence of Gibbs states (see \cite{s3}). This, of course, is also the case in the setting of almost-additive sequences. The combinatorial condition on $\Sigma$ is the following. \begin{defi} \label{BIP} A countable Markov shift $(\Sigma, \sigma)$ is said to satisfy the \emph{big images and preimages property (BIP property)} if there exists a finite set $\{ b_{1} , b_{2}, \dots, b_{n} \}$ in the alphabet $S$ such that \begin{equation*} \forall a \in S \textrm{ } \exists i,j \textrm{ such that } t_{b_{i}a}t_{ab_{j}}=1. \end{equation*} \end{defi}
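For instance, the full shift over a countable alphabet (where every transition is allowed) satisfies the BIP property with a set consisting of a single symbol, and any topologically mixing Markov shift over a finite alphabet satisfies it by taking the whole alphabet as the finite set.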
In \cite[Theorem 4.1]{iy}, the existence of Gibbs states for an almost-additive sequence of continuous functions defined over a BIP shift $\Sigma$ was established. Moreover, under a finite entropy assumption it was shown that this Gibbs state is also an equilibrium measure.
\begin{teo}\label{main2} Let $(\Sigma, \sigma)$ be a topologically mixing countable state Markov shift with the BIP property. Let ${\mathcal F}=\{\log f_n\}_{n=1}^{\infty}$ be an almost-additive Bowen sequence defined on $\Sigma$. Assume that $\sum_{a\in S}\sup f_1\vert _{C_a}<\infty$. Then there exists a unique invariant Gibbs state $\mu$ for ${\mathcal F}$ and it is mixing. Moreover, if $h(\mu)<\infty$, then it is the unique equilibrium measure for ${\mathcal F}$. \end{teo}
\begin{rem} \label{rteo1} Note that $\sum_{a\in S}\sup f_1\vert _{C_a}<\infty$ implies that $-\infty<P({\mathcal F})<\infty$. \end{rem}
\section{Existence of Gibbs equilibrium states} \label{sec:gibbs} In Theorem \ref{main2} we established conditions under which the existence of a Gibbs state $\mu$ for an almost-additive sequence ${\mathcal F}=\{\log f_n\}_{n=1}^{\infty}$ is guaranteed. It might happen that $h(\mu)=\infty$ and that $\lim_{n\to \infty} \frac{1}{n} \int \log f_n \ d\mu = -\infty$. In this case, since the sum of these two quantities is meaningless, we do not say that the measure $\mu$ is an equilibrium measure. However, if $h(\mu) < \infty$ then $\mu$ is indeed an equilibrium measure. The purpose of this section is to establish other conditions that also imply that the Gibbs state $\mu$ is an equilibrium measure. Our results can be compared with those in \cite{mu}, where a single function (instead of an almost-additive sequence) is studied.
In the rest of the paper, we identify a countable alphabet $S$ with ${\mathbb N}$.
\begin{prop}\label{chara} Let $(\Sigma, \sigma)$ be a topologically mixing countable state Markov shift with the BIP property. Let ${\mathcal F}=\{\log f_n\}_{n=1}^{\infty}$ be an almost-additive Bowen sequence defined on $\Sigma$ satisfying $\sum_{i\in {\mathbb N}} \sup f_1\vert_{C_{i}}< \infty$ and $\mu_{{\mathcal F}}$ be the unique invariant Gibbs state for ${\mathcal F}$. The following statements are equivalent: \begin{enumerate} \item $h_{\mu_{{\mathcal F}}}(\sigma)<\infty$, \label{1} \item $\lim_{n \rightarrow \infty}\frac{1}{n}\int \log f_n d\mu_{{\mathcal F}}>-\infty$, \label{2} \item $\int \log f_1 d\mu_{{\mathcal F}}>-\infty$, \label{3} \item $\sum_{i=1}^{\infty}\sup \{\log f_1(x):x\in C_{i}\}\sup \{f_1(x):x\in C_{i}\}>-\infty$.\label{4} \end{enumerate} Therefore, if one of the above conditions is satisfied, then $\mu_{{\mathcal F}}$ is the unique Gibbs equilibrium state for ${\mathcal F}$. \end{prop}
\begin{proof} Suppose we have (\ref{1}). Then it follows from Theorem \ref{main2} that $\mu_{{\mathcal F}}$ is the unique Gibbs equilibrium state for ${\mathcal F}$. Thus (1) implies (2). Assume now (\ref{2}). Since $\sum_{i\in {\mathbb N}} \sup f_1\vert_{C_i}< \infty$ implies $-\infty <P({\mathcal F})<\infty$, it is a direct consequence of the variational principle that $h_{\mu_{{\mathcal F}}}(\sigma)<\infty$. Thus (2) implies (1). Next we show that (\ref{2}) if and only if (\ref{3}). It directly follows from Definition \ref{aaa} (equations (\ref{A1}) and (\ref{A2})), that \begin{equation*} -\frac{n-1}{n}C+\int \log f_{1}(x)d\mu_{{\mathcal F}}\leq \frac{1}{n}\int \log f_n d\mu_{{\mathcal F}}\leq \frac{n-1}{n}C+ \int \log f_1 d\mu_{{\mathcal F}}. \end{equation*} Letting $n\rightarrow\infty$, we obtain the result. Now we show that (\ref{3}) if and only if (\ref{4}). Fix $i\in {\mathbb N}$. Since ${\mathcal F}$ is a Bowen sequence on $\Sigma$, there exists $M$ such that \begin{equation}\label{bounv} \sup\left\{\frac{f_1(x)}{f_1(y)}:x_1=y_1=i \right \}\leq M. \end{equation} This implies that $-\log M\leq \log f_{1}(x)-\log f_{1}(y)\leq \log M$ for any $x, y \in C_i$. Therefore, \begin{equation*}
\sup\left\{\log \frac{f_{1}(x)}{M}:x\in C_i \right\}\leq \inf \left\{\log f_1(x):x\in C_i \right\}. \end{equation*} Let $x_{i}$ be an arbitrary point in $C_i$. Then \begin{align*} \int \log f_1 d\mu_{{\mathcal F}}&= \sum_{i=1}^{\infty} \int_{C_i} \log f_1 d\mu_{{\mathcal F}}\\ &\geq \sum_{i=1}^{\infty} \inf \left\{\log f_1(x):x\in C_i \right\}e^{-P({\mathcal F})}\frac{f_1(x_i)}{C_0}\\ &\geq \sum_{i=1}^{\infty} \sup \left\{\log \frac{f_1(x)}{M}:x\in C_i \right\} e^{-P({\mathcal F})}\frac{f_1(x_i)}{C_0}, \end{align*} for some $C_{0}>0$, where in the second inequality we used the definition of Gibbs state. Using (\ref{bounv}), we have \begin{equation*} \int \log f_1 d\mu_{{\mathcal F}}\geq \frac{e^{-P({\mathcal F})}}{C_0 M}\sum_{i=1}^{\infty} \sup \left\{\log \frac{f_1(x)}{M}:x\in C_i \right\} \sup\{f_1(x):x\in C_i\}. \end{equation*} Therefore, (\ref{4}) implies (\ref{3}). To see that (\ref{3}) implies (\ref{4}), consider \begin{align*}
\int \log f_1 d\mu_{{\mathcal F}}&=\sum_{i=1}^{\infty}\int _{C_i}\log f_1 d\mu_{{\mathcal F}}\\ &\leq e^{-P({\mathcal F})}C_0\sum_{i=1}^{\infty}\sup \{\log f_1 (x):x\in C_i\}f_1(x_i)\\ &\leq e^{-P({\mathcal F})}C_0\sum_{i=1}^{\infty}\sup \{\log f_1(x):x\in C_i\}\sup\{f_1(x):x\in C_i\}. \end{align*} Therefore, the desired result is obtained. \end{proof}
The rest of the section is devoted to studying the relation between the Gibbs equilibrium state for $\log f_1$ on $\Sigma$ and that for ${\mathcal F}=\{\log f_n\}_{n=1}^{\infty}$ on $\Sigma$. We begin by establishing conditions on a continuous function $\log f_1$ under which it has a Gibbs equilibrium state (see \cite{mu,s1,s3} for related work).
\begin{prop}\label{general} Let $(\Sigma, \sigma)$ be a topologically mixing countable Markov shift with the BIP property. Suppose that $\log f_1$ is a continuous function on $\Sigma$ satisfying: \begin{enumerate} \item $\sup_{n\in {\mathbb N}}\left\{\frac{f_1(x)f_1(\sigma x)\dots f_1(\sigma^{n-1}x)}{f_1(y)f_1(\sigma y)\dots f_1(\sigma^{n-1}y)}: x_{i}=y_{i}, 1\leq i\leq n \right\}<\infty$, \label{pp1} \item $\sum_{i\in {\mathbb N}} e^{\sup\log f_1\vert_{C_i}}<\infty$. \label{pp2} \end{enumerate} Then there exists a unique invariant Gibbs state $\mu_{\log f_1}$ for $\log f_1$. Moreover, \begin{enumerate} \item If $\int \log f_1 d\mu_{\log f_1}>-\infty$, then $\mu_{\log f_1}$ is the unique Gibbs equilibrium state for $\log f_1$. \item $\int \log f_1 d\mu_{\log f_1}>-\infty$ if and only if $\sum_{i=1}^{\infty}\sup \{\log f_1(x):x\in C_{i}\}\sup \{f_1(x):x\in C_{i}\}>-\infty$. \end{enumerate} \end{prop}
\begin{proof} Let $g(x)=\log f_1(x)$ and $g_n(x)= e^{g(x)+g(\sigma x)+\dots+g(\sigma^{n-1}x)}$. Define $\Phi=\{\log g_n\}_{n=1}^{\infty}$. We have that $\Phi$ is an additive Bowen sequence. Indeed, we only need to prove that $\Phi$ is a Bowen sequence. In order to do so, we note that for each $n\in{\mathbb N}$ we have \begin{eqnarray*} \sup \left\{\frac{g_n(x)}{g_n(y)}: x_i=y_i, 1\leq i \leq n \right\} &=& \\ \sup \left\{\frac{f_1(x)f_1(\sigma x)\dots f_1(\sigma^{n-1}x)}{f_1(y)f_1(\sigma y)\dots f_1(\sigma^{n-1}y)}: x_i=y_i, 1\leq i\leq n \right\}.\end{eqnarray*}
The claim now follows from assumption (\ref{pp1}). It is a direct consequence of assumption (\ref{pp2}) that $-\infty<P(\Phi)<\infty$. Therefore, by Theorem \ref{main2}, there exists a unique invariant Gibbs state $\mu_{\Phi}$ for $\Phi$. Since $P(\Phi)=P(\log f_1)$ we have that $\mu_{\Phi}$ is the unique invariant Gibbs state for $\log f_1$. Moreover, since for any $\mu\in {\mathcal M}$ we have that $\lim_{n\rightarrow \infty}\frac{1}{n}\int \log g_n d\mu=\int \log f_1 d\mu$, then if $\int \log f_1 d\mu_{\Phi}>-\infty$, it follows that $\mu_{\Phi}$ is the unique Gibbs equilibrium state for $\log f_1$.
This proves the first part of the Proposition. The second claim is proved in exactly the same way as the corresponding claim in Proposition \ref{chara}.
\end{proof}
\begin{coro}\label{logf1} Let $(\Sigma, \sigma)$ be a topologically mixing countable Markov shift with the BIP property. Suppose that the function $\log f_1:\Sigma \to {\mathbb R}$ is of summable variation and satisfies $\sum_{i\in {\mathbb N}} e^{\sup\log f_1\vert_{C_i}}<\infty$. Then, \begin{equation*} \sup_{n\in {\mathbb N}}\left\{\frac{f_1(x)f_1(\sigma x)\dots f_1(\sigma^{n-1}x)}{f_1(y)f_1(\sigma x)\dots f_1(\sigma^{n-1}y)}: x_i=y_i, 1\leq i\leq n \right\}<\infty \end{equation*} and
\begin{enumerate} \item If $\int \log f_1 d\mu_{\log f_1}>-\infty$, then $\mu_{\log f_1}$ is the unique Gibbs equilibrium state for $\log f_1$. \item $\int \log f_1 d\mu_{\log f_1}>-\infty$ if and only if $\sum_{i=1}^{\infty}\sup \{\log f_1(x):x\in C_{i}\}\sup \{f_1(x):x\in C_{i}\}>-\infty$. \end{enumerate}
\end{coro}
\begin{proof}
Since $\log f_1$ is of summable variation, there exists $N \in {\mathbb R}$ such that $$\sum_{n=1}^{\infty}\sup\left\{\left\vert \log \frac{f_1(x)}{f_1(y)} \right\vert : x_i=y_i, 1\leq i\leq n \right\}\leq N.$$ Therefore, for $x, y\in \Sigma$ such that $x_i=y_i$, $1\leq i\leq n$, we have that \begin{eqnarray*}
\log \frac{f_1(x)f_1(\sigma x)\dots f_1(\sigma^{n-1}x)}{f_1(y)f_1(\sigma y)\dots f_1(\sigma^{n-1}y)} &\leq&\\
\sup \left\{\log \frac{f_1(x)}{f_1(y)}:x_i=y_i, 1\leq i\leq n \right\} &+&\\ \sup \left\{\log \frac{f_1(\sigma x)}{f_1(\sigma y)}:(\sigma x)_i =(\sigma y)_i, 1\leq i\leq n-1 \right\}+ &\dots& \\ +\dots+\sup\left\{\log \frac{f_1(\sigma^{n-1} x)}{f_1(\sigma^{n-1} y)}:(\sigma^{n-1} x)_i=(\sigma^{n-1} y)_i, i=1 \right\} &\leq N. \end{eqnarray*} Then $$\sup_{n\in {\mathbb N}}\left\{\frac{f_1(x)f_1(\sigma x)\dots f_1(\sigma^{n-1}x)}{f_1(y)f_1(\sigma y) \dots f_1(\sigma^{n-1}y)}: x_i=y_i, 1\leq i\leq n \right\} \leq e^{N},$$ and the result follows by Proposition \ref{general}. \end{proof}
The next theorem characterises the existence of a Gibbs equilibrium state for ${\mathcal F}$ in terms of the existence of the Gibbs equilibrium measure for $\log f_1$.
\begin{teo} \label{main} Let $\Sigma$ be a topologically mixing countable Markov shift with the BIP property. Let ${\mathcal F}=\{\log f_n\}_{n=1}^{\infty}$ be an almost-additive Bowen sequence on $\Sigma$ satisfying $\sum_{i\in {\mathbb N}} \sup f_1\vert_{C_i}< \infty$ and \begin{equation*} \sup_{n\in {\mathbb N}}\left\{\frac{f_1(x)f_1(\sigma x)\dots f_1(\sigma^{n-1}x)}{f_1(y)f_1(\sigma y)\dots f_1(\sigma^{n-1}y)}: x_i=y_i, 1\leq i\leq n \right\}<\infty. \end{equation*} Then ${\mathcal F}$ has a unique Gibbs equilibrium state $\mu_{{\mathcal F}}$ for ${\mathcal F}$ if and only if $\log f_1$ has a unique Gibbs equilibrium state $\mu_{\log f_1}$ for $\log f_1$. \end{teo}
\begin{proof} To begin with, note that $P(\log f_1) <\infty$ and that the assumptions made on $f_1$ together with Proposition \ref{general} imply that there exists a Gibbs state $\mu_{\log f_1}$ for $\log f_1$.
Let us first assume that there exists a unique Gibbs equilibrium state, $\mu_{{\mathcal F}}$, for ${\mathcal F}$ and prove that $\mu_{\log f_1}$ is an equilibrium measure for $\log f_1$. Since $\mu_{{\mathcal F}}$ is a Gibbs equilibrium state we have that $\lim_{n\rightarrow \infty}\frac{1}{n}\int \log f_n d\mu_{{\mathcal F}}>-\infty$. Proposition \ref{chara} then implies that
$$\sum_{i=1}^{\infty}\sup \{\log f_1(x):x\in C_{i}\}\sup \{f_1(x):x\in C_{i}\}>-\infty.$$ Thus, there exist constants $C_0', M' \in {\mathbb R}^+$ such that \begin{equation*} \int \log f_1 d\mu_{\log f_1}\geq \frac{e^{-P(\log f_1)}}{C_0'M'}\sum_{i=1}^{\infty} \sup \left\{\log \frac{f_1(x)}{M'}:x\in C_i \right\} \sup\{f_1(x):x\in C_i\}. \end{equation*} Therefore, $\int \log f_1 d\mu_{\log f_1}>-\infty$ and we can conclude that $\mu_{\log f_1}$ is the unique Gibbs equilibrium state for $\log f_1$.
Conversely, suppose $\log f_1$ has a unique Gibbs equilibrium state $\mu_{\log f_1}$. Then $\int \log f_1 d\mu_{\log f_1}>-\infty$ and Proposition \ref{general} implies that
$$\sum_{i=1}^{\infty}\sup \{\log f_1(x):x\in C_{i}\}\sup \{f_1(x):x\in C_{i}\}>-\infty.$$ Therefore, applying Theorem \ref{main2} and Proposition \ref{chara} we obtain that $\mu_{{\mathcal F}}$ is the unique Gibbs equilibrium state for ${\mathcal F}$. \end{proof}
\begin{rem} A result similar to that in Corollary \ref{logf1}, but under different regularity assumptions, was proved in \cite{mu}. \end{rem}
\begin{coro} Let $\Sigma$ be a topologically mixing countable Markov shift with the BIP property. Let ${\mathcal F}=\{\log f_n\}_{n=1}^{\infty}$ be an almost-additive Bowen sequence on $\Sigma$, where $\log f_1$ is a function of summable variation that satisfies $\sum_{i\in {\mathbb N}} \sup f_1\vert_{C_i}< \infty$. Then ${\mathcal F}$ has a unique Gibbs equilibrium state $\mu_{{\mathcal F}}$ for ${\mathcal F}$ if and only if $\log f_1$ has a unique Gibbs equilibrium state $\mu_{\log f_1}$ for $\log f_1$. \end{coro} \begin{proof} The result immediately follows from Proposition \ref{general} and Theorem \ref{main}. \end{proof}
\section{Zero temperature limits of Gibbs equilibrium states}\label{zero} This section is devoted to stating and proving our main result. We prove that to a certain class of almost-additive sequences of potentials we can associate a family of Gibbs equilibrium states, and that this family has at least one accumulation point. It turns out that any such accumulation point is a maximising measure. This result generalises the zero temperature limit theorems obtained for a single function in the compact (see \cite[Section 4]{j1}) and in the non-compact settings (see \cite{jmu,big,bf,i,ke}). The major difficulty we have to face in this context is that the space of invariant probability measures is not compact, hence the existence of an accumulation point is far from trivial. The techniques we use in the proof are inspired by results of Jenkinson, Mauldin and Urba\'nski \cite{jmu}. We begin by defining the class of sequences of potentials that we will be interested in.
\begin{defi}\label{property} Let $(\Sigma, \sigma)$ be a countable Markov shift satisfying the BIP property. A sequence of continuous functions ${\mathcal F}=\{\log f_n\}_{n=1}^{\infty}$, with $f_i:\Sigma \to {\mathbb R}^{+}$, $i\in{\mathbb N}$, belongs to the class $\mathcal{R}$ if it satisfies the following properties: \begin{enumerate} \item The sequence ${\mathcal F}$ is almost-additive, Bowen and $\sum_{i\in \mathbb{N}} \sup f_1\vert_{C_i}<\infty$. \label{a1} \item $\sum_{i=1}^{\infty}\sup \{\log f_1(x):x\in C_i\}\sup \{f_1(x):x\in C_i\}>-\infty$. \label{a2} \end{enumerate} \end{defi}
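A simple example, given only for illustration: let $(\Sigma,\sigma)$ be the full shift on ${\mathbb N}$ and let $f_n(x)=e^{\phi(x_1)+\dots+\phi(x_n)}$ with $\phi(i)=-2\log i$. The sequence ${\mathcal F}=\{\log f_n\}_{n=1}^{\infty}$ is additive and, being constant on cylinders of length $n$, it is a Bowen sequence with $M=1$; moreover, $\sum_{i\in \mathbb{N}}\sup f_1\vert_{C_i}=\sum_{i\in \mathbb{N}}i^{-2}<\infty$ and $\sum_{i=1}^{\infty}(-2\log i)\,i^{-2}>-\infty$, so that ${\mathcal F}\in\mathcal{R}$.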
\begin{rem} Note that it follows from Theorem \ref{main2} and Proposition \ref{chara} that if $\mathcal{F} \in \mathcal{R}$ then there exists a unique invariant Gibbs equilibrium state $\mu_{{\mathcal F}}$ for ${\mathcal F}$. Similarly, for every $t\geq 1$, there exists a unique invariant Gibbs equilibrium state $\mu_{t{\mathcal F}}$ for $t{\mathcal F}$. \end{rem}
We begin by proving the upper semi-continuity of the limit of the integrals. This is an essential result, and it holds under weaker assumptions than those considered in the definition of the class $\mathcal{R}$.
\begin{lema}\label{lema1} Let $(\Sigma, \sigma)$ be a countable Markov shift and ${\mathcal F}=\{\log f_n\}_{n=1}^{\infty}$ an almost-additive sequence of continuous functions on $\Sigma$ with $\sup f_1<\infty$. Then the map $m: {\mathcal M} \rightarrow \mathbb{R}$ defined by $m(\mu)=\lim_{n\rightarrow \infty} \frac{1}{n}\int \log f_n d\mu$ is upper semi-continuous. \end{lema}
\begin{proof} We use an argument similar to that in the proof of \cite[Proposition 3.7]{Y}. Let $\{\mu_{i}\}_{i\geq 1}$ be a sequence of measures in ${\mathcal M}$ which converges to a measure $\mu\in {\mathcal M}$ in the weak-star topology.
Let $M_1 \in {\mathbb R}$ be such that $\sup f_1\leq M_1$ and $C \in {\mathbb R}$ the almost-additivity constant that appears in equations (\ref{A1}) and (\ref{A2}) of Definition \ref{aaa}. Let $g_n(x)=f_n(x)e^C$ and note that $\{\log g_n\}_{n=1}^{\infty}$ is a sub-additive sequence. Moreover, $(\log g_1(x))^{+}\leq \log M_1 +\log e^C$ and so $(\log g_1)^{+} \in L_{1}(\mu_i)$ for each $i\geq 1$. Also, for each fixed $n\in {\mathbb N}$ we have that $(\log g_n(x))^+\leq n \log M_1 +nC<\infty$
which implies that for $i\geq 1$ we have $(\log g_n)^{+} \in L_{1}(\mu_i)$. The sub-additive ergodic theorem (see \cite[Theorem 10.1]{w2}) implies that for each fixed $\mu_i \in {\mathcal M}$ and $k \in {\mathbb N}$ we have, \begin{equation*}
\lim_{n\rightarrow \infty}\frac{1}{n}\int \log g_n d\mu_i\leq \frac{1}{k}\int \log g_k d\mu_i. \end{equation*} Thus, \begin{equation*}
\lim_{n\rightarrow \infty}\frac{1}{n}\int \log f_n d\mu_i\leq \frac{1}{k}\int \log f_k d\mu_i +\frac{C}{k}. \end{equation*} Therefore, \begin{equation*} \limsup_{i\rightarrow \infty}\lim_{n\rightarrow \infty} \frac{1}{n}\int \log f_n d{\mu}_i\leq \frac{1}{k} \int \log f_k d\mu+\frac{C}{k}. \end{equation*} Letting $k \rightarrow \infty$, we obtain $$\limsup_{i\rightarrow \infty}\lim_{n\rightarrow \infty}\frac{1}{n}\int \log f_n d\mu_i\leq \lim_{k\rightarrow \infty}\frac{1}{k} \int \log f_k d\mu,$$ which shows that $\limsup_{i\rightarrow \infty}m(\mu_i) \leq m(\mu)$, concluding the proof.
\end{proof}
In our next lemma, we consider a one-parameter family of Gibbs states $\mu_{t{\mathcal F}}$ corresponding to a family $t{\mathcal F}$. We show how the constants in the Gibbs property depend on the parameter $t$. The proof of the lemma strongly uses the combinatorial assumptions we made on the system and an approximation argument we now describe. Since $(\Sigma, \sigma)$ is topologically mixing and has the BIP property, there exist $k\in {\mathbb N}$ and a finite collection $W$ of admissible words of length $k$ such that for any $a, b\in S$, there exists $w\in W$ such that $awb$ is admissible (see \cite[p.1752]{s3} and \cite{mu}). Denote by $A$ the transition matrix for $\Sigma$. It is known (see \cite{iy,s1}) that, after rearranging the set ${\mathbb N}$, there is an increasing sequence $\{l_{n}\}_{n=1}^{\infty}$ such that the matrix $A\vert_{ \{1, \dots, l_{n}\}\times \{1, \dots, l_{n}\}}$ is primitive (for a definition of primitive see \cite[p.5]{mu2}). Let $Y_{l_n}$ be the topologically mixing finite state Markov shift with the transition matrix $A\vert_{ \{1, \dots, l_{n}\}\times \{1, \dots, l_{n}\}}$. Then there exists $p\in {\mathbb N}$ such that for all $n\geq p$, the set $Y_{l_n}$ contains all admissible words in $W$. We denote by $B_n(Y_l)$ the set of admissible words of length $n$ in $Y_l$. Given $w\in W$, we let $N_{w}=\sup\{f_k(z):z\in C_w\}$ and $\bar{N}=\min\{N_{w}: w\in W\}$. The proof of the next lemma makes use of ideas from \cite[Claim 4.1]{iy}.
\begin{lema}\label{key} Let $(\Sigma, \sigma)$ be a countable Markov shift with the BIP property and ${\mathcal F} \in \mathcal{R}$. Let $t \geq 1$ and $\mu_{t{\mathcal F}}$ be the unique Gibbs equilibrium state for $t{\mathcal F}$. Then, for every $n \in {\mathbb N}$ and for all $ x\in C_{i_1 \dots i_n}$, we have \begin{equation*}
\frac{\mu_{t{\mathcal F}}(C_{i_1 \dots i_n})}{e^{-nP(t{\mathcal F})}f^{t}_n(x)}\leq \left(\frac{Me^{6C}}{D^5} \right)^t, \end{equation*} where \begin{equation*} D=\frac{\bar{N}e^{-3C}}{M^3 e^{(k-1)C} \max\{\sum_{i\in {\mathbb N}}\sup f_1\vert_{C_i}, (\sum_{i\in {\mathbb N}}\sup f_1\vert_{C_i})^k\}}. \end{equation*} \end{lema}
\begin{proof} Fix $t\in {\mathbb N}$ and $Y_{l_m}$ with $m\geq p$. In order to simplify the notation we denote $Y_{l_m}$ by $Y$. Recall that $Y$ is a compact set. Define $\alpha^{Y}_{n,t}:=\sum_{i_1\cdots i_{n}\in B_{n}(Y)}\sup\{f^{t}_n\vert _Y(y): y\in C_{i_1\dots i_{n}}\}$. For $l \in {\mathbb N}$, let $\nu_{l,t}$ be the Borel probability measure on $Y$ defined by \begin{equation*} \nu_{l,t}(C_{i_1\dots i_{l}})=\frac{\sup\{f^{t}_{l}\vert_{Y}(y):y\in C_{i_1\dots i_{l}}\}}{\alpha^{Y}_{l,t}}. \end{equation*} Let $a_{i_1\dots i_l}:=\sup\{f^{t}_{l}\vert_{Y}(y):y\in C_{i_1\dots i_{l}}\}$. Let $l, n\in{\mathbb N}, l>n$. It is easy to see that \begin{equation}\label{p4} \alpha^{Y}_{l, t}\leq e^{Ct}\alpha^{Y}_{n, t}\alpha^Y_{l-n, t}. \end{equation} Now let $l,n\in {\mathbb N}, l>k$. Using the same arguments used to prove equations (15), (16), (17) of \cite[Claim 4.1]{iy}, for each fixed $i_1\dots i_n\in B_n(Y)$, we have \begin{equation}\label{p1} \sum_{t_1\dots t_l} \sup\{f^{t}_{n+l}\vert _Y(y):y\in C_{i_1\dots i_n t_1\dots t_l}\}\geq \frac{\bar{N}^t e^{-2Ct}}{M^{3t}}a_{i_1\dots i_n} \alpha^Y_{l-k,t}. \end{equation} Thus we obtain \begin{equation} \label{p2} \alpha^{Y}_{n+l, t}\geq \frac{\bar{N}^te^{-3Ct}\alpha^{Y}_{n, t}\alpha^{Y}_{l, t}}{M^{3t}\alpha^{Y}_{k,t}}. \end{equation} Also, from the proof of \cite[Claim 4.1]{iy}, we have that \begin{equation}\label{p3} \alpha^{Y}_{k,t} \leq e^{(k-1)Ct}\left(\sum_{i\in {\mathbb N}}\sup f_1\vert_{C_i}\right)^{tk}. \end{equation}
Using (\ref{p1}), (\ref{p2}) and (\ref{p3}), we obtain for $l,n \in{\mathbb N},l>k$ \begin{equation}\label{p5} \alpha^{Y}_{n+l, t}\geq D^{t}\alpha^{Y}_{n, t}\alpha^{Y}_{l,t}. \end{equation} We now claim that (\ref{p5}) holds for $n\in{\mathbb N}, 1\leq l\leq k$. Let $1\leq r\leq k$. Then \begin{equation*} \alpha^{Y}_{n+r, t}\geq e^{-Ct}\sum_{i_1\dots i_{n+r}\in B_{n+r}(Y)}\sup\{f^{t}_n\vert_Y (y) f^t_{r}\vert _Y (\sigma^n y): y\in C_{i_1\dots i_{n+r}}\}. \end{equation*} Let $i_1\dots i_n\in B_n(Y)$ and $u$ a fixed admissible word of $\Sigma$. Then we can connect $i_1\dots i_n$ and $u$ by an admissible word $w=w_1\dots w_k\in W$ of length $k$ such that $i_1\dots i_nw_1\dots w_ku$ is allowable. Now \begin{equation*} \sum_{i_1\dots i_{n+r}\in B_{n+r}(Y)}\sup\{f^{t}_n\vert_Y (y) f^t_{r}\vert _Y (\sigma^n y):
y\in C_{i_1\dots i_{n+r}}\}\geq \sum_{\substack{\bar y\in C_{i_1\dots i_nw_1\dots w_r}\\ i_1\dots i_n\in B_{n}(Y)}}f^{t}_n\vert_Y (\bar y) f^t_{r}\vert _Y (\sigma^n \bar y), \end{equation*} where $w_1\dots w_{r}\dots w_k\in W$ is chosen for each $i_1\dots i_n\in B_n(Y)$ as explained in the preceding paragraph and $\bar y$ is any point from $C_{i_1\dots i_nw_1\dots w_r}$. Therefore, \begin{equation*} \sum_{\substack{\bar y\in C_{i_1\dots i_nw_1\dots w_r}\\i_1\dots i_n\in B_n(Y)}}f^{t}_n\vert_Y (\bar y) f^t_{r}\vert _Y (\sigma^n \bar y) \geq (\frac{\bar N}{M})^t \sum_{\substack{\bar y\in C_{i_1\dots i_n}\\i_1\dots i_n\in B_n(Y)}}f^{t}_n\vert_Y (\bar y)\geq (\frac{\bar N}{M^2})^t \alpha^{Y}_{n,t}, \end{equation*} where for both inequalities we use the fact that ${\mathcal F}$ has bounded variation. Noting that \begin{equation*} \alpha^{Y}_{r,t}\leq e^{(r-1)Ct}(\sum_{i\in {\mathbb N}}\sup f_1\vert_{C_i})^{tr}\leq e^{(k-1)Ct} (\max\{\sum_{i\in {\mathbb N}}\sup f_1\vert_{C_i}, (\sum_{i\in {\mathbb N}}\sup f_1\vert_{C_i})^k\})^t , \end{equation*} we obtain \begin{equation*} \frac{\alpha^{Y}_{n+r,t}}{\alpha^{Y}_{n,t}\alpha^{Y}_{r,t}}\geq (\frac{\bar N e^{-C}}{M^2e^{(k-1)C}
\max\{\sum_{i\in {\mathbb N}}\sup f_1\vert_{C_i}, (\sum_{i\in {\mathbb N}}\sup f_1\vert_{C_i})^k\}})^t \geq D^t \end{equation*} Now we proved the claim. Using the arguments in the proof of \cite[Claim 4.1]{iy}, for all $n\in {\mathbb N}$, we obtain \begin{equation} \label{p6} D^t\alpha^Y_{n, t}\leq e^{nP(t{\mathcal F}\vert_Y)}\leq e^{Ct}\alpha^{Y}_{n,t}. \end{equation} Now, let $l> n+k$. For each fixed $i_1\dots i_n\in B_n(Y)$, we have \begin{align*} & \nu_{l,t}(C_{i_1\dots i_n})\\ &\leq \sum_{j_1\dots j_{l-n}} \frac{\sup\{f^t_{l}\vert _Y (y): y\in C_{i_1\dots i_nj_1\dots j_{l-n}}\}}{\alpha^{Y}_{l, t}}\\ &\leq \sum_{j_1\dots j_{l-n}} \frac{e^{Ct}\sup\{f^{t}_n\vert_Y (y) f^t_{l-n}\vert _Y (\sigma^n y):
y\in C_{i_1\dots i_nj_1\dots
j_{l-n}}\}}{\alpha^Y_{l,t}}\\ & \leq e^{Ct}\sum_{j_1\dots j_{l-n}} \frac{\sup\{f^{t}_n\vert_Y (y) :
y\in C_{i_1\dots i_n}\} \sup\{f^{t}_{l-n}\vert_Y (\sigma^n y) :
y\in C_{i_1\dots i_n j_1\dots j_{l-n}}\}}{\alpha^Y_{l,t}}\\ &\leq \frac{e^{Ct}}{\alpha^Y_{l,t}}\sup\{f^{t}_n\vert_Y (y) :
y\in C_{i_1\dots i_n}\}\sum_{j_1\dots j_{l-n}} \sup\{f^{t}_{l-n}\vert_Y (y) :
y\in C_{j_1\dots \dots j_{l-n}}\}\\ &\leq \frac{e^{Ct}}{D^t \alpha^{Y}_{n,t}}\sup\{f^t_n\vert_Y(y):y\in C_{i_1\dots i_n}\} \text{ (by } (\ref{p5}))\\ &\leq \frac{e^{2Ct}e^{-nP(t{\mathcal F}\vert_Y)}}{D^t}\sup\{f^t_n\vert_Y(y):y\in C_{i_1\dots i_n}\} \text{ (by } (\ref{p6})). \end{align*} Therefore, we obtain \begin{equation}\label{p7} \frac{\nu_{l,t}(C_{i_1\dots i_n})}{a_{i_1\dots i_n}e^{-nP(t{\mathcal F}\vert_Y)}} \leq \frac{e^{2Ct}}{D^t}. \end{equation} Using the property of bounded variation, for all $y\in C_{i_1\dots i_n}$, we have \begin{equation}\label{p11} \frac{\nu_{l,t}(C_{i_1\dots i_n})}{f^{t}_n\vert_Y(y)e^{-nP(t{\mathcal F}\vert_Y)}} \leq \frac{e^{2Ct}M^t}{D^t}. \end{equation} Also, \begin{align*} &\nu_{l,t}(C_{i_1\dots i_n})\geq \frac{\bar N^{t}e^{-2Ct}} {\alpha^Y_{l, t}M^{3t}}a_{i_1\dots i_n}\alpha^{Y}_{l-n-k,t} \text{ (by } (\ref{p1})) \geq \frac{ \bar N^{t}e^{-2Ct}a_{i_1\dots i_n}\alpha^{Y}_{l-n,t}} {\alpha^Y_{l,t} \alpha^Y_{k,t}M^{3t}e^{Ct}} \text { (by } (\ref{p4}))\\ &\geq \frac{D^t a_{i_1\dots i_n}}{e^{Ct}\alpha^{Y}_{n,t}} \text { (by } (\ref{p4}) \text{ and } (\ref{p3})) \geq \frac{D^{2t}}{e^{Ct}}a_{i_1\dots i_n}e^{-nP(t{\mathcal F}\vert_Y)} \text{ (by } (\ref{p6})). \end{align*} Thus \begin{equation}\label{p10} \frac{\nu_{l,t}(C_{i_1\dots i_n})}{a_{i_1\dots i_n}e^{-nP(t{\mathcal F}\vert _Y)}}\geq \frac{D^{2t}}{e^{Ct}}. \end{equation} Hence we obtain for each $y\in C_{i_1\dots i_n}, l>n+k$, \begin{equation}\label{p8} \frac{D^{2t}}{e^{Ct}}\leq \frac{\nu_{l,t}(C_{i_1\dots i_n})} {f^t_n\vert _Y(y)e^{-nP(t{\mathcal F}\vert_Y)}}\leq \frac{e^{2Ct}M^t}{D^t}. \end{equation} Consider now a convergent subsequence $\{\nu_{l_k,t}\}_{k=1}^{\infty}$ of $\{\nu_{l,t}\}_{l=1}^{\infty}$ and let $\nu_t$ be the corresponding limit point. Then $\nu_t$ also satisfies (\ref{p7}),(\ref{p11}), (\ref{p10}) and (\ref{p8}). We know by \cite[Lemma 2]{b2} that $\nu_t$ is ergodic. Using the arguments in \cite{b2}, we construct $\sigma$-invariant ergodic Gibbs measure $\mu_{t{\mathcal F}\vert_Y}$
for $t{\mathcal F}\vert_Y$. A limit point of the sequence $\{\frac{1}{n}\sum_{l=0}^{n-1}{\nu_t}\circ \sigma^{-l}\}_{n=1}^{\infty}$ is the unique equilibrium state for $t{\mathcal F}\vert_Y$ which is also Gibbs (see \cite{iy}). Let $i_1\dots i_n\in B_n(Y)$ be fixed. For $l>k$, \begin{align*} &\nu_t(\sigma^{-l}(C_{i_1\dots i_n}))\\ &\leq \frac{e^{2Ct}}{D^{t}}\sum_{j_1\dots j_{l} i_{1}\dots i_n} \sup\{f^t_{l+n}\vert_Y(y): y\in C_{j_1\dots j_l i_{i}\dots i_{n}}\}e^{-(l+n)P(t{\mathcal F}\vert _Y)} \text{ (replacing } \nu_{l,t} \text{ by } \nu_t \text{ in } (\ref{p7}))\\ &\leq \frac{e^{3Ct}}{D^t} \sum_{j_1\dots j_l}\sup\{f^t_{l}(y)f^t_n(\sigma^{l}y): y\in C_{j_1\dots j_{l}i_1\dots i_n}\} e^{-(l+n)P(t{\mathcal F}\vert _Y)} \\ &\leq \frac{e^{3Ct}}{D^t}\sup\{f^t_n(y): y\in C_{i_1\dots i_n}\}\alpha^Y_{l,t} e^{-(l+n)P(t{\mathcal F}\vert _Y)}\\ &\leq \frac{e^{3Ct}}{D^{2t}}\sup\{f^t_n(y): y\in C_{i_1\dots i_n}\}e^{-nP(tF\vert_Y)} \text{ (by } (\ref{p6})). \end{align*} Therefore, using (\ref{p10}) (replacing $\nu_{l,t}$ by $\nu_t$), we have \begin{equation*} \nu_t(\sigma^{-l}(C_{i_1\dots i_n}))\leq \frac{e^{4Ct}}{D^{4t}}\nu_t(C_{i_1\dots i_n}). \end{equation*} Using (\ref{p8}) (replacing $\nu_{l,t}$ by $\nu_t$), for all $y\in C_{i_1\dots i_n}$, \begin{equation*} \frac{1}{m} \sum_{l=0}^{m-1}\nu_t(\sigma^{-l}(C_{i_1\dots i_n})) \leq \frac{m-k}{m}(\frac{e^{6Ct}M^t f^t_n\vert_Y(y) e^{-nP(t{\mathcal F}\vert_Y)}}{D^{5t}})+\frac{k}{m} \end{equation*} and hence for all $y\in C_{i_1\dots i_n}$, \begin{equation}\label{p9} \mu_{t{\mathcal F}\vert_Y}(C_{i_1\dots i_n}) \leq \frac{e^{6Ct}M^t f^t_n\vert_Y(y) e^{-nP(t{\mathcal F}\vert_Y)}}{D^{5t}}. \end{equation} Therefore, for each fixed $l_m, m\geq p$, $t{\mathcal F}\vert_{Y_{l_m}}$ has a unique equilibrium state $\mu_{t{\mathcal F}\vert_{Y_{l_m}}}$ which is Gibbs and satisfies (\ref{p9}) (replacing $\mu_{t{\mathcal F}\vert_Y}$ by $\mu_{t{\mathcal F}\vert_{Y_{l_m}}}$). The proof of \cite[Theorem 4.1]{iy} shows that (\ref{p9}) holds when we replace $\mu_{t{\mathcal F}\vert_Y}$ and $f^{t}_n\vert_Y(y)$ by the unique Gibbs equilibrium state $\mu_{t{\mathcal F}}$ for $t{\mathcal F}$ and $f^{t}_n(y)$ respectively. This proves the lemma. \end{proof}
\begin{lema}\label{L1} Let $(\Sigma, \sigma)$ be a countable Markov shift with the BIP property and ${\mathcal F} \in \mathcal{R}$. Then the family of Gibbs equilibrium states $\{\mu_{t{\mathcal F}}\}_{t\geq 1}$ is tight, i.e., for all $\epsilon>0$, there exists a compact set $K\subset \Sigma$ such that for all $t\geq 1$ we have $\mu_{t{\mathcal F}}(K)>1-\epsilon$ . \end{lema}
\begin{proof} The proof is based on \cite[Lemma 2]{jmu}. Let $\epsilon>0$. We construct an increasing sequence of positive integers $\{n_k\}_{k=1}^{\infty}$ such that the compact set $$K=\{x\in \Sigma: 1\leq x_k\leq n_k, \text { for all } k\in \mathbb{N}\}$$
satisfies $\mu_{t{\mathcal F}}(K)>1-\epsilon$ for all $t\geq 1$. Let $\pi_{k}: \Sigma \rightarrow \mathbb{N}$ be the projection map onto the $k$-th coordinate. Note that \begin{align*} \mu_{t{\mathcal F}}(K)=&\mu_{t{\mathcal F}} \left( \Sigma \cap (\cup_{k=1}^{\infty} \{x\in \Sigma :x_k>n_k\})^{c}\right)\\ &\geq 1-\sum_{k=1}^{\infty}\mu_{t{\mathcal F}}(\{x\in \Sigma :x_k>n_k\})\\ &= 1-\sum_{k=1}^{\infty}\sum_{i=n_k+1}^{\infty}\mu_{t{\mathcal F}}(\pi_{k}^{-1}(i))\\ &= 1-\sum_{k=1}^{\infty}\sum_{i=n_k+1}^{\infty}\mu_{t{\mathcal F}}(C_i). \end{align*} Therefore, in order to show that $\{\mu_{t{\mathcal F}}\}_{t\geq1}$ is tight, it is enough to find $\{n_k\}_{k=1}^{\infty}$ such that \begin{equation}\label{e1} \sum_{i={n_k}+1}^{\infty}\mu_{t{\mathcal F}}(C_i)<\frac{\epsilon}{2^k}, \text{ for all } k\in \mathbb{N}, t\geq 1. \end{equation} Now, let $N={Me^{6C}}/{D^5}$ in Lemma \ref{key}. If $n=1$, we have \begin{equation*}
\mu_{t{\mathcal F}}(C_i)\leq N^te^{-P(t{\mathcal F})}\sup\{f_1^{t}(x):x\in C_i\} \text{ for all } t\geq 1. \end{equation*} Now, let $m$ be any $\sigma$-invariant Borel probability measure for which the limit $$I=\lim_{n\rightarrow \infty}\frac{1}{n}\int \log f_n dm$$ is finite. Then \begin{align*} P(t{\mathcal F})-tI=&\sup\left\{h_{\mu}(\sigma) + t\lim_{n\rightarrow \infty}\frac{1}{n}\int \log f_n d\mu : \mu \in {\mathcal M}\right\}-tI\\ &= P(t{\mathcal F}-tI)\geq h_m(\sigma)\geq 0. \end{align*} Therefore, for $t\geq 1$, \begin{align}\label{e5} \mu_{t{\mathcal F}}(C_i)&\leq N^te^{-P(t({\mathcal F}-I))}e^{-tI} (\sup\{f_1(x):x\in C_i\})^t \\ &\leq N^te^{-tI}(\sup\{f_1(x):x\in C_i\})^t\\ &=\left (Ne^{-I}\right)^t \left(\sup\{f_1(x):x\in C_i\}\right)^t \end{align} Note that Definition \ref{property} (\ref{a1}) implies that, given $\epsilon>0$, we can find $J \in {\mathbb N}$ such that \begin{equation}\label{e6} \sum_{i>J}\sup\{f_1(x):x\in C_i\}<\frac{\epsilon}{Ne^{-I}}\frac{1}{2^k}. \end{equation} Now we show equation (\ref{e1}). Using (\ref{e5}) and (\ref{e6}), we obtain \begin{align*} \sum_{i>J}\mu_{t{\mathcal F}}(C_i)&\leq \left (Ne^{-I}\right)^t\sum_{i>J}(\sup\{f_1(x):x\in C_i\})^t\\ &=\left(\frac{\epsilon}{2^k}\right)^t\leq \frac{\epsilon}{2^k}. \end{align*} Thus we obtain (\ref{e1}). \end{proof}
\begin{rem} Lemma \ref{L1} implies that the family of Gibbs equilibrium states $\{\mu_{t{\mathcal F}}\}_{t\geq 1}$ has a subsequence that converges weakly to a $\sigma$-invariant Borel probability measure $\mu$. \end{rem}
We now state and prove our main result.
\begin{teo}\label{main4} Let $(\Sigma, \sigma)$ be a countable Markov shift with the BIP property and let ${\mathcal F} \in \mathcal{R}$. Denote by $\mu \in {\mathcal M}$ any accumulation point of the sequence of Gibbs equilibrium states $\{\mu_{t{\mathcal F}}\}_{t\geq 1}$. Then \begin{equation} \label{the:eq} \lim_{n\rightarrow \infty}\frac{1}{n}\int \log f_n d\mu= \lim_{t\rightarrow \infty}\lim_{n\rightarrow \infty}\frac{1}{n}\int \log f_n d\mu_{t{\mathcal F}}, \end{equation} and $\mu$ is a maximising measure for ${\mathcal F}$. \end{teo}
\begin{proof} We will divide the proof of the theorem into several lemmas and remarks.
\begin{rem} Recall that the pressure function $t \mapsto P(t{\mathcal F})$ is convex (see \cite[Corollary 3.2]{iy}) and finite for $t \geq 1$; therefore it is differentiable at every $t > 1$ except, possibly, at countably many points. \end{rem}
\begin{lema}[Derivative of the pressure] \label{lema:der_pres} If the function $P(t {\mathcal F})$ is differentiable at $t=t_0$, $t_0>1$, then
\[ \frac{d P(t{\mathcal F})}{dt} \Big|_{t=t_0} = \lim_{n \to \infty} \frac{1}{n} \int \log f_n d \mu_{t_0 {\mathcal F}}.\] \end{lema}
\begin{proof}[Proof of Lemma \ref{lema:der_pres}] The proof of this result is fairly standard; see for example \cite[Theorem 1.2]{Fe}. Let $\epsilon >0$. Then \begin{eqnarray*} P(t_0{\mathcal F})= h(\mu_{t_0 {\mathcal F}}) + t_0\lim_{n \to \infty} \frac{1}{n} \int \log f_n d \mu_{t_0 {\mathcal F}} \quad \text{ and } \\
P((t_0+ \epsilon) {\mathcal F}) \geq h(\mu_{t_0 {\mathcal F}}) + t_0 \lim_{n \to \infty} \frac{1}{n} \int \log f_n d \mu_{t_0 {\mathcal F}} + \epsilon \lim_{n \to \infty} \frac{1}{n} \int \log f_n d \mu_{t_0 {\mathcal F}}. \end{eqnarray*} Thus, \begin{equation*} \frac{P((t_0+ \epsilon) {\mathcal F}) -P(t_0{\mathcal F}) }{\epsilon} \geq \lim_{n \to \infty} \frac{1}{n} \int \log f_n d \mu_{t_0 {\mathcal F}}. \end{equation*} Recalling that one-sided derivatives of the pressure do exist, we have \begin{equation*} P'_+(t_0 {\mathcal F}):=\lim_{\epsilon \to 0^+}\frac{P((t_0+ \epsilon) {\mathcal F}) -P(t_0{\mathcal F}) }{\epsilon} \geq \lim_{n \to \infty} \frac{1}{n} \int \log f_n d \mu_{t_0 {\mathcal F}}. \end{equation*} On the other hand, let $\epsilon <0$, then \begin{equation*} \frac{P((t_0+ \epsilon) {\mathcal F}) -P(t_0{\mathcal F}) }{\epsilon} \leq \lim_{n \to \infty} \frac{1}{n} \int \log f_n d \mu_{t_0 {\mathcal F}}. \end{equation*} Thus, \begin{equation*} P'_-(t_0 {\mathcal F}):=\lim_{\epsilon \to 0^-}\frac{P((t_0+ \epsilon) {\mathcal F}) -P(t_0{\mathcal F}) }{\epsilon} \leq \lim_{n \to \infty} \frac{1}{n} \int \log f_n d \mu_{t_0 {\mathcal F}}. \end{equation*} Hence \begin{equation*} P'_-(t_0 {\mathcal F})\leq \lim_{n \to \infty} \frac{1}{n} \int \log f_n d \mu_{t_0 {\mathcal F}} \leq P'_+(t_0 {\mathcal F}). \end{equation*} Since we are assuming that $P(t {\mathcal F})$ is differentiable at $t=t_0$, $t_0>1$, we obtain the desired result, \begin{eqnarray*} P'_+(t_0 {\mathcal F})=P'_-(t_0 {\mathcal F})= \lim_{n \to \infty} \frac{1}{n} \int \log f_n d \mu_{t_0 {\mathcal F}}. \end{eqnarray*} \end{proof}
\begin{rem}[Uniform upper bound on the derivative] \label{rem:ub} If $\{t_k\}_{k=1}^{\infty}$ is an increasing sequence of positive real numbers such that $t_k \to \infty$ and for which the pressure function $P(t {\mathcal F})$ is differentiable at $t=t_k$, $t_k>1$, for every $k \in {\mathbb N}$, then
\[ \lim_{k \to \infty} \frac{d P(t{\mathcal F})}{dt} \Big|_{t=t_k} \leq C + \log \Big( \sum_{n=1}^{\infty} \sup f_1 |_{C_n} \Big) < \infty,\] where $C$ is the almost-additivity constant of ${\mathcal F}$. Recall that the derivative of the pressure is an increasing function of $t$ (since the pressure is convex), hence the left-hand side limit exists. The bound above follows from the almost-additivity of the family ${\mathcal F}$ together with Definition \ref{property} (1).
\begin{lema}[Asymptotic derivative] \label{rem:ad} The following limits exist and \[\lim_{t \to \infty} \frac{P(t{\mathcal F})}{t}=\lim_{t \to \infty} P'_+(t {\mathcal F}) =\lim_{t \to \infty} P'_-(t {\mathcal F}).\] \end{lema}
\begin{proof} Note that the functions $P'_+(t {\mathcal F})$ and $P'_-(t {\mathcal F})$ are increasing (since the pressure is convex) and bounded above (because of Remark \ref{rem:ub}). Therefore both limits exist. Moreover, if $t_1 <t_2$ we have that $P'_+(t_1 {\mathcal F}) \leq P'_-(t_2 {\mathcal F})$, thus both limits coincide. The fact that these limits coincide with the limit of $P(t{\mathcal F})/t$ follows from the following convexity inequality \begin{equation*} P'_+(t_1 {\mathcal F}) \leq \frac{P(t_1 {\mathcal F}) - P(t_2 {\mathcal F}) }{t_1 - t_2} \leq P'_-(t_2 {\mathcal F}). \end{equation*} \end{proof}
Recall that, \begin{equation*} \alpha({\mathcal F}):= \sup \left\{ \lim_{n\rightarrow \infty}\frac{1}{n}\int \log f_n d\nu : \nu \in {\mathcal M} \right\}. \end{equation*}
\begin{lema}[The asymptotic derivative bounds the optimal value] \label{rem:adbov} For every $\mu \in {\mathcal M}$ we have that \begin{equation} \label{eq:non} \lim_{n \to \infty} \frac{1}{n} \int \log f_n d \mu \leq \lim_{t \to \infty} \frac{P(t{\mathcal F})}{t}. \end{equation} \end{lema} \begin{proof} Indeed, assume by way of contradiction that there exists $\nu \in {\mathcal M}$ for which equation \eqref{eq:non} does not hold. Then, there exists $t_0 >1$ such that \begin{equation*} \lim_{n \to \infty} \frac{1}{n} \int \log f_n d \nu > \frac{P(t_0{\mathcal F})}{t_0}. \end{equation*} Note that since $-\infty < \lim_{n \to \infty} \frac{1}{n} \int \log f_n d \nu < \infty$ and $P(t{\mathcal F}) <\infty$, we can conclude that $h(\nu) <\infty$. Thus,
\begin{equation*}
h(\nu) + t_0 \lim_{n \to \infty} \frac{1}{n} \int \log f_n d \nu > P(t_0{\mathcal F}).
\end{equation*} This contradicts the variational principle. Therefore \begin{equation*} \alpha({\mathcal F}) \leq \lim_{t \to \infty} \frac{P(t{\mathcal F})}{t} = \lim_{t \to \infty} P'_+(t {\mathcal F})=\lim_{t \to \infty} P'_-(t {\mathcal F}). \end{equation*} \end{proof}
\begin{rem}[The entropy is decreasing] \label{rem:ed} Note now that for every $t >1$ we have $h(\mu_{{\mathcal F}}) \geq h(\mu_{t{\mathcal F}})$. This is a consequence of the convexity of the pressure (see \cite[Corollary 3.2]{iy}). Indeed, assume by way of contradiction that there exists $t_0 >1$ for which $h(\mu_{{\mathcal F}}) < h( \mu_{t_0{\mathcal F}})$. Consider the following straight lines: \begin{eqnarray*} l_1(t):= h(\mu_{{\mathcal F}}) + t \lim_{n\rightarrow \infty}\frac{1}{n}\int \log f_n d\mu_{{\mathcal F}} \text{ and } &\\ l_{t_0}(t):= h(\mu_{t_0{\mathcal F}}) + t \lim_{n\rightarrow \infty}\frac{1}{n}\int \log f_n d\mu_{t_0{\mathcal F}}. \end{eqnarray*} We first note that $l_1(t)$ and $l_{t_0}(t)$ are finite for $t\geq 0$ because ${\mathcal F}\in\mathcal{R}$. Since $\mu_{{\mathcal F}}$ is the unique equilibrium measure for $ {\mathcal F}$ and $\mu_{t_0{\mathcal F}}$ is the unique equilibrium measure for $t_0 {\mathcal F}$, we have that \begin{eqnarray*} l_{t_0}(0) > l_1(0) \quad , \quad l_{t_0}(1) < l_1(1) \quad \text{ and } \quad l_{t_0}(t_0) > l_1(t_0). \end{eqnarray*} But this cannot happen, since two distinct straight lines can intersect in at most one point, while the above inequalities force them to cross at least twice. \end{rem}
We finally prove the Theorem.
Let $\epsilon >0$. Since the function $P'_+(t {\mathcal F})$ is non-decreasing and $\alpha({\mathcal F}) \leq \lim_{t \to \infty} P'_+(t {\mathcal F})$, there exists $t_0 >1$ such that \begin{equation*} \alpha({\mathcal F}) - \epsilon \leq \lim_{n \to \infty} \frac{1}{n} \int \log f_n d \mu_{t_0 {\mathcal F}}. \end{equation*} Therefore,
\begin{eqnarray*} \alpha({\mathcal F})-\epsilon \leq \frac{h(\mu_{t_0 {\mathcal F}})}{t} + \lim_{n \to \infty} \frac{1}{n} \int \log f_n d \mu_{t_0 {\mathcal F}} \leq \frac{P(t {\mathcal F})}{t} =&\\ \frac{h(\mu_{t{\mathcal F}})}{t} + \lim_{n\rightarrow \infty}\frac{1}{n}\int \log f_n d\mu_{t{\mathcal F}} \leq \frac{h(\mu_{{\mathcal F}})}{t} + \lim_{n\rightarrow \infty}\frac{1}{n}\int \log f_n d\mu_{t{\mathcal F}} \leq &\\
\frac{h(\mu_{{\mathcal F}})}{t} +\alpha({\mathcal F}).
\end{eqnarray*} Thus, for any $\epsilon >0$ we have that \begin{equation*} \alpha({\mathcal F})-\epsilon \leq \limsup_{t \to \infty} \frac{P(t {\mathcal F})}{t} \leq \limsup_{t \to \infty} \lim_{n\rightarrow \infty}\frac{1}{n}\int \log f_n d\mu_{t{\mathcal F}} \leq \alpha({\mathcal F}). \end{equation*} Similarly, we have that \begin{equation*} \alpha({\mathcal F})-\epsilon \leq \liminf_{t \to \infty} \frac{P(t {\mathcal F})}{t} \leq \liminf_{t \to \infty} \lim_{n\rightarrow \infty}\frac{1}{n}\int \log f_n d\mu_{t{\mathcal F}} \leq \alpha({\mathcal F}). \end{equation*} Therefore, letting $\epsilon\rightarrow 0$, \begin{equation*} \liminf_{t \to \infty} \lim_{n\rightarrow \infty}\frac{1}{n}\int \log f_n d\mu_{t{\mathcal F}} =\limsup_{t \to \infty} \lim_{n\rightarrow \infty}\frac{1}{n}\int \log f_n d\mu_{t{\mathcal F}}. \end{equation*} It follows from Lemma \ref{lema1} that \begin{equation*} \lim_{t \to \infty} \lim_{n\rightarrow \infty}\frac{1}{n}\int \log f_n d\mu_{t{\mathcal F}} =\lim_{n\rightarrow \infty}\frac{1}{n}\int \log f_n d\mu=\alpha({\mathcal F}). \end{equation*} Hence we obtain (\ref{the:eq}) and $\mu$ is an ${\mathcal F}$-maximising measure.
\end{proof}
We stress that the result obtained in Theorem \ref{main4} is new even in the compact setting, where the existence of accumulation points of the family of Gibbs states follows directly from the compactness of the space of invariant measures. Also note that in this compact setting, since the system has finite topological entropy, every Gibbs measure is an equilibrium measure.
\begin{coro} \label{coro:compact} Let $(\Sigma, \sigma)$ be a transitive sub-shift of finite type defined over a finite alphabet and ${\mathcal F}$ be an almost-additive Bowen sequence on $\Sigma$. Denote by $\mu \in {\mathcal M}$ any accumulation point of the sequence of Gibbs equilibrium states $\{\mu_{t{\mathcal F}}\}_{t\geq 1}$. Then \begin{equation*} \lim_{n\rightarrow \infty}\frac{1}{n}\int \log f_n d\mu= \lim_{t\rightarrow \infty}\lim_{n\rightarrow \infty}\frac{1}{n}\int \log f_n d\mu_{t{\mathcal F}}, \end{equation*} and $\mu$ is a maximising measure for ${\mathcal F}$. \end{coro} \begin{proof} Since $(\Sigma, \sigma)$ is a sub-shift of finite type over a finite alphabet, it is clear that ${\mathcal F}$ satisfies (\ref{a1}) and (\ref{a2}) of Definition \ref{property}. Now we apply Theorem \ref{main4}.
\end{proof}
\section{The joint spectral radius} \label{sec:joint} In this section we will show that the techniques developed in this paper have interesting applications in functional analysis. Even in the compact (finite alphabet) setting, Theorem \ref{main4} can be used to obtain results in spectral theory. We begin by recalling some basic definitions. Let $A$ be a $d \times d$ real matrix; the \emph{spectral radius} of $A$ is defined by
\[ \rho(A)= \max \{ |\lambda|: \lambda \text{ is an eigenvalue of } A \}.\]
It is well known that if $\| \cdot \|$ is any sub-multiplicative matrix norm, then the following relation (sometimes called the Gelfand formula) holds
\[ \rho(A)= \lim_{n \to \infty} \| A^n \|^{1/n}.\]
Let ${\mathcal A}:=\{A_1, A_2, \dots ,A_k\}$ be a set of $d \times d$ real matrices and $\| \cdot \|$ a sub-multiplicative matrix norm. The \emph{joint spectral radius} $\varrho({\mathcal A})$ is defined by \begin{equation*}
\varrho({\mathcal A}):=\lim_{n \to \infty} \max \left\{ \|A_{i_n} \cdots A_{i_1} \|^{1/n} : i_j \in \{1, 2, \dots, k\} \right\}. \end{equation*} This notion, which generalises the notion of spectral radius to a set of matrices, was introduced by G.-C. Rota and W.G. Strang in 1960 \cite{rs}. Interest in it was strongly renewed by its applications to the study of wavelets discovered by Daubechies and Lagarias \cite{dl1,dl2}. The value of $\varrho({\mathcal A})$ is independent of the choice of the norm, since all such norms are equivalent. There is a wide range of applications of the joint spectral radius in different topics, including not only wavelets \cite{p} but also, for example, combinatorics \cite{d}. Lagarias and Wang \cite{lw} conjectured that for any finite set of matrices ${\mathcal A}$ there exist integers $\{i_1, \dots, i_n\}$ such that the periodic product $A_{i_n} \cdots A_{i_1}$ satisfies \begin{equation*} \varrho({\mathcal A})= \rho (A_{i_n} \cdots A_{i_1})^{1/n}. \end{equation*} This conjecture was proven to be false by Bousch and Mairesse \cite{bm} and explicit counterexamples were first obtained by Hare, Morris, Sidorov and Theys \cite{hm}.
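As a simple illustration (not taken from the results below): if all the matrices in ${\mathcal A}$ are diagonal, say $A_i=\mathrm{diag}(d^{(i)}_1,\dots,d^{(i)}_d)$, then every product $A_{i_n} \cdots A_{i_1}$ is diagonal with entries $\prod_{l=1}^{n}d^{(i_l)}_j$, and a direct computation shows that $\varrho({\mathcal A})=\max_{i}\rho(A_i)=\max_{i,j}\vert d^{(i)}_j\vert$; in this case the finiteness property above trivially holds with a product of length one.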
It is possible to restate the definition of joint spectral radius in terms of dynamical systems. Indeed, let $(\Sigma_k, \sigma)$ be the full-shift on $k$ symbols and consider the family of maps, $\phi_n:\Sigma_k \to {\mathbb R}$ defined by
\[\phi_n(x):= \|A_{i_n} \cdots A_{i_1} \|. \] The family ${\mathcal F}:= \left\{ \log \phi_n \right\}_{n=1}^{\infty}$ is sub-additive on $\Sigma_k$. Denote by ${\mathcal M}$ the set of $\sigma-$invariant probability measures. If $\nu \in {\mathcal M}$ is ergodic then Kingman's sub-additive theorem implies that $\nu$-almost everywhere the following equality holds \begin{equation*} \lim_{n \to \infty} \frac{1}{n} \int \log \phi_n \ d \nu = \lim_{n \to \infty} \frac{1}{n} \log \phi_n(x). \end{equation*}
In this compact setting there exists an invariant measure realising the joint spectral radius. It seems that this was first observed by Schreiber \cite{sc} and later rediscovered by Sturman and Stark \cite{ss}. The most general version of this result has been obtained by Morris \cite[Appendix A]{mo}. This later result has the advantage that it allows the potentials $\phi_n$ to have a range in the interval $[-\infty,+\infty)$.
\begin{lema}[Schreiber's Theorem] \label{medida-max} Let ${\mathcal A}:=\{A_1, A_2, \dots ,A_k\}$ be a set of $d \times d$ real matrices. Then there exists a measure $\mu \in {\mathcal M}$ such that \begin{equation*} \varrho({\mathcal A})= \exp\left( \sup \left\{ \lim_{n \to \infty} \frac{1}{n} \int \log \phi_n \ d \nu : \nu \in {\mathcal M}\right\}\right) = \exp \left( \lim_{n \to \infty} \frac{1}{n} \int \log \phi_n d \mu \right). \end{equation*} \end{lema} We refer to \cite[Appendix A]{mo} for a proof of Lemma \ref{medida-max}. It is worth stressing that the definition of joint spectral radius and Lemma \ref{medida-max} hold in a broader context. Indeed, we can consider $(\Sigma, \sigma)$ to be any mixing sub-shift of finite type defined on a finite alphabet and define the corresponding joint spectral radius by
\begin{equation*}
\varrho_{\Sigma}({\mathcal A}):=\lim_{n \to \infty} \max \left\{ \|A_{i_n} \cdots A_{i_1} \|^{1/n} : (i_1 i_2 \dots i_n) \text{ is an admissible word } \right\}. \end{equation*}
Under certain cone conditions for the set ${\mathcal A}$, we will show that there exists a one parameter family of dynamically relevant (Gibbs states) invariant measures $\{\mu_t\}_{t\geq 1}$ such that any weak star accumulation point $\mu$ of it satisfies \begin{equation} \varrho({\mathcal A})=\exp \left( \lim_{n \to \infty} \frac{1}{n} \int \log \phi_n \ d \mu \right). \end{equation}
\begin{teo}[Compact case] Let ${\mathcal A}:=\{A_1, A_2, \dots ,A_k\}$ be a set of $d \times d$ real matrices and $(\Sigma, \sigma)$ a mixing sub-shift of finite type defined on the alphabet $\{1,2, \dots, k\}$. Let $\phi_n :\Sigma_k \to {\mathbb R}$ be defined by
\[\phi_n(w)= \phi_n((i_1, i_2, \dots))= \|A_{i_n} \cdots A_{i_1} \|. \]
If the family $\mathcal{F}= \{\log \phi_n \}_{n=1}^{\infty}$ is almost-additive then for every $t \geq 1$ there exists a unique Gibbs state $\mu_t$ corresponding to $t{\mathcal F}$ and there exists a weak star accumulation point $\mu$ for $\{ \mu_t \}_{t\geq 1}$. The measure $\mu$ is such that \begin{equation*}
\varrho(\mathcal{A})= \exp\left( \lim_{n \to \infty} \frac{1}{n} \int \log \phi_ n d \mu \right)= \exp \left( \lim_{t \to \infty} \left( \lim_{n \to \infty} \frac{1}{n} \int \log \phi_ n d \mu_t \right) \right). \end{equation*} \end{teo}
\begin{proof} Since the family ${\mathcal F}$ is almost-additive and is a Bowen sequence, it is a consequence of Theorem \ref{main2} that there exists a unique Gibbs state $\mu_t$ corresponding to $t {\mathcal F}$ for every $t \geq 1$ which is also an equilibrium measure. Since the space ${\mathcal M}$ is compact there exists a weak star accumulation point $\mu$ for the sequence $\{\mu_t\}_{t\geq 1}$. The result now follows from Theorem \ref{main4} or Corollary \ref{coro:compact}. \end{proof}
\begin{coro} \label{con} Let $\mathcal{B}=\left\{A_1, A_2 , \dots, A_n \right\}$ be a finite set of
positive matrices. Then the family $\mathcal{F}= \{\log \phi_n \}_{n=1}^{\infty}$ is almost-additive and therefore the sequence of Gibbs measures $\{\mu_t\}_{t\geq 1}$ for $t {\mathcal F}$ has an accumulation point $\mu \in {\mathcal M}$ and \begin{equation*}
\varrho(\mathcal{B})= \exp\left( \lim_{n \to \infty} \frac{1}{n} \int \log \phi_n d \mu \right)= \exp \left( \lim_{t \to \infty} \left( \lim_{n \to \infty} \frac{1}{n} \int \log \phi_n d \mu_t \right) \right). \end{equation*} \end{coro}
\begin{proof} It was proved in \cite[Lemma 2.1]{Fe1} that for the set $\mathcal{B}$ of positive matrices the associated family $\mathcal{F}$ is almost-additive. The result follows from Theorem \ref{teo:joint}. \end{proof}
\begin{rem} \label{rem:pablo} It is well known (see, for example, \cite[Section 2]{fs} or \cite[Section 3.3]{pablo} for precise statements and proofs) that if a finite set of matrices satisfies a \emph{cone condition} then the associated family of potentials $\mathcal{F}= \{\log \phi_n \}_{n=1}^{\infty}$ is almost-additive. The particular case of positive matrices is in itself interesting, but the technical property required to develop the theory is that it satisfies the so called cone condition. Moreover, it was recently shown by Feng and Shmerkin \cite[Section 3]{fs} that the sub-additive pressure for matrix cocycles is continuous on the set of matrices. The main technical idea of the proof is the construction of a subsystem (after iteration) which is almost-additive and that approximates the pressure. This indicates that the almost-additive theory of thermodynamic formalism can be used to approximate the corresponding sub-additive one. \end{rem}
Let us consider now the non-compact case. Let ${\mathcal A}:=\{A_1, A_2, \dots \}$ be a countable set of $d \times d$ real matrices. We can again consider their joint spectral radius. However, the conclusion of Lemma \ref{medida-max} might not be true. Even for the case of one potential $\psi:\Sigma \to {\mathbb R}$, there are examples of non-compact dynamical systems for which \[ \sup\left\{ \int \psi d \mu : \mu \in {\mathcal M} \right\} < \limsup_{n \to \infty} \frac{1}{n} \sup \left\{\sum_{i=0}^{n-1} \psi(\sigma^ix) : x \in \Sigma \right\}. \] See for instance \cite[Example 4]{jmu2}. So we consider a slightly different situation.
\begin{teo} \label{teo:joint} Let $\mathcal{A}=\left\{A_1, A_2 , \dots \right\}$ be a countable set of matrices and $(\Sigma, \sigma)$ a topologically mixing countable Markov shift satisfying the BIP property. Let
\[\phi_n(w)= \|A_{i_n} \cdots A_{i_1} \|. \] If the family $\mathcal{F}= \{\log \phi_n \}_{n=1}^{\infty}$ is almost-additive then for every $t \geq 1$ there exists a unique Gibbs measure $\mu_t$ corresponding to $t{\mathcal F}$ and there exists a weak star accumulation point $\mu$ for $\{ \mu_t \}_{t\geq 1}$. The measure $\mu$ is such that \begin{eqnarray*} \sup \left\{ \lim_{n \to \infty} \frac{1}{n} \int \log \phi_n d \nu : \nu \in {\mathcal M} \right\} = \lim_{n \to \infty} \frac{1}{n} \int \log \phi_n d \mu &=&\\ \lim_{t \to \infty} \lim_{n \to \infty} \frac{1}{n} \int \log \phi_n d \mu_t. \end{eqnarray*} \end{teo} The proof of this result follows from the zero temperature limit theorems obtained in the previous sections.
\begin{proof} Since the family ${\mathcal F}$ is almost-additive and is a Bowen sequence, it is a consequence of Theorem \ref{main2} that there exists a unique Gibbs state $\mu_t$ corresponding to $t {\mathcal F}$ for every $t \geq 1$ such that $P(t {\mathcal F}) < \infty$. Lemma \ref{L1} implies that there exists a weak star accumulation point $\mu$ for the sequence $\{\mu_t\}_{t\geq 1}$. The result now follows from Theorem \ref{main4}. \end{proof}
\begin{coro} Let $\mathcal{A}=\left\{A_1, A_2 , \dots \right\}$ be a sequence of matrices having strictly positive entries and such that there exists a constant $C >0$ with the property that for every $ k \in {\mathbb N}$ the following holds \begin{equation*}
\frac{\min_{i,j}(A_k)_{i,j}}{\max_{i,j}(A_k)_{i,j}} \geq C \end{equation*} then for every sufficiently large $t\in {\mathbb R}$ there exists a Gibbs state $\mu_t$ for $t {\mathcal F}$ and the sequence $\{\mu_t\}_{t\geq 1}$ has an accumulation point $\mu \in {\mathcal M}$. Moreover, \begin{eqnarray*} \sup \left\{ \lim_{n \to \infty} \frac{1}{n} \int \log \phi_ n d \nu : \nu \in {\mathcal M} \right\} = \lim_{n \to \infty} \frac{1}{n} \int \log \phi_ n d \mu &=& \\ \lim_{t \to \infty} \lim_{n \to \infty} \frac{1}{n} \int \log \phi_ n d \mu_t. \end{eqnarray*} \end{coro}
\begin{proof} Under the assumptions of the corollary the family $\mathcal{F}= \{\log \phi_n \}_{n=1}^{\infty}$ is almost-additive on $\Sigma$ (see \cite[Lemma 7.1]{iy}) and therefore the result directly follows from Theorem \ref{teo:joint}. \end{proof}
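For illustration only (this numerical sketch is not part of the proof and the family below is hypothetical), the uniform entry-ratio condition of the corollary can be checked numerically on a truncation of a countable family:
\begin{verbatim}
# Illustrative check of the entry-ratio condition (hypothetical family).
import numpy as np

def entry_ratio(A):
    # min_{i,j} A_ij / max_{i,j} A_ij for a matrix with strictly positive entries
    return A.min() / A.max()

family = [np.array([[1.0 + 1.0 / k, 2.0],
                    [3.0, 4.0 + 1.0 / k]]) for k in range(1, 1000)]

C = min(entry_ratio(A) for A in family)
print("approximate uniform lower bound C:", C)
# a strictly positive bound persisting over the whole family is what the
# corollary requires
\end{verbatim}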
Let us stress that the space of invariant measures is not compact, so the existence of such an invariant measure is non trivial.
\section{Maximising the singular value function} \label{sec:singular} Ever since the pioneering work of Bowen \cite{bow2} the relation between thermodynamic formalism and the dimension theory of dynamical systems has been thoroughly studied and exploited (see for example \cite{ba,b3,pe}). Multifractal analysis is a sub-area of dimension theory where the results obtained out of this relation have been particularly successful. The main goal in multifractal analysis is to study the complexity of level sets of invariant local quantities. Examples of these quantities are Birkhoff averages, Lyapunov exponents, local entropies and pointwise dimension. In general the structure of these level sets is very complicated so tools such as Hausdorff dimension or topological entropy are used to describe them. In dimension two (or higher), where a typical dynamical system is non-conformal, computing the exact value of the Hausdorff dimension of the level sets is an extremely complicated task and at this point there are no techniques available to deal with such a problem.
In this section we show how the results obtained in Section \ref{zero} can be used in the study of multifractal analysis of Lyapunov exponents for certain non-conformal repellers. Indeed, we will combine our results on ergodic optimisation with those of Barreira and Gelfert \cite{bg} (which in turn uses ideas of Falconer \cite{f}) to construct a measure supported on the extreme level sets.
Let $f : {\mathbb R}^2 \mapsto {\mathbb R}^2$ be a $C^1$ map and let $\Lambda \subset {\mathbb R}^2$ be a repeller of $f$. That is, the set $\Lambda$ is compact, $f$-invariant, and the map $f$ is expanding on $\Lambda$, i.e., there exist $c > 0$ and $\beta > 1$ such that
\[ \|d_xf^n(v) \| \geq c\beta^n \|v\|, \] for every $x \in \Lambda$, $n \in {\mathbb N}$ and $v \in T_x {\mathbb R}^2$. We will also assume that there exists an open set $U \subset {\mathbb R}^2$ such that $\Lambda \subset U$ and $\Lambda = \cap_{n \in {\mathbb N}} f^{-n}(U)$ and that $f$ restricted to $\Lambda$ is topologically mixing. A pair $(\Lambda, f)$ satisfying the above assumptions will be called an \emph{expanding repeller}. All the above assumptions are standard and there is a large literature describing the dynamics of expanding maps (see for example \cite{bdv}). It is important to stress that the system $(\Lambda,f)$ can be coded with a finite state transitive Markov shift. For each $x \in {\mathbb R}^2$ and $ v \in T_x{\mathbb R}^2$ we define the \emph{Lyapunov exponent} of $(x,v)$ by \begin{equation*}
\lambda(x,v):=\limsup_{n \to \infty} \frac{1}{n} \log \| d_xf^n v \|. \end{equation*} For each $x \in {\mathbb R}^2$ there exists a positive integer $s(x) \leq 2$, numbers $\lambda_1(x) \geq \lambda_2(x)$, and linear subspaces \[ \{0\} =E_{s(x)+1}(x) \subset E_{s(x)}(x) \subset E_{1}(x)=T_x{\mathbb R}^2,\] such that \[E_i(x)=\left\{v \in T_x{\mathbb R}^2 : \lambda(x,v)=\lambda_i(x) \right\}\] and $ \lambda(x,v)=\lambda_i(x)$ if $v \in E_i(x) \setminus E_{i+1}(x)$.
In order to study multifractal analysis of Lyapunov exponents in this context, Barreira and Gelfert \cite{bg} used a construction originally made by Falconer \cite{f} that we now recall. The singular values $s_1(A), s_2(A)$ of a $2\times 2$ matrix $A$ are the eigenvalues, counted with multiplicities, of the matrix $(A^*A)^{1/2}$, where $A^*$ denotes the transpose of $A$. The singular values can be interpreted as the lengths of the semi-axes of the ellipse which is the image of the unit ball under $A$. The functions $\phi_{i,n}: \Lambda \to {\mathbb R}$ defined by
\[ \phi_{i,n}(x)= \log s_i(d_xf^n) \] are called \emph{singular value functions}. Falconer \cite{f} studied them with the purpose of estimating the Hausdorff dimension of $\Lambda$, and they have become one of the major tools in the dimension theory of non-conformal systems. It follows directly from Oseledets' multiplicative ergodic theorem \cite[Chapter 3]{bp} that for each finite $f$-invariant measure $\mu$ there exists a set $X \subset {\mathbb R}^2$ of full $\mu$ measure such that \begin{equation} \label{eq:lya}
\lim_{n \to \infty} \frac{\phi_{i,n}(x)}{n}= \lim_{n \to \infty} \frac{1}{n} \log s_i(d_xf^n)= \lambda_i(x).
\end{equation} Given $\alpha=(\alpha_1, \alpha_2) \in {\mathbb R}^2$ define the corresponding level set by \[L(\alpha):= \left\{ x \in \Lambda : \lambda_1(x)= \alpha_1 \text{ and } \lambda_2(x)= \alpha_2 \right\}. \]
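For illustration only (this numerical sketch is not part of the argument and the matrix is an arbitrary stand-in for $d_xf^n$), the two singular values can be computed directly from the definition and compared with a standard SVD routine:
\begin{verbatim}
# Illustrative computation of the singular values s_1 >= s_2 of a 2x2 matrix.
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])                    # stands in for d_x f^n

eig = np.linalg.eigvalsh(A.T @ A)             # eigenvalues of A^* A (ascending)
s_from_def = np.sqrt(eig)[::-1]               # square roots, sorted decreasingly

s_from_svd = np.linalg.svd(A, compute_uv=False)  # same values via the SVD

print(s_from_def, s_from_svd)                 # the two computations agree
# the logarithms of these values give phi_{1,1}(x) and phi_{2,1}(x) at this point
\end{verbatim}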
Barreira and Gelfert \cite{bg} described the entropy spectrum of the Lyapunov exponents of $f$, that is they studied the function $\alpha \to h_{top}(f | L(\alpha))$, where $h_{top}$ denotes the topological entropy of the set $L(\alpha)$. Their study exploited the relation established in equation \eqref{eq:lya}, where it is shown that the level sets for the Lyapunov exponent correspond to level sets of the ergodic averages of the sub-additive sequences defined by $S_1=\{\phi_{1,n}\}_{n=1}^{\infty}$ and $S_2= \{\phi_{2,n}\}_{n=1}^{\infty}$.
The following result, which is a direct consequence of the theorems obtained in Section \ref{zero}, allows us to describe the maximal Lyapunov exponent of the map $f$.
\begin{prop} Let $(\Lambda, f)$ be an expanding repeller such that the singular value functions are almost-additive. Then for every $t \geq 1$ there exists a unique Gibbs measure $\mu_t$
corresponding to $tS_1$. Moreover, the sequence $\{\mu_t\}_{t\geq 1}$ has an accumulation point $\mu$ and \begin{equation*} \sup \left\{ \lim_{n \to \infty } \frac{1}{n} \phi_{1,n}(x) : x \in \Lambda \right\} = \lim_{n \to \infty} \frac{1}{n} \int \phi_{1,n}(x) d \mu. \end{equation*}
\end{prop}
In particular, we obtain an invariant measure supported on the set of points for which the Lyapunov exponent is maximised.
Conditions on the dynamical system $f$ so that the sequences $S_1$ and $S_2$ are almost-additive can be found, for example, in \cite[Proposition 4]{bg} where it is proved that
\begin{lema} Let $(\Lambda, f)$ be an expanding repeller. If \begin{enumerate} \item for every $x \in \Lambda$ the derivative $d_xf$ is represented by a positive $2 \times 2$ matrix, or \item $\Lambda$ possesses a dominated splitting (see \cite[p.234]{b3} or \cite{bdv} for a precise definition), \end{enumerate} then the sequences $S_1$ and $S_2$ are almost-additive. \end{lema} Actually, a cone condition of the type discussed in Corollary \ref{con} is enough to obtain almost-additivity. This is also discussed in \cite{bg}. Again, as in Remark \ref{rem:pablo}, it was shown by Feng and Shmerkin \cite{fs} that the singular value functions can be approximated by almost-additive ones.
We have considered maps defined in ${\mathbb R}^2$; similar results can be obtained in any finite dimension. Let us stress that we have only used a compact version of the results of Section \ref{zero}, namely Corollary \ref{coro:compact}, although these results hold true in the countable (non-compact) setting.
\end{document} | arXiv |
\begin{document}
\title{Weyl modules associated to Kac-Moody Lie algebras} \begin{abstract}
Weyl modules were originally defined for affine Lie algebras by Chari and Pressley in \cite{CP}. In this paper we extend the notion of Weyl modules to a Lie algebra ${\mathfrak{g}} \otimes A$, where ${\mathfrak{g}}$ is any Kac-Moody algebra and $A$ is any finitely generated commutative associative algebra with unit over ${\mathbb C}$, and prove a tensor product decomposition theorem generalizing \cite{CP}. \end{abstract}
\section{Introduction} Let ${\mathfrak{g}} $ be a Kac-Moody Lie algebra and let ${\mathfrak{h}}$ be a Cartan subalgebra of ${\mathfrak{g}}$. Set ${\mathfrak{g}}' = [{\mathfrak{g}},{\mathfrak{g}}]$ and ${\mathfrak{h}}' ={\mathfrak{g}}' \cap {\mathfrak{h}}$. Let ${\mathfrak{h}}''$ be a vector subspace of ${\mathfrak{h}}$ such that ${\mathfrak{h}}' \oplus {\mathfrak{h}}''= {\mathfrak{h}}$. Let $A$ be a finitely generated commutative associative algebra with unit over ${\mathbb C}$. Denote $\stackrel{\sim}{{\mathfrak{g}}} \,= {\mathfrak{g}}' \otimes A \oplus {\mathfrak{h}}''$ and let
${\mathfrak{g}} = N^{-} \oplus {\mathfrak{h}} \oplus N^{+}$ be a standard triangular decomposition into positive and negative root subspaces and a Cartan subalgebra.
Let $\stackrel{\sim}{N}^- = N^{-} \otimes A, \, \stackrel{\sim}{N}^+ = {N}^+ \otimes A$ and $\stackrel{\sim}{{\mathfrak{h}}} = {\mathfrak{h}}' \otimes A\oplus {\mathfrak{h}}''$. Consider a linear map $\psi : \stackrel{\sim}{{\mathfrak{h}}} \rightarrow {\mathbb C}$.
In \cite{CP} Chari and Pressley defined the Weyl modules for the loop algebras, which are nothing but the maximal integrable highest weight modules. Feigin and Loktev \cite{FL} generalized the notion of Weyl module by replacing the Laurent polynomial ring with an arbitrary commutative associative algebra with unit, and generalized the tensor decomposition theorem of \cite{CP}.
Chari and Thang \cite{CTH} studied Weyl modules for the double affine Lie algebra. In \cite{CFK}, a functorial approach was used to study Weyl modules associated with the Lie algebra $\fma \otimes A$, where $\fma$ is a finite dimensional simple Lie algebra and $A$ is a commutative algebra with unit over ${\mathbb C}$. Using this approach, they defined in \cite{CFK} a Weyl functor from the category of modules for the commutative associative algebra
to the category of modules for the simple Lie algebra, and studied tensor product properties of this functor. Neher and Savage \cite{ENSA}, using generalized evaluation representations, discussed a more general case by replacing the finite dimensional simple Lie algebra with an infinite dimensional Lie algebra.
Let $\tau = \fma \otimes A_n \oplus \Omega_{A_n}/dA_n$ be a toroidal algebra, where $\fma$ is a finite dimensional simple Lie algebra and $A_n = {\mathbb C}[t_1^{\pm},\cdots, t_n^{\pm}]$ is a Laurent polynomial ring in $n$ commuting variables (see \cite{E}). It is proved in \cite{ESTL} that any irreducible module with finite dimensional weight spaces of $\tau$ is in fact a module for ${\mathfrak{g}} \otimes A_{n-1}$, where ${\mathfrak{g}}$ is the affinization of $\fma$. Thus it is important to study ${\mathfrak{g}} \otimes A_{n-1}$-modules. Rao and Futorny \cite{EV} initiated the study of ${\mathfrak{g}} \otimes A_{n-1}$-modules in their recent work. In our paper we consider ${\mathfrak{g}} \otimes A$-modules, where ${\mathfrak{g}}$ is any Kac-Moody Lie algebra and $A$ is any finitely generated commutative associative algebra with unit over ${\mathbb C}$. Our work is a generalization of the tensor product results in \cite{CFK, FL, CP}. For a cofinite ideal $I$ of $A$ we define a module $M(\psi,I)$, and a Weyl module $W(\psi, I)$ of $\stackrel{\sim}{{\mathfrak{g}}}$ (Section \ref{s2}). The main result of the paper is the tensor product decomposition of $W(\psi, I)$, where $I$ is a finite intersection of
maximal ideals.
The paper is organised as follows. We begin with preliminaries by stating some basic facts about Kac-Moody algebras and Weyl modules. In Section \ref{s1}
we define the modules $M(\psi,I)$ over $\stackrel{\sim}{{\mathfrak{g}}}$ and show that they have finite
dimensional weight spaces and prove
a tensor decomposition theorem for them. Section \ref{s2} is devoted to the tensor decomposition theorem for the Weyl module $W(\psi,I)$ over $\stackrel{\sim}{{\mathfrak{g}}}$.
\section{ PRELIMINARIES} Let $\fma$ be a finite dimensional simple Lie algebra of rank $r$ with a Cartan subalgebra $\fmh$. Let $ {\overset{\circ}{\Delta}}$ denote a root system of $\fma$ with respect to $\fmh$. Let ${\overset{\circ}{\Delta^+}}$ and ${\overset{\circ}{\Delta^-}}$ be the sets of positive and negative roots of $\fma$ respectively. Denote by $\alpha_1, \cdots , \alpha_r$ and $\alpha_1^{\vee},\cdots,\alpha_r^{\vee}$
the sets of simple roots and simple coroots of $\fma$.
Let $\fma = \np \oplus \fmh \oplus \nm$ be a triangular decomposition of $\fma$. Let $e_i$ and $f_i$ be the Chevalley generators of $\fma$. Let $\overset{\circ}{Q} = \oplus \, {\mathbb Z} \, \alpha_i$ and ${\overset{\circ}{P}} = \{\lambda \in \fmh^{\ast} : \lambda (\alpha_{i}^{\vee})\in {\mathbb Z} \}$ be the root and weight lattice of $\fma$ respectively. Set ${\overset{\circ}{P_+}} = \{\lambda \in \fmh^{\ast} : \lambda(\alpha_{i}^{\vee})\geq 0 \}$, the set of dominant integral weights of $\fma$.
Recall that a $\fma$-module $V$ is said to be integrable if it is $\fmh$-diagonalisable and all the Chevalley generators $e_i$ and $f_i$, $1 \leq i \leq r$, act locally nilpotently on $V$. For a commutative associative algebra $A$ with unit, consider the Lie algebra $\fma \otimes A$. We recall the definition of the local Weyl module for $\fma \otimes A$ \cite{FL,CFK}.
\begin{dfn}\rm Let $\psi : \fmh \otimes A \rightarrow {\mathbb C}$ be a linear map such that $\psi \mid_{ \fmh} = \lambda ,$
$I$ a cofinite ideal of $A$. Then $W(\psi , I)$ is called a local Weyl module for $\fma \otimes A$ if there exists a nonzero $v \in W(\psi, I)$ such that\\ $U(\fma \otimes A)v = W(\psi, I), (\np \otimes A)v = 0, (h \otimes 1)v = \lambda(h)v$\\ $\psi\mid_{\fmh \otimes I} = 0,{ (f_i \otimes 1) ^{\lambda(\alpha_{i}^{\vee})+1}}v = 0$, for $i = 1, \cdots, r .$ \end{dfn}
It is shown in \cite{FL} (Proposition 4) that the local Weyl modules exist and can be obtained as quotients of the global Weyl module.
Let $\stackrel{\sim}{{\mathfrak{g}}}\, = \,{\mathfrak{g}}' \otimes A \oplus {\mathfrak{h}}'' $ be the Lie algebra with the following bracket operations: \begin{align*} [X \otimes a, Y\otimes b] &= [X,Y] \otimes ab ,\\ [h, X \otimes a] &= [h,X] \otimes a ,\\ [h,h'] &= 0 , \end{align*} where $X, Y \in {\mathfrak{g}}'$, $h, h' \in {\mathfrak{h}}''$ and $a, b \in A$. Let $\stackrel{\sim}{{\mathfrak{h}}} := {\mathfrak{h}}' \otimes A \oplus {\mathfrak{h}}''$ and $\stackrel{\sim}{{\mathfrak{g}}}\, = \,\stackrel{\sim}{N}^+ \oplus \stackrel{\sim}{{\mathfrak{h}}} \oplus \stackrel{\sim}{N}^-$ be a triangular decomposition of $\stackrel{\sim}{{\mathfrak{g}}}$, where $\stackrel{\sim}{N}^+ = {N}^+ \otimes A$ and $\stackrel{\sim}{N}^- = {N}^- \otimes A$.
Let $\psi : \stackrel{\sim}{{\mathfrak{h}}} \rightarrow {\mathbb C}$ be a linear map. \begin{dfn}\rm
A module $V$ of $\stackrel{\sim}{{\mathfrak{g}}}$ is called a highest weight module (of highest weight $\psi$) if $V$ is generated by a highest weight vector ${v}$ such that
\noindent (1) $\stackrel{\sim}{N}^+ v=0.$\\ (2) $h \ {v} =\psi (h) {v}$ for $h \in \ \stackrel{\sim}{{\mathfrak{h}}}, \psi \in \stackrel{\sim}{{\mathfrak{h}}}^{\ast}$. \end{dfn}
Let ${\mathbb C}$ be the one dimensional representation of $\stackrel{\sim}{N}^+ \oplus \stackrel{\sim}{{\mathfrak{h}}}$ where $\stackrel{\sim}{N}^+ $ acts trivially and $\stackrel{\sim}{{\mathfrak{h}}}$ acts via $h.1 = \psi(h) 1$ for all $h \in \stackrel{\sim}{{\mathfrak{h}}}$. Define the induced module $$\displaystyle{ M(\psi) = U(\stackrel{\sim}{{\mathfrak{g}}}) \displaystyle{\bigotimes_{U(\stackrel{\sim}{N}^+ \oplus \stackrel{\sim}{{\mathfrak{h}}})}} {\mathbb C}} \,.$$ Then $M(\psi)$ is a highest weight module and has a unique irreducible quotient denoted by $V(\psi)$.
\section{The modules $M(\psi, I)$ and its tensor decomposition} \label{s1} Let $\alpha_1, \ldots, \alpha_l$ be a set of simple roots of ${\mathfrak{g}}$ and $\Delta^+$ a set of corresponding positive roots. Let $Q = \displaystyle{\bigoplus_{i= 1}^{l}}{{\mathbb Z} \alpha_i}$ be root lattice of ${\mathfrak{g}}$ and $Q_{+} = \displaystyle{\bigoplus_{i= 1}^{l}}{{\mathbb Z}_{\geq 0} \alpha_i}.$ Let $\lambda \in {\mathfrak{h}}^{\ast} $ be a dominant integral weight of ${\mathfrak{g}}$. Consider $\alpha \in \Delta^+$ and assume $\alpha =\sum n_i \alpha_i$. Define an usual ordering on $\Delta^+$ by $\alpha \leq \beta$ for $\alpha, \beta \in \Delta^+$ if $\beta - \alpha \in Q_{+}$.
Let $I$ be a cofinite ideal of $A$. Let $\{ I_\alpha, \alpha \in \Delta^+\}$ be a sequence of cofinite ideals of $A$ such that $I_{\alpha} \subseteq I$ and \\ (1) $\alpha \leq \beta \Rightarrow I_\beta \subseteq I_\alpha$.\\ (2) $I_\alpha I_\beta \subseteq I_{\alpha+\beta} $ if $\alpha +\beta \in \Delta^+$.
For $\beta \in \Delta^+$ let $X_{-\beta}$ be a root vector corresponding to the root $-\beta$. For a cofinite ideal $I$ of $A$ set $X_{-\beta} I = X_{-\beta} \otimes I$.
Let $\psi :\stackrel{\sim}{{\mathfrak{h}}} \rightarrow {\mathbb C}$ be a linear map such that $\psi\mid_{{\mathfrak{h}}' \otimes I} =0 $, $\psi\mid_{{\mathfrak{h}}}=\lambda \in {\mathfrak{h}}^*$ and $\lambda$ is dominant integral.
\begin{dfn}\rm \label{df1} We will denote by $M(\psi, \{I_\alpha, \alpha \in \Delta^+\})$ the highest weight $\stackrel{\sim}{{\mathfrak{g}}}$-module with highest weight $\psi$ and highest weight vector $v$ such that $(X_{-\beta} I_\beta)v=0$ for all $\beta \in \Delta^+$. \end{dfn}
We will show now the existence of modules $M(\psi, \{I_\alpha, \alpha \in \Delta^+\})$. Let $M(\psi)$ be the Verma module with a highest weight $\psi$ and a highest weight vector $v$. We will prove that the module generated by $X_{-\alpha}I_{\alpha}v$ is a proper submodule of $M(\psi)$ for all $\alpha \in \Delta_{+}$. We use induction on the height of $\alpha$. First recall that there is a cofinite ideal $I$ such that $\psi \mid_{{\mathfrak{h}}^{\prime} \otimes I} = 0$ and by definition $I_{\alpha} \subseteq I$ and $I_{\alpha} \subseteq I_{\beta}$ if $\beta \leq \alpha$.
Let us consider $X_{-\alpha_i} I_{\alpha_{i}}v$ for a simple root $\alpha_i$. We will prove that $X_{-\alpha_i} a v$ generates a proper submodule of $M(\psi)$
where $a \in I_{\alpha_{i}}$.
Indeed we have $X_{\alpha} b X_{-\alpha_i} a v = 0$ for any simple root $\alpha \neq \alpha_i$ and $b \in A$. Let $N_i$ be the ${\mathfrak{g}}$-submodule generated by $X_{-\alpha_i}I_{\alpha_i}v$. Then $X_{\alpha_i}b X_{-\alpha_i} a v = h_{i} ba v = 0$ as $I_{\alpha_{i}} \subseteq I$ and $\psi \mid_{{\mathfrak{h}}^{\prime} \otimes I} = 0$. So $M(\psi)/N_i \neq 0$ and the induction starts.
Let $\beta \in \Delta_+$ and $\mathrm{ht}(\beta) = n$. Let $N $ be the submodule generated by $\sum_{\nu \in \Delta_+}{X_{-\nu}I_{\nu}}v$ where $\mathrm{ht}{\nu} < n$. Then by induction, $N$ is a proper submodule of $M(\psi)$. Now consider $X_{\alpha_{i}}b X_{-\beta} a v$ where $\alpha_i$ is a simple root, $b \in A$ and $a \in I_{\beta}$. But $X_{\alpha_{i}}b X_{-\beta} a v = X_{-\beta + \alpha_i} ba v$. Since $\mathrm{ht}(\beta - \alpha_i) < n$ and $I_{\beta}$ is an ideal of $A$, we have $ba \in I_{\beta} \subseteq I_{\beta - \alpha_i}$. Hence, we see that $X_{-\beta + \alpha_i} ba v \in N$. Therefore,
$ X_{-\beta} a v$ is a highest vector of $M(\psi)/N$, and hence generates a proper submodule.
\begin{lmma} \label{l1}
Let $\gamma_1, \ldots, \gamma_n, \beta \in \Delta^+$. Then $ B= X_{-\beta} I^{n+1}_\beta X_{-\gamma_1} a_1\ldots X_{-\gamma_n} a_n v=0$, for $ a_1, \ldots a_n \in A$ and each $\gamma_i \leq \beta$. \end{lmma}
\begin{proof}
We prove the statement by induction on $n$. For $ n=0$ the lemma follows from the definition of the module. We have $$B=X_{-\gamma_1} a_1 X_{-\beta} I^{n+1}_\beta X_{-\gamma_2} a_2 \ldots X_{-\gamma_n} a_n {v} + [X_{-\beta}, X_{-\gamma_1}] I^{n+1}_\beta X_{-\gamma_2}a_2 \ldots X_{-\gamma_n} a_n {v}\,.$$ The first term is zero by induction on $n$. Repeating the same argument $n$ times for the second term we end up with: \\
$B= [\ldots [X_{-\beta}, X_{-\gamma_{1}}], X_{-\gamma_2}], \ldots, X_{-\gamma_n}]I_\beta^{n+1} {v}$. Assume $$ [\ldots [X_{-\beta}, X_{-\gamma_{1}}], X_{-\gamma_2}], \ldots, X_{-\gamma_n}]\neq 0.$$ Then it is a nonzero multiple of $ X_{-(\beta +\sum\gamma_i)}$ and $ \beta +\sum\gamma_i$ is a root. As each $\gamma_i \leq \beta$ we have $I_\beta \subseteq I_{\gamma_i}$.
Thus $$ I^{n+1}_\beta \subseteq I_\beta I_{\gamma_1} I_{\gamma_2}\ldots I_{\gamma_n} \subseteq I_{\beta +\sum \gamma_i}. $$ Since $$ X_{-(\beta +\sum\gamma_i)} I_{\beta +\sum \gamma_i} v=0, $$ this completes the proof of the lemma. \end{proof}
\begin{ppsn}
$M(\psi, \{I_\alpha, \alpha \in \Delta^+\})$ has finite dimensional weight spaces with respect to ${\mathfrak{h}}$. \end{ppsn} \begin{proof}
Follows from Lemma \ref{l1} . \end{proof}
We now construct a special sequence of cofinite ideals $I_\alpha, \alpha \in \Delta^+$. Let $I$ be any cofinite ideal of $A$. Let $\psi: \stackrel{\sim}{{\mathfrak{h}}} \, \rightarrow {\mathbb C}$ be a linear map such that $\psi \mid_{{\mathfrak{h}}' \otimes I}=0$ and $\psi \mid_{{\mathfrak{h}}} =\lambda$, a dominant integral weight. Let us recall that for $\alpha \in \Delta_{+}$ with $\alpha = \sum_{i = 1}^{l} {m_i \alpha_i}$, we define $N_{\lambda, \alpha} = \sum_{i = 1}^{l}{m_i \lambda(\alpha_{i}^{\vee})}.$
Let $I_\alpha =I^{N_{\lambda, \alpha}}$. Now if
$\alpha\leq\beta$ then $N_{\lambda, \alpha} \leq N_{\lambda, \beta}$
and hence $I_\beta \subseteq I_\alpha$. Suppose $\alpha, \beta \in \Delta^+$ such that $\alpha +\beta \in \Delta^+$. Then clearly $I_\alpha I_\beta =I_{\alpha+\beta}$. For this special sequence of ideals $I_{\alpha}$, define $M(\psi, I):= M(\psi, \{I_\alpha, \alpha \in \Delta^+\})$.
Let $I$ and $J$ be coprime cofinite ideals of $A$. Consider linear maps $\psi_1, \psi_2:\stackrel{\sim}{{\mathfrak{h}}} \mapsto {\mathbb C}$ such that $\psi_1 \mid_{{\mathfrak{h}}'\otimes I} =0, \ \psi_2\mid_{{\mathfrak{h}}'\otimes J} = 0$,
$\psi_1 \mid_{ {\mathfrak{h}}} =\lambda$ and $\psi_2 \mid_{{\mathfrak{h}}}=\mu$. Further assume that $\lambda$ and $\mu$ are dominant integral weights. Let $M(\psi_1, I)$ and $M(\psi_2, J)$ be the corresponding highest weight modules. Now define the following new sequence of cofinite ideals of $A$. Let $K_\alpha =I^{N_{\lambda, \alpha}} \cap J^{N_{\mu, \alpha}} \subseteq I \cap J$.\\ It is easy to check that:\\ (1) If $\alpha\leq \beta, \ \alpha, \beta \in \Delta^+$ then $K_\beta \subseteq K_\alpha$.\\ (2) $K_\alpha \ K_\beta \subseteq K_{\alpha+\beta}$ if $\alpha, \beta, \alpha +\beta \in \Delta^+$.
Let $\psi =\psi_1 +\psi_2$, so that $\psi\mid_{{\mathfrak{h}}' \otimes (I \cap J)} =0, \ \psi\mid_{{\mathfrak{h}}} =\lambda +\mu$. Then we have
\begin{thm} \label{T1} As a $\widetilde{{\mathfrak{g}}}$-module $$ M(\psi_1 +\psi_2, \{K_\alpha, \alpha \in \Delta^+\}) \cong M(\psi_1, I) \otimes M(\psi_2, J) . $$ \end{thm}
The following is standard but we include the proof for the convenience of the reader. \begin{lmma} \label{l2} Let $I$ and $J$ be coprime cofinite ideals of $A$. Then\\ a) $A=I^n +J^m$, for all $n,m \in {\mathbb Z}_{\geq 1}$.\\ b) $A/(I^n \cap J^m) \cong$ $A/I^{n} \oplus A/J^{m} .$
\end{lmma} \begin{proof}
$a)$ As $I$ and $J$ are coprime there exist $f \in I$ and $g \in J$ such that $f + g = 1$. Expanding the expression $(f+g)^{m+n+1} = 1$, we see that the left hand side is the sum of an element of $I^{n}$ and an element of $J^{m}$. $b)$ is clear from $a)$ and the Chinese remainder theorem. \end{proof}
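As a simple illustration of the lemma (this example is not needed for the proof), one may take $A={\mathbb C}[t]$, $I=(t)$ and $J=(t-1)$. Then $1=t-(t-1)$ shows that $I$ and $J$ are coprime, $t^n$ and $(t-1)^m$ have no common zero, so $I^n+J^m=A$ for all $n,m \geq 1$, and $$A/(I^n \cap J^m)={\mathbb C}[t]/(t^n(t-1)^m) \cong {\mathbb C}[t]/(t^n) \oplus {\mathbb C}[t]/((t-1)^m),$$ which has dimension $n+m$, in agreement with the relation $k_\alpha=m_\alpha+n_\alpha$ recorded below.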
Assume $$ \begin{array}{lll} \dim A/I^{N_{\lambda,\alpha}} &=& m_\alpha\\ \dim A/J^{N_{\mu,\alpha}} &=&n_\alpha\\ \dim A/I^{N_{\lambda,\alpha}} \cap J^{N_{\mu,\alpha}} &=&k_\alpha \end{array} $$ then $$ m_\alpha +n_\alpha =k_\alpha $$
by the above lemma.
\noindent {\bf Proof of Theorem \ref{T1}.}
Let $a_{1,\alpha}, \ldots, a_{m_\alpha,\alpha}$ be a ${\mathbb C}$-basis of $A/I^{N_{\lambda,\alpha}}$. Let $a^1_\alpha, a^2_{\alpha}, \ldots,$ be a ${\mathbb C}$-basis of $I^{N_{\lambda,\alpha}}$. Then clearly $\stackrel{\sim}{N}^{-}$ has the following ${\mathbb C}$-basis: $$ \begin{array}{l} \{X_{-\alpha} a_{i,\alpha}, 1 \leq i \leq m_\alpha, \alpha \in \Delta^+\} \cup\\ \{ X_{-\alpha} a^i_\alpha, i \in {\mathbb N}, \ \alpha \in \Delta^+\}. \end{array} $$ Let $U_\lambda, \ U^\lambda$ be the subspaces of $U(\stackrel{\sim}{N}^-)$ spanned by the ordered products of the first set and the second set respectively. Then by the PBW theorem we have $ U(\stackrel{\sim}{N}^-) = U_\lambda U^\lambda. $
Let $$ \begin{array}{lll} M &= &M(\psi_1 +\psi_2, \ \{K_\alpha, \alpha \in \Delta^+\}),\\ M_1 &=& M(\psi_1, I),\\ M_2&=& M(\psi_2, J).\\ \end{array} $$ It is easy to see that $M_1=U_\lambda U^\lambda v =U_\lambda v $ as $U^\lambda v = {\mathbb C} \,v$. Since $M_1$ has finite dimensional weight subspaces we can define the character of $M_1$ as follows: $$ \mathrm{Ch}\,M_1 =\sum_{\eta\in Q^+} \dim M_{1, \ \lambda-\eta}\, e^{-(\lambda-\eta)}. $$
Let $l_\alpha$ denote the multiplicity of the root $\alpha$.\\
It is standard that \\ $\dim M_{1,\lambda-\eta} =K^1_\eta$, where $K^1_\eta$ is given by $$ \prod_{\alpha \in \Delta^+}(1-e^{-\alpha})^{-m_\alpha l_\alpha} =\sum_{\eta \in Q^+} K^1_\eta e^{-\eta}. $$ Also we have that $\dim M_{2,\mu-\eta} =K^2_\eta$, where $$ \prod_{\alpha\in \Delta^+} (1-e^{-\alpha})^{-n_\alpha l_\alpha} =\sum_{\eta\in Q^+} K^2_\eta e^{-\eta}, $$ and $\dim M_{\lambda +\mu-\eta} =K_\eta$, where $K_\eta$ is given by $$ \prod_{\alpha \in \Delta^+}(1-e^{-\alpha})^{-k_\alpha \ l_\alpha} =\sum_{\eta\in Q^+} K_\eta e^{-\eta} $$ (recall that $k_\alpha =m_\alpha +n_\alpha$).
From the above calculations we see that $$ \dim \ M_{\lambda+\mu-\eta} =\dim(M_1\otimes M_2)_{\lambda +\mu-\eta}. $$ Thus to prove the theorem it is sufficient to show that there is a surjective $\widetilde{{\mathfrak{g}}}$-homomorphism from $M$ to $M_1 \otimes M_2$.
Let $v_1$ and $v_2$ be the highest weight vectors of $M_1$ and $M_2$ respectively. Let $U$ be the $\widetilde{{\mathfrak{g}}}$-submodule of $M_1 \otimes M_2$ generated by $v_1\otimes v_2$. It is easy to check that $(\psi_1 +\psi_2) ({\mathfrak{h}}' \otimes (I \cap J))=0$. Recall that $K_\alpha =I^{N_{\lambda,\alpha}} \cap J^{N_{\mu,\alpha}}$. We have $X_{-\alpha} K_\alpha (v_1\otimes v_2)=0$, which immediately implies that $U$ is a quotient of $M$. Hence to complete the proof of the theorem it is sufficient to prove that $ U \simeq M_1 \otimes M_2$.
Clearly, $M_1 \otimes M_2$ is the linear span of vectors of the form $w_1 \otimes w_2$ where $$ \begin{array}{l} w_1 = X_{-\lambda_1} a_1\ldots X_{-\lambda_n} a_n v_1,\\ w_2 =X_{-\beta_1} b_1\ldots X_{-\beta_m}b_m v_2 \,. \end{array} $$ Let $\beta\in \Delta^+$. By the definition of $I_{\alpha}$ and the argument given in the proof of Lemma \ref{l1} it is easy to see that there exists $N \gg 0$ such that $$ X_{-\beta} I^N w_1=0, \ X_{-\beta} J^N w_2 =0 .\leqno{(a)} $$ Now recall that $A=I^N +J^N$. Let $1= f+g, \ f\in I^N, \ g\in J^N$. For any $h \in A$ write $ h= fh +gh$. We will use induction on $m+n$. Consider $$ X_{-\beta} fh (w_1 \otimes w_2) =w_1 \otimes X_{-\beta} fh \ w_2 \ (by \ (a)) $$ $$ \begin{array}{l} =w_1 \otimes X_{-\beta} (h-gh)w_2\\ =w_1 \otimes X_{-\beta} h \ w_2 \ (by \ (a)). \end{array} $$ As $X_{-\beta} \ fh(w_1\otimes w_2) \in U$ (by induction $w_1 \otimes w_2 \in U$), we conclude that $w_1\otimes X_{-\beta} h \ w_2 \in U$. Similarly we have $X_{-\beta} hw_1\otimes \ w_2 \in U$. It easily follows now that
$U\simeq M_1\otimes M_2$. This completes the proof of the theorem.
\section{Weyl modules for loop Kac-Moody algebras and its tensor decomposition}\label{s2}
In this section we define maximal integrable highest weight modules for $\widetilde{{\mathfrak{g}}}$ and prove a tensor product theorem for them.
Recall that $M(\psi, I)$ is a highest weight module with a highest weight $\psi$ and a highest weight vector $v$. Further $\psi\mid_{{\mathfrak{h}}' \otimes I}=0$ and $\psi\mid_{{\mathfrak{h}}} =\lambda$ a dominant integral weight. Let $\alpha_1,\ldots, \alpha_l$ be simple roots and $\alpha^{\vee}_1, \ldots \alpha^{\vee}_l$ be the simple coroots. Let $\{ X_{\alpha_i}, \alpha^{\vee}_{i}, X_{-\alpha_i}\}$ be an $\mathfrak{sl}_2$ copy corresponding to the simple root $\alpha_i$.
\begin{dfn}\rm \label{dw} Let $W$ be a highest weight $\widetilde{{\mathfrak{g}}}$-module with a highest weight $\psi$ and a highest weight vector $v$ such that \\ (1) $\psi\mid_{{\mathfrak{h}}' \otimes I}=0$,\\ (2) $\psi\mid_{{\mathfrak{h}}} =\lambda$, \\ (3) $X_{-\alpha_i}^{\lambda(\alpha_i^{\vee})+1} v=0$ for $i=1,2, \ldots, l$. \end{dfn}
It follows immediately that $W$ is an integrable ${\mathfrak{g}}$-module (see \cite{K}). We will prove below that such module exists and has finite dimensional weight spaces.
By the result of \cite{FL} (see the proof of Proposition 6 and 16 of \cite{FL}) it follows that $$ X_{-\alpha_i} I^{\lambda(\alpha_i^{\vee})} \, v=0. $$ Let $\alpha \in \Delta^+$ and $X_{-\alpha} =[X_{-\alpha_{i_1}}, [\ldots [X_{-\alpha_{i_{n-1}}}, X_{-\alpha_{i_n}}]]],$ where $\sum \alpha_{i_j} =\alpha$. It is easy to check that $X_{-\alpha} I^{N_{\lambda,\alpha}} v=0$. Recall that $N_{\lambda, \alpha} =\sum n_i \lambda (\alpha_i^{\vee})$ if $\alpha =\sum n_i \alpha_i$.
Thus $W$ is an integrable quotient of $M(\psi, I)$. Denote by $W(\psi, I)$ the maximal such quotient of $M(\psi, I)$ in the sense that any integrable quotient of $M(\psi, I)$ is a quotient of $W(\psi, I)$. In particular, $W(\psi, I)$ has finite dimensional weight spaces. We will prove at the end that for a cofinite ideal $I$ which is a finite intersection of maximal ideals, $W(\psi, I)$ is non-zero by explicitly constructing its irreducible quotient. We will call $W(\psi, I)$ the \emph{Weyl module} associated with $\psi$ and $I$. Let $I$ and $J$ be coprime cofinite ideals of $A$.
\begin{thm} \label{tw}
$W(\psi_1 +\psi_2, \ I \cap J) \cong W(\psi_1, I) \otimes W(\psi_2, J)$ as $\widetilde{{\mathfrak{g}}}$-modules. \end{thm}
To prove above theorem we need the following lemma.
\begin{lmma} $W(\psi_1, I) \otimes W(\psi_2, J)$ is a quotient of $W(\psi_1 +\psi_2, \ I \cap J)$. \end{lmma} \begin{proof}
Let $v_1, v_2$ be highest weight vectors of $W(\psi_1, I)$ and $W(\psi_2, J)$ respectively. As in the earlier argument we can prove that $W(\psi_1, I) \otimes W(\psi_2, J)$ is a cyclic module generated by $v_1 \otimes v_2$. Recall that $W(\psi_1 +\psi_2, \ I \cap J)$ is the maximal integrable quotient of $M(\psi_1 + \psi_2, I \cap J)$. But $W(\psi_1, I) \otimes W(\psi_2, J)$ is an integrable quotient of $M(\psi_1+ \psi_2, I \cap J)$. Hence $W(\psi_1, I) \otimes W( \psi_2, J)$ is a quotient of $W(\psi_1 + \psi_2, I \cap J)$. \end{proof}
\emph{Proof of the Theorem \ref{tw}}: In view of the above lemma,
it is sufficient to prove that $W(\psi_1 + \psi_2, I \cap J)$ is a quotient of $W(\psi_1, I) \otimes W(\psi_2, J)$. Let $K_i$ be the kernel of the map $M(\psi_i, I) \to W(\psi_i, I)$, $i=1, 2$. Then it is a standard fact that $\tilde{K} = K_1 \otimes M(\psi_2, J) + M(\psi_1, I) \otimes K_2$ is the kernel of the map $$M(\psi_1, I) \otimes M (\psi_2, J) \to W(\psi_1, I) \otimes W(\psi_2, J).$$ Let $V$ be any integrable quotient of $M(\psi_1, I) \otimes M(\psi_2, J)$ and $K$ be the kernel of the map $M(\psi_1, I) \otimes M(\psi_2, J) \to V$.\\ {\bf{Claim}} : $\tilde{K} \ \subseteq K$. This claim proves that $V$ is a quotient of $W( \psi_1, I) \otimes W(\psi_2, J)$. In particular, $W(\psi_1 + \psi_2, I \cap J)$ is a quotient of $ W(\psi_1, I) \otimes W(\psi_2, J)$ which completes the proof of the theorem.\\ {\bf{Proof of the claim}} : Since $V$ is ${\mathfrak{g}}$-integrable, it follows that the set of weights of $V$ is $W$- invariant and it is contained in $\lambda + \mu - Q^+$. Here $Q^+$ is a monoid generated by
simple roots, $W$ is the Weyl group corresponding to ${\mathfrak{g}}$. Let $m = (\lambda + \mu )(\alpha_i^{\vee}) + 1$. Then $(X_{-\alpha_i} f)^m (v_1 \otimes v_2) = 0$ in $V$, $\forall \, f \in A$. Indeed, if this element is not zero then
$\lambda + \mu - m\alpha_i$ is a weight of $V$ implying that $\lambda + \mu + \alpha_i$ is also a weight by the $W$-invariance property of weights. But this is a contradiction.
Let now $N = \mathrm{max}\{(\lambda + \mu)(\alpha_i^{\vee})+1 : 1 \leq i \leq l\}$. As $I^{N} + J^{N} = A$ by Lemma \ref{l2}, choose $f \in I^{N}$ and $g \in J^{N} $ such that $f + g = 1$.
Consider \begin{align*}
B &= (X_{- \alpha_i} f)^N (v_1 \otimes v_2)\ \\ & ={\displaystyle{\sum_{k_1 + k_2 = N}}} C_i(X_{-\alpha_i} f)^{k_1} v_1 \otimes (X_{-\alpha_i}f)^{k_2} v_2\in K, \end{align*} with some constants $C_i$. Since $(X_{-\alpha_i} f) v_1 = 0$, it follows that $B = v_1 \otimes (X_{-\alpha_i} f)^N v_2 \in K$ and $B = v_1 \otimes (X_{-\alpha_i} (1 - g))^N v_2 \in K.$ But $(X_{-\alpha_i} g) v_2 = 0$. Hence $v_1 \otimes X^N_{-\alpha_i} v_2 \in K \,.$ Let $n_0$ be the least positive integer such that $v_1 \otimes X_{-\alpha_i}^{n_0} v_2 \in K$. But then $$X_{\alpha_i}(v_1 \otimes X_{-\alpha_i}^{n_0} v_2 ) = v_1 \otimes X_{\alpha_i}X_{-\alpha_i}^{n_0} v_2 = (n_0(\gamma_i - n_0 + 1)) v_1 \otimes X_{-\alpha_i}^{n_0-1} v_2 \in K,$$ where $\gamma_i = \mu (\alpha_i^{\vee})$. By the minimality of $n_0$ it follows that $\gamma_i + 1= n_0$. Thus we have proved that $v_1 \otimes X_{-\alpha_i}^{\mu(\alpha_i^{\vee})+1} v_2 \in K , \,\,\forall \,\,i \,.$ Similarly we can prove that $X_{-\alpha_i}^{\lambda(\alpha_i^{\vee})+1} v_1 \otimes v_2 \in K.$ Now by earlier argument we can conclude
that $\tilde{K} \subseteq K$. This completes the proof of the claim.
\begin{crlre} \label{max}
Let $I$ be a cofinite ideal of $A$ such that $I = {\displaystyle{\bigcap_{i=1}^k}} {\mathfrak{m}}_i$, where the ${\mathfrak{m}}_i$'s for $1 \leq i \leq k$ are distinct maximal ideals of $A$. Let $\psi_1, \cdots, \psi_k$ be linear maps from $\widetilde{{\mathfrak{h}}} \to {\mathbb C}$ such that $\psi_i\mid_{{\mathfrak{h}}' \otimes {\mathfrak{m}}_i} = 0$ and $\psi_i\mid_{{\mathfrak{h}}} = \lambda_i$, a dominant integral weight. Put $\psi = {\displaystyle{\sum_{i=1}^{k}}}\psi_i$ and $\lambda = \sum \lambda_i$. Then $$W(\psi, I) \cong {\displaystyle{\bigotimes_{i=1}^k}}W(\psi_i, {\mathfrak{m}}_i).$$ \end{crlre}
{\bf{Remark}}: See \cite{CFK} for a similar tensor decomposition theorem for
the local Weyl module for $\fma \otimes A$, where $A$ is any commutative associative algebra with unit.
{\bf{Remark}}: The module $W(\psi, I)$ is $\widetilde{{\mathfrak{g}}}$-integrable. In fact, $(X_{-\alpha_i} \otimes f)^{\lambda(\alpha_i^{\vee})+1} v = 0$ for $\alpha_i$ simple and $f \in A$. Indeed, suppose
it is non-zero. Then $\lambda - (\lambda (\alpha_i^{\vee}) + 1)\alpha_i$ is a weight of $W(\psi, I)$. Since the weights are $W$-invariant it follows that $\lambda +\alpha_i$ is a weight, which is impossible.
We now construct irreducible quotients of Weyl modules, hence proving their existence. Let $I$ be a cofinite ideal of $A$ such that $\displaystyle{I = \bigcap_{i = 1}^{p}{{\mathfrak{m}}_{i}}}$, where the ${\mathfrak{m}}_i$ are distinct maximal ideals of $A$. Now as $A$ is finitely generated over ${\mathbb C}$, $A/{\mathfrak{m}}_i \cong {\mathbb C}$ for $1 \leq i \leq p$. So by the Chinese remainder theorem, there is a surjective homomorphism from $A$ to $\displaystyle{\bigoplus_{i=1}^{p}{A/{\mathfrak{m}}_i}}$. Hence we have a surjective homomorphism $\Phi : {\mathfrak{g}}' \otimes A \rightarrow \displaystyle{\bigoplus_{i = 1}^{p}{{\mathfrak{g}}' \otimes A/{\mathfrak{m}}_i}} \cong {\mathfrak{g}}'_p = {\mathfrak{g}}' \oplus \cdots \oplus {\mathfrak{g}}'$ ($p$ times) given by $\Phi(x \otimes a) = (a_1 x,a_2 x, \cdots,a_p x)$, where $(a_1,a_2, \cdots, a_p) \in {\mathbb C}^{p}$ is the image of $a$ under the map $A \rightarrow \bigoplus_{i=1}^{p}{A/{\mathfrak{m}}_i} \rightarrow {\mathbb C}^{p}$. Now for $1 \leq i \leq p$, let $\psi_i$ be a linear map from $\widetilde{{\mathfrak{h}}}$ to ${\mathbb C}$ such that $\psi_i \mid_{{\mathfrak{h}}} = \lambda_i$ where $\lambda_i \in P^{+}$. Then as ${\displaystyle{\bigotimes_{i=1}^p}} V(\lambda_i)$ is an irreducible integrable module for ${\mathfrak{g}}_p'$, it is also an irreducible ${\mathfrak{g}}' \otimes A$-module via $\Phi$, and hence a module for $\stackrel{\sim}{{\mathfrak{g}}}$ (where ${\mathfrak{h}}''$ acts on the tensor product via $\psi$); the vector $v_{\lambda_1}\otimes \cdots \otimes v_{\lambda_p}$ and $\psi =
\psi_1 + \cdots + \psi_p$ satisfy the conditions of Definition \ref{dw} with $\displaystyle{I = \bigcap_{i = 1}^{p}{{\mathfrak{m}}_{i}}}$.
{\bf {Open Problem}} : Compute the character of $W(\psi, I)$ which is $W$-invariant.
By Corollary \ref{max} it is sufficient to compute the character of $W(\psi_i, {\mathfrak{m}}_i$), where ${\mathfrak{m}}_i$ is a maximal ideal.
\nocite{*}
School of mathematics, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400005, India.\\ email: [email protected]
Instituto de Mathem\'atica e Estat\'\i stica, Universidade de S\~ao Paulo, S\~ao Paulo, Brasil.\\ email: [email protected]
School of mathematics, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400005, India. email: [email protected]
\end{document} | arXiv |
Spatiotemporal multi-disease transmission dynamic measure for emerging diseases: an application to dengue and zika integrated surveillance in Thailand
Chawarat Rotejanaprasert (ORCID: orcid.org/0000-0003-2623-0077)1,2,
Andrew B. Lawson3 &
Sopon Iamsirithaworn4
New emerging diseases are public health concerns in which policy makers have to make decisions in the presence of enormous uncertainty. This is an important challenge in terms of emergency preparation, requiring the operation of effective surveillance systems. A key concept for investigating the dynamics of infectious diseases is the basic reproduction number. However, it is difficult to apply in real situations due to its underlying theoretical assumptions.
In this paper we propose a robust and flexible methodology for estimating disease strength varying in space and time using an alternative measure of disease transmission within the hierarchical modeling framework. The proposed measure is also extended to incorporate knowledge from related diseases to enhance the performance of the surveillance system.
A simulation was conducted to examine robustness of the proposed methodology and the simulation results demonstrate that the proposed method allows robust estimation of the disease strength across simulation scenarios. A real data example is provided of an integrative application of Dengue and Zika surveillance in Thailand. The real data example also shows that combining both diseases in an integrated analysis essentially decreases variability of model fitting.
The proposed methodology is robust in several simulated scenarios of spatiotemporal transmission force with computing flexibility and practical benefits. This development has potential for broad applicability as an alternative tool for integrated surveillance of emerging diseases such as Zika.
The nature of infectious diseases has been changing rapidly in conjunction with dramatic societal and environmental changes. This is a substantial challenge in terms of emergency preparedness requiring the implementation of a wide range of surveillance policies. The recent emergence of Zika outbreaks associated with birth defects prompted the World Health Organization (WHO) to declare a public health emergency of international concern in February 2016 [1]. After that, there has been an explosion in research and planning as the global health community has turned its attention to understanding and controlling Zika virus. However, important information needed to assess the global health threat from the virus is still lacking [2]. The behavior of an infectious disease is often difficult, and sometimes not feasible, to evaluate by conducting experiments with real communities. As a result, mathematical models explaining the transmission of infectious diseases are a valuable tool for planning disease-management policies.
An important question when a new emerging disease occurs is what the disease transmission mechanism is and how infectious the disease is. A key concept in epidemiology to indicate the scale and speed of spread in a susceptible population is the transmissibility of the infection, characterized by the basic reproduction number, R0. This quantity also characterizes longer-term endemicity in a given population (R0 < 1 stops an epidemic) [3]. An extensive range of estimation methods has been proposed (see examples [4, 5]). Although the basic reproduction number can be useful for understanding the transmissibility of an infectious disease, methods based on fitting deterministic transmission models are often difficult to use and generalize in practice due to context-specific assumptions which often do not hold [6, 7].
It is of practical importance to consider a computationally feasible and robust methodology to evaluate the force of infection. It has been proposed that the time course of an epidemic can be partly characterized by estimating the effective (instantaneous) reproduction number [8, 9]. However, since contact rates among people may differ due to differences in the local environment, human behavior, vector abundance, and, potentially, interactions with other viruses, disease transmissibility will vary across locations as well. Although spatial heterogeneity has been considered (see examples [10, 11]), the reproduction numbers are estimated separately for single areas without accounting for spatial variation and overdispersion in modeling. Due to the very limited information available when a new emerging infection initially occurs, it is natural to look for strategies that mirror relevant information. Surveillance systems have been operated singly for various types of infectious diseases; however, with multiple streams of geo-coded disease information available, it is important to be able to take advantage of the multivariate health data. The benefits of multivariate surveillance lie in the ability to observe concurrency of patterns of disease and to allow conditioning of one disease on others. To assist public health practitioners in assessing disease transmissibility in field settings, we thus develop the proposed methodology to allow for incorporating spatiotemporal knowledge from related diseases to enhance the performance of the surveillance system, which was not considered in previous studies.
The aim of this study is to develop a generic and robust methodology for estimating spatiotemporally varying transmissibility that can be instantaneously computed for each location and time within a user-friendly environment for real-time surveillance. Not only does our method have a practical interpretation with a theoretical foundation, but it can also be understood and applied by non-technical users. The proposed method is defined in the next section, with a simulation study to demonstrate the robustness of the methodology. A case study is also provided of an application to integrative surveillance of Dengue and Zika virus activities in Thailand.
Spatiotemporal measure of disease transmission
The basic reproduction number is one of the principal concepts widely used as an epidemiological measure of the transmission potential; it is theoretically defined as the number of secondary infections produced by a single infectious individual in a susceptible population [3, 12]. However, notwithstanding the issues with underlying theoretical assumptions such as population susceptibility and the dynamic nature of infectious diseases, the basic reproduction number is difficult to apply in real situations. For instance, few epidemics are observed from the moment a new infection enters a fully susceptible population; the disease may also have persisted but gone undetected for a period of time. This situation violates a primary assumption about the 'at risk' population and commonly arises for new emerging diseases such as Zika. Another example is that the nature of infectious diseases is dynamic and should be consistently monitored, whereas the basic reproduction number is defined over an infinite horizon and thus fails to capture the dynamic behavior of infections. Therefore, we need to be cautious about the underlying model assumptions when applying the concept. Otherwise, it could lead to inappropriate disease-management policies or even uncontrollable outbreaks (see more discussion in [6, 7, 13]). In this paper we propose an alternative measure of spatiotemporally varying disease transmission, which we will call the surveillance reproduction number. Not affected by context-specific restrictions, this measure provides a practical interpretation that can be flexibly applied to many applications in infectious disease epidemiology. Moreover, this measure, which accounts for spatiotemporal heterogeneity, can robustly estimate the disease strength simultaneously for all areal units and time periods, and is ideally suited for emerging disease surveillance. To derive the proposed methodology, compartment modeling is reviewed as the foundation of our development.
There are various forms of compartmental models for infectious diseases (see examples in [5, 14]). One of the early modeling contributions is the Kermack-McKendrick model [15], a compartmental model with formulation of flow rates between disease stages of a population. A special case of the model is the well-known SIR (susceptible-infectious-recovered) model. A SIR model is usually used to describe a situation where a disease confers immunity against re-infection, to indicate that the passage of individuals is from the susceptible class S to the infective class I and to the removed class R. A common SIR model used to describe the disease at location i and time t can be specified as follows:
$$ \begin{array}{l} \frac{dS_{it}}{dt}=-a_{it}S_{it}\\ \left(\frac{\partial }{\partial t}+\frac{\partial }{\partial l}\right)I_{it}(l)=a_{it}S_{it}-b_{it}I_{it}(l)\\ \frac{dM_{it}}{dt}=\int_0^{\infty }b_{it}I_{it}(l)\, dl \end{array} $$
where \( I_{it}(0) = a_{it}S_{it} \). Denote the numbers of susceptible and recovered (removed) individuals by \( S_{it} \) and \( M_{it} \). Note that we use M for the removed class to avoid notational confusion with the surveillance reproduction number that will be constructed later. \( I_{it}(l) \) is the number of infected individuals at time t with infectious period l, where l is the time elapsed since infection, that is, the length of time the person has been infectious. \( b_{it}(l) \) is the recovery rate during l. \( a_{it} \) is known as the disease transmissibility at time t and is defined as \( {a}_{it}={\int}_0^{\infty }{c}_{it}(l){I}_{it}(l) dl \), where \( c_{it}(l) \) is the rate of secondary transmission per single infectious case. Although infectious disease modeling is usually described in a preferential sampling setting in which locations are spatially modeled, one should be aware of possible bias due to the selective sampling scheme [16, 17]. Alternatively, our methodology is developed in a conditional framework by instead conditioning the aggregated count on a fixed areal unit such as a county or health district.
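As a rough illustration only (this sketch is not part of the published analysis; the parameter values are hypothetical and the infection-age structure is collapsed to a single compartment), the SIR flow above can be integrated numerically for one location with a simple Euler scheme written in Python.

def sir_euler(S0, I0, M0, c, b, dt=0.1, steps=2000):
    """Euler integration of a single-location SIR flow; the force of infection
    a = c * I mimics a_it = integral of c_it(l) I_it(l) dl with a constant
    transmission rate c (an illustrative simplification)."""
    S, I, M = float(S0), float(I0), float(M0)
    path = [(S, I, M)]
    for _ in range(steps):
        a = c * I                      # force of infection at this time step
        new_inf = a * S * dt           # susceptibles becoming infectious
        new_rec = b * I * dt           # infectious individuals recovering
        S, I, M = S - new_inf, I + new_inf - new_rec, M + new_rec
        path.append((S, I, M))
    return path

# hypothetical values chosen only for illustration
trajectory = sir_euler(S0=9990.0, I0=10.0, M0=0.0, c=0.00005, b=0.2)
print(trajectory[-1])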
The disease dynamic is assumed to follow a Poisson process such that the incidence rate at which someone infected in time t − 1 generates new infections in time step t at location i is \( \mu_{it} \). The relationship between the incidence rate \( \mu_{it} \) and the prevalence \( I_{it} \) is assumed to be \( I_{it}(l) = h_{it}(l)\mu_{i,t-l} \) for t − l > 0, where \( h_{it}(l) > 0 \), l > 0, is a proportionality constant, and \( \mu_{it} = I_{it}(0) \). The incidence rate, i.e. the number of susceptible individuals who get infected, at location i and time t equals \( a_{it}S_{it} \), i.e. \( \mu_{it} = a_{it}S_{it} \). The transmissibility \( {a}_{it}={\int}_0^{\infty }{c}_{it}(l){I}_{it}(l) dl \) can be seen as the force of infection, or the rate at which susceptible people get infected. For example, this quantity increases if a person has a respiratory disease and does not practice good hygiene during the course of infection, or decreases if that person rests in bed. Then we have that
$$ {\mu}_{it}={a}_{it}{S}_{it}={\int}_0^{\infty }{c}_{it}(l){S}_{it}(l){h}_{it}(l){\mu}_{it-l} dl. $$
Let \( \zeta_{it}(l) = c_{it}(l)S_{it}(l)h_{it}(l) \). \( \zeta_{it}(l) \) reflects the reproductive power or effective contact rate between infectious and susceptible individuals at calendar time t, location i and infected time l.
To define and develop the surveillance reproduction number, \( R_{s,it} \), we further assume that there exist two sets of functions, \( {\boldsymbol{R}}_{s,it} = \{ R_{s,it} \} \), the set of surveillance reproduction numbers, and \( {\boldsymbol{G}}_{it}=\left\{\;{g}_{it}(l)|{\int}_0^{\infty }{g}_{it}(l)\ dl=1\;\right\} \), the set of distributional functions over the infectious time at each location, such that \( \zeta_{it}(l) \) can be decomposed into a product of those functions, i.e., \( \zeta_{it}(l) = R_{s,it}g_{it}(l) \). There are a number of functions in those sets satisfying the conditions. A non-trivial solution can be defined as
$$ {\int}_0^{\infty }{\zeta}_{it}(l)\; dl={\int}_0^{\infty }{R}_{s, it}{g}_{it}(l)\; dl={R}_{s, it}{\int}_0^{\infty }{g}_{it}(l)\; dl={R}_{s, it}. $$
However, this leads to the same issue as the basic reproduction number: we usually do not know the number of susceptible people for a given location and time, which makes the quantity less useful in field settings for emerging diseases. Hence we define the surveillance reproduction number through \( {\mu}_{it}={\int}_0^{\infty }{\zeta}_{it}(l){\mu}_{it-l} dl \). That is
$$ {\mu}_{it}={\int}_0^{\infty }{R}_{s, it}{g}_{it}(l){\mu}_{it-l} dl={R}_{s, it}{\int}_0^{\infty }{g}_{it}(l){\mu}_{it-l} dl $$
and, therefore, we have that
$$ {R}_{s, it}=\frac{\mu_{it}}{\int_0^{\infty }{g}_{it}(l){\mu}_{it-l} dl}. $$
Since \( {\int}_0^{\infty }{g}_{it}(l)\ dl=1 \), Rs, it can also be interpreted as the ratio of the current incidence rate to the total (weighted sum of) infectiousness of infected individuals. Because patients' information is often collected in a discrete fashion, Rs, it can be estimated as \( {R}_{s, it}\approx \frac{\mu_{it}}{\sum_{l=1}^L{g}_{it}(l){\mu}_{it-l}} \), where L is the maximum period of infection. This quantity thus represents the force of infection as the number of secondary cases that each infected individual would generate, averaged over the infectious lifespan, at location i during time t. However, incidence rates are hard to derive in practice because monitoring of individual new cases and of the truly exposed population during a given time period and location is lacking. We therefore assume that \( {\mu}_{it}={h}_{it}^{\hbox{'}}{I}_{it} \), where \( {h}_{it}^{\hbox{'}}>0 \) is a proportionality constant between prevalence and incidence at calendar time t and location i. Then the surveillance reproduction number can be expressed as
$$ {R}_{s, it}=\frac{h_{it}^{\hbox{'}}{I}_{it}}{\int_0^{\infty }{g}_{it}(l){h}_{it-l}^{\hbox{'}}{I}_{it-l} dl}\approx \frac{I_{it}}{\int_0^{\infty }{g}_{it}(l){I}_{it-l} dl}. $$
Hence, to estimate the surveillance number from prevalence, the ratio of incidence to prevalence is assumed to be nearly constant over time. This is a limitation of our development: the assumption may not be appropriate for long-duration diseases such as chronic conditions, but it is suitable for infections with relatively short duration.
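As a concrete illustration of the discrete approximation above, the sketch below computes the surveillance number from a single location's weekly case series and a normalized infectiousness profile. The series and weights are hypothetical, and this is our own sketch rather than the authors' code.

```python
import numpy as np

def surveillance_R(prevalence, g):
    """R_{s,t} ~= I_t / sum_{l=1..L} g[l] * I_{t-l} for one location.

    prevalence : 1-d sequence of weekly case counts I_t
    g          : infectiousness profile of length L, assumed to sum to 1
    """
    g = np.asarray(g, dtype=float)
    g = g / g.sum()                      # enforce normalization
    L = len(g)
    I = np.asarray(prevalence, dtype=float)
    R = np.full(len(I), np.nan)          # undefined for the first L weeks
    for t in range(L, len(I)):
        denom = sum(g[l - 1] * I[t - l] for l in range(1, L + 1))
        if denom > 0:
            R[t] = I[t] / denom
    return R

# hypothetical weekly counts and a 3-week profile
weekly_cases = [2, 1, 6, 8, 12, 15, 21, 25, 30]
print(surveillance_R(weekly_cases, g=[0.5, 0.3, 0.2]))
```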
The proposed methodology has practical advantages over the traditional basic reproduction number. One is that our method is based on prevalence and is not affected by assumptions about susceptibility, which is often difficult or infeasible to know. Another is that, since our metric is dynamic (it does not depend on an infinite-time definition), it can be computed sequentially, which makes it well suited for monitoring disease strength, ideally for emerging diseases.
To account for spatiotemporal variation and overdispersion, μit is linked to a linear predictor consisting of local variables, such as environmental and demographic factors, and space-time random effects that capture spatiotemporal heterogeneity: log(μit) = α + Xitβit + ui + vi + λt + δit. The correlated (ui) and uncorrelated (vi) spatial components have an intrinsic conditional autoregressive (ICAR) prior distribution and a zero-mean Gaussian distribution, respectively. In addition, there are a separate temporal random effect (λt) and a space-time interaction term (δit) in the linear predictor. The temporal effect is often described using an autoregressive prior distribution, allowing for a type of nonparametric temporal effect; here this prior is a random walk with one-unit lag. For the interaction term, the prior is assumed to be a zero-mean Gaussian distribution. The estimate of the reproduction number also depends on the choice of the infectiousness profile, git(l), which is assumed to be log-normally distributed and standardized to sum to one.
Let yit be the number of new cases at location i and time t, with disease transmission modeled by a Poisson process. Cases are usually reported at discrete times such as weekly or monthly. Assuming the transmissibility remains constant over the time interval (t, t + 1], the incidence at location i and time t is Poisson distributed with mean μit. The full model specification is then as follows:
$$ {\displaystyle \begin{array}{l}{y}_{it}\sim Poisson\left({\mu}_{it}\right)\\ {}\log \left({\mu}_{it}\right)=\alpha +{\boldsymbol{X}}_{it}{\boldsymbol{\beta}}_{it}+{u}_i+{v}_i+{\lambda}_t+{\delta}_{it}\kern1em \\ {}\alpha \sim N\left(0,{\tau}_{\alpha}^{-1}\right);{\beta}_{it}\sim N\left(0,{\tau}_{\beta}^{-1}\right)\\ {}{u}_i\sim ICAR\left({\tau}_u^{-1}\right);{v}_i\sim N\left(0,{\tau}_v^{-1}\right)\\ {}{\lambda}_t\sim N\left({\lambda}_{t-1},{\tau}_{\lambda}^{-1}\right)\\ {}{\delta}_{it}\sim N\left(0,{\tau}_{\delta}^{-1}\right)\end{array}}\kern0.5em {\displaystyle \begin{array}{l}{R}_{s, it}=\frac{\mu_{it}}{\sum_{l=1}^L{g}_{il}{\mu}_{it-l}}\\ {}{g}_{il}=\frac{\exp \left({w}_{il}\right)}{\sum_{l=1}^L\exp \left({w}_{il}\right)}\\ {}{w}_{il}\sim N\left(0,{\tau}_w^{-1}\right)\\ {}{\tau}_{\ast}^{-1/2}\sim Unif\left(0,10\right).\end{array}} $$
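The hierarchical specification above can be read as a recipe: build log μit from its components, turn the weights wil into a profile gil, and derive Rs,it. The sketch below wires these pieces together for given draws of the random effects. It is an illustration under assumed array shapes (and it uses a single coefficient vector β for simplicity, whereas the model allows βit to vary); it is not the WinBUGS code used in the paper.

```python
import numpy as np

def linear_predictor(alpha, X, beta, u, v, lam, delta):
    """log(mu_it) = alpha + X_it beta + u_i + v_i + lambda_t + delta_it.

    Assumed shapes: X (n_loc, n_time, p), beta (p,), u and v (n_loc,),
    lam (n_time,), delta (n_loc, n_time).
    """
    fixed = alpha + np.einsum("itp,p->it", X, beta)
    return fixed + u[:, None] + v[:, None] + lam[None, :] + delta

def infectiousness_profile(w):
    """g_l = exp(w_l) / sum_l exp(w_l): standardized (softmax) profile."""
    e = np.exp(w - w.max())
    return e / e.sum()

def reproduction_number(mu, g):
    """R_{s,it} = mu_it / sum_{l=1..L} g_l mu_{i,t-l}; mu is a float array."""
    n_loc, n_time = mu.shape
    L = len(g)
    R = np.full_like(mu, np.nan)
    for t in range(L, n_time):
        denom = sum(g[l - 1] * mu[:, t - l] for l in range(1, L + 1))
        R[:, t] = mu[:, t] / denom
    return R
```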
To evaluate our proposed methodology, we simulate data without covariates in several situations with different magnitudes of transmissibility. The simulation map used as a basis for our evaluation is the district map of the province of Bangkok, Thailand. This province has 50 districts (i = 1–50) with a reasonably regular spatial distribution. The simulated incidence is generated for 20 weeks (t = 1–20) in four district groups with different levels of the reproduction number. Figure 1 displays maps showing the locations of the simulated Rs for each district group at weeks 5, 10, 15, and 20. The simulated incidence of each district group with different levels of Rs is shown in Fig. 2, in which each dot represents a simulated value from a given simulation set. The first group (middle region in Fig. 1) is simulated with increasing magnitudes of transmission as Rs, it = 0.2 + (t × 0.15), so Rs, it increases by 0.15 each time period. Incidence with an exponential increase is generated in this scenario to represent regions with an outbreak (left panel in Fig. 2). The second district group (western region in Fig. 1) is assumed to have decreasing magnitudes simulated as Rs, it = 4.0 − (t × 0.2). As can be seen in Fig. 2 (second panel from the left), the incidence in this scenario increases at the beginning due to the strongly positive force of infection but decreases afterwards. In the third scenario (eastern region in Fig. 1), Rs, it is assumed to equal 1.5 until week 12 and 0.8 afterwards. This scenario represents a situation where an effective intervention is introduced to control an outbreak. The remaining districts are assumed to have a constant low infection rate of Rs, it = 0.8 over the 20 time periods. The discrete weights wil are drawn from a normal distribution with mean 1.5 and standard deviation one. An infectious time, L, of 3 weeks is used to generate the incidence.
Bangkok maps of simulated Rs during weeks 5, 10, 15, and 20 (left-right)
Simulated incidence of districts in group 1 (increasing Rs), group 2 (decreasing Rs), group 3 (with a jump) and groups 4 (constant Rs = 0.8)
We generate 100 simulated incidence datasets starting with the numbers of newly infected people set to 2, 1, and 6 for the first 3 weeks. For weeks t > 3, the new cases yit are sampled from a Poisson distribution for each location with mean \( {\mu}_{it}={R}_{s, it}{\sum}_{l=1}^3{g}_{il}{\mu}_{it-l} \). That is, \( {\mu}_{i1}=2;{\mu}_{i2}=1;{\mu}_{i3}=6;{\mu}_{it}={R}_{s, it}{\sum}_{l=1}^3{g}_{il}{\mu}_{it-l},t>3 \). The infectious time interval is also evaluated in the simulation study to examine the effect of different window sizes. We investigate the sensitivity of the window choice by assuming L = 2, 3 and 4 weeks, because the infectious period of arthropod-borne diseases such as Dengue and Zika usually lasts a couple of weeks [18]. The results displayed are from posterior sampling carried out in WinBUGS, a user-friendly software package, using MCMC with an initial burn-in period of 100,000 iterations to allow convergence of the MCMC chains.
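The data-generating mechanism of the simulation study can be reproduced in outline as follows. This is a simplified single-replicate re-implementation for illustration, with one series per scenario group; the scenario formulas, seed counts, and weight distribution are taken from the text, while the random seed and printing are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
T, L = 20, 3

def scenario_R(group, t):
    """True R_{s,t} for the four simulated district groups (weeks t = 1..20)."""
    if group == 1:                  # outbreak: increasing transmission
        return 0.2 + t * 0.15
    if group == 2:                  # decreasing transmission
        return 4.0 - t * 0.2
    if group == 3:                  # intervention: drop after week 12
        return 1.5 if t <= 12 else 0.8
    return 0.8                      # constant low transmission

# serial-interval weights: standardized draws from N(1.5, 1), as in the text
w = rng.normal(1.5, 1.0, size=L)
g = np.exp(w) / np.exp(w).sum()

def simulate_incidence(group):
    mu = np.zeros(T + 1)
    y = np.zeros(T + 1, dtype=int)
    mu[1:4] = [2, 1, 6]             # seed weeks 1-3
    y[1:4] = [2, 1, 6]
    for t in range(4, T + 1):
        mu[t] = scenario_R(group, t) * sum(g[l - 1] * mu[t - l]
                                           for l in range(1, L + 1))
        y[t] = rng.poisson(mu[t])
    return y[1:]

for grp in (1, 2, 3, 4):
    print(grp, simulate_incidence(grp))
```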
The simulated and corresponding estimated Rs for each district group with different infectious times are shown in Fig. 3. Our methodology recovers the constant surveillance reproduction number used for simulation in scenario 4. The steady changes in Rs are detected for both increasing (scenario 1) and decreasing (scenario 2) force of infection, and the method also effectively identifies the jump in transmissibility (scenario 3). Figure 4 (top row) displays the mean squared error (MSE) of the estimated surveillance numbers in all scenarios; the correct infectious time, L = 3, yields the most precise estimate (the smallest MSE).
Plots of the posterior estimated Rs of all district groups with infectious periods of 2 (top), 3 (middle), and 4 (bottom) weeks from all simulated datasets. The black lines show the estimated mean, with dashed lines showing the 95% credible interval. The grey lines display posterior realizations and the red lines are the true Rs used for simulation
Bar plots of MSE and MSPRE of the estimated and predicted reproduction number with different infectious times of four district groups
The estimate of the surveillance number also depends on the choice of the time window size L. However, it may not be feasible to know the true infectious time in real situations. We therefore examine a loss function metric that employs the predictive distribution to guide selection of the infectiousness profile. A commonly used loss function is the mean squared predictive error (MSPE) [19], which compares the observed data to the predicted values from a fitted model. However, we are interested in a loss function for the estimation of Rs. We therefore propose another predictive loss function, the mean squared predictive reproduction error (MSPRE), to evaluate the model's predictive adequacy in terms of the reproduction number, defined as
\( {MSPRE}_{it}={\sum}_{n=1}^N{\left({R}_{s, itn}^y-{R}_{s, itn}^{y^{pred}}\right)}^2/N \), where \( {R}_{s, itn}^y=\frac{y_{it}}{\sum_{l=1}^L{g}_{inl}{y}_{it-l}} \), \( {R}_{s, itn}^{y^{pred}}=\frac{y_{itn}^{pred}}{\sum_{l=1}^L{g}_{inl}{y}_{it-l,n}^{pred}} \), N is the number of posterior samples and \( {y}_{itn}^{pred} \) is generated from its posterior predictive distribution at the n-th posterior sample after the burn-in period. Figure 4 (bottom row) presents the MSPRE for the district groups corresponding to different choices of the infectious time window. The infectious time of 3 weeks has both the smallest MSE and the smallest MSPRE. Thus this metric can provide guidance on which time window to consider in practice. The use of MSPRE will also be demonstrated in a case study provided later. It should be noted that the time elapsed since infection, l, could vary by individual. However, we model the aggregated count conditional on spatial units instead of at the individual level, so the infectious time is averaged over an area. Given this sampling framework it is reasonable to assume a constant infectious time for the population. Nonetheless it is also possible that the infectious time has a spatiotemporal distribution over the study area, perhaps dependent on environmental or demographic variables; such covariates should then be included in the model when available.
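A minimal sketch of the MSPRE computation for one location and one time index, assuming posterior predictive draws and posterior draws of the profile are already available as arrays, might read as follows. This is our illustration; the array names and shapes are assumptions, and the denominators are assumed positive.

```python
import numpy as np

def mspre(y, y_pred, g_post, t, L):
    """Mean squared predictive reproduction error at time index t.

    y      : observed series, shape (T,)
    y_pred : posterior predictive draws, shape (N, T)
    g_post : posterior draws of the profile, shape (N, L), rows sum to 1
    """
    N = y_pred.shape[0]
    err = np.empty(N)
    for n in range(N):
        denom_obs = sum(g_post[n, l - 1] * y[t - l] for l in range(1, L + 1))
        denom_pred = sum(g_post[n, l - 1] * y_pred[n, t - l]
                         for l in range(1, L + 1))
        R_obs = y[t] / denom_obs            # R based on observed counts
        R_pred = y_pred[n, t] / denom_pred  # R based on predictive draw n
        err[n] = (R_obs - R_pred) ** 2
    return err.mean()
```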
As presented, we have developed a robust methodology to estimate disease transmissibility varying across locations and time periods. Our method allows for computational flexibility and is not affected by conventional restrictions that are generally difficult to satisfy in real situations. However, because knowledge is very limited when a new emerging infection first occurs, it is extremely challenging for policy makers to make decisions under enormous uncertainty. It is therefore essential to integrate relevant information streams into the analysis in order to develop the best disease-management plans possible. Hence, in the next section we extend the univariate framework to incorporate knowledge from related diseases.
Multivariate surveillance reproduction number
Limitations in the availability of disease information constrain public health efforts to prevent and control outbreaks. Thus utilizing knowledge we have from other sources, such as related diseases, can help improve the surveillance system. Dengue is one of the major arthropod-borne diseases in tropical and sub-tropical regions. The virus belongs to the genus Flavivirus and, like Zika, is primarily transmitted by Aedes mosquitoes. Similarity in virological characteristics, identification as etiologic agents of similar illness, and their co-infection suggest that these two Aedes-transmitted viruses can circulate in the same area, supporting the underlying potential for Zika to have a spreading pattern similar to Dengue [20, 21]. Therefore, in this section we develop a multivariate transmissibility measure allowing for transfer of spatiotemporal knowledge between these two flaviviruses in order to maximize surveillance ability, which was not considered in previous literature.
In the multi-disease surveillance setting, spatial data on multiple diseases are observed at each time period. We assume that yit and \( {y}_{it}^s \) are the numbers of new Zika and Dengue (with superscript) cases, which are Poisson distributed with means μit and \( {\mu}_{it}^s \) for each area i and time t. Using a logarithmic link function, α and αs are the overall intercepts, and Xitβit and \( {\boldsymbol{X}}_{it}^s{\boldsymbol{\beta}}_{it}^s \) are the covariate predictors for the two diseases. In general, once multiple diseases are introduced into an analysis there is a need to consider relations between the diseases. This can be achieved in various ways. A basic approach is to consider cross-correlation between the diseases [22, 23], and there is a large literature on the specification of cross-disease models using Gaussian processes [24]. The multivariate conditional autoregressive (MCAR) model [25] and the shared component model [26] are the two primary approaches for modeling disease risk correlations across both spatial units and diseases. Here we use an extended version of the convolution model [27] to incorporate the diseases' correlation using multivariate spatially-correlated, ui and \( {u}_i^s \), and non-correlated, vi and \( {v}_i^s \), random effects to account for unobserved confounders and spatial variation. To capture the temporal trend, a multivariate autoregressive prior distribution, which allows sharing of temporal information between the diseases, is assumed for the temporal random effects, λt and \( {\lambda}_t^s \). In addition, there is a space-time interaction term for each disease, δit and \( {\delta}_{it}^s \), each assumed to have a Gaussian prior distribution. Finally, the infectivity profiles gil and \( {g}_{il}^s \) are jointly approximated by a standardized multivariate log-normal distribution. A multivariate extension of the reproduction number, Rms, it, incorporating information from both diseases can be defined as \( {R}_{ms, it}=\frac{\mu_{it}}{\int_0^{\infty }{g}_{it}(l){\mu}_{it-l} dl}\approx \frac{\mu_{it}}{\sum_{l=1}^L{g}_{il}{\mu}_{it-l}} \) and \( {R}_{ms, it}^s=\frac{\mu_{it}^s}{\int_0^{\infty }{g}_{it}^s(l){\mu}_{it-l}^s dl}\approx \frac{\mu_{it}^s}{\sum_{l=1}^L{g}_{il}^s{\mu}_{it-l}^s} \). The full specification of the joint model for Zika and Dengue is as follows:
$$ {\displaystyle \begin{array}{l}{y}_{it}\sim Poisson\left({\mu}_{it}\right);{y}_{it}^s\sim Poisson\left({\mu}_{it}^s\right)\\ {}\log \left({\mu}_{it}\right)=\alpha +{X}_{it}{\beta}_{it}+{u}_i+{v}_i+{\lambda}_t+{\delta}_{it}\\ {}\log \left({\mu}_{it}^s\right)={\alpha}^s+{X}_{it}^s{\beta}_{it}^s+{u}_i^s+{v}_i^s+{\lambda}_t^s+{\delta}_{it}^s\\ {}\alpha \sim N\left(0,{\tau}_{\alpha}^{-1}\right);{\alpha}^s\sim N\left(0,{\tau}_{\alpha^s}^{-1}\right)\\ {}{\beta}_{it}\sim N\left(0,{\tau}_{\beta}^{-1}\right);{\beta}_{it}^s\sim N\left(0,{\tau}_{\beta^s}^{-1}\right)\\ {}\left[\begin{array}{c}{u}_i\\ {}{u}_i^s\end{array}\right]\sim MCAR\left({\sum}_u^{-1}\right);\left[\begin{array}{c}{v}_i\\ {}{v}_i^s\end{array}\right]\sim MVN\left(\begin{array}{c}0\\ {}0\end{array},{\sum}_v^{-1}\right)\\ {}\left[\begin{array}{c}{\lambda}_t\\ {}{\lambda}_t^s\end{array}\right]\sim MVN\left(\begin{array}{c}{\lambda}_{t-1}\\ {}{\lambda}_{t-1}^s\end{array},{\sum}_{\lambda}^{-1}\right)\end{array}}\kern0.5em {\displaystyle \begin{array}{l}{\delta}_{it}\sim N\left(0,{\tau}_{\delta}^{-1}\right);{\delta}_{it}^s\sim N\left(0,{\tau}_{\delta^s}^{-1}\right)\\ {}{R}_{ms, it}=\frac{\mu_{it}}{\sum_{l=1}^L{g}_{il}{\mu}_{it-l}};{R}_{ms, it}^s=\frac{\mu_{it}^s}{\sum_{l=1}^L{g}_{il}^s{\mu}_{it-l}^s}\\ {}{g}_{il}=\frac{\exp \left({w}_{il}\right)}{\sum_{l=1}^L\exp \left({w}_{il}\right)};{g}_{il}^s=\frac{\exp \left({w}_{il}^s\right)}{\sum_{l=1}^L\exp \left({w}_{il}^s\right)}\\ {}\left[\begin{array}{c}{w}_{il}\\ {}{w}_{il}^s\end{array}\right]\sim MVN\left(\begin{array}{c}0\\ {}0\end{array},{\Sigma}_w^{-1}\right)\\ {}{\tau}_{\ast}^{-1/2}\sim Unif\left(0,10\right).\end{array}} $$
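To illustrate how the joint specification ties the two diseases together, the sketch below draws correlated infectiousness-profile weights for Zika and Dengue from a bivariate normal and computes each disease's Rms from its own fitted mean series. This is an illustration only, not the authors' implementation; the covariance matrix and the mean series are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(7)
L = 2

# correlated weights (w_l, w_l^s) ~ MVN(0, Sigma_w); Sigma_w is assumed here
Sigma_w = np.array([[1.0, 0.6],
                    [0.6, 1.0]])
w = rng.multivariate_normal(np.zeros(2), Sigma_w, size=L)  # shape (L, 2)
g = np.exp(w) / np.exp(w).sum(axis=0)                      # per-disease softmax

def multivariate_R(mu, g_col):
    """R_{ms,it} = mu_it / sum_{l=1..L} g_l mu_{i,t-l} for one disease."""
    T = len(mu)
    R = np.full(T, np.nan)
    for t in range(L, T):
        R[t] = mu[t] / sum(g_col[l - 1] * mu[t - l] for l in range(1, L + 1))
    return R

mu_zika = np.array([3, 4, 6, 9, 12, 10, 8], dtype=float)     # hypothetical fits
mu_dengue = np.array([20, 25, 31, 40, 38, 30, 26], dtype=float)
print(multivariate_R(mu_zika, g[:, 0]))
print(multivariate_R(mu_dengue, g[:, 1]))
```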
Application to dengue and Zika virus surveillance activities in Thailand
Dengue is endemic in Thailand, with peak transmission occurring in the rainy season, between June and September, across the country but particularly in northeastern Thailand. Zika was first reported in Thailand in 2012, and the Bangkok Metropolitan Authority has been conducting regular screening tests on its residents since then. To demonstrate the performance of the proposed integrative method, we apply the multivariate surveillance number, Rms, to Dengue and Zika prevalence in Thailand. The cases of both diseases were from the province of Chanthaburi, consisting of 10 health districts, during July 10th until August 27th 2016, a total of 7 weeks. The case information was reported by the public hospitals to the surveillance center. Note that the Dengue patients included in this analysis were those diagnosed with Dengue fever (DF) or Dengue hemorrhagic fever (DHF). The results displayed are based on the approximation of the surveillance number developed using prevalence in (6), with posterior sampling carried out using WinBUGS with an initial burn-in period of 100,000 iterations to allow convergence of the MCMC chains.
The estimates of the surveillance numbers are expected to depend on the choice of the infectious time l. The Aedes aegypti mosquito is the primary vector of Dengue. The virus is transmitted to humans through the bites of infected female mosquitoes. After virus incubation of 4–10 days, an infected mosquito is capable of transmitting the virus for the rest of its life. Infected symptomatic or asymptomatic humans are the main carriers and multipliers of the virus, serving as a source of the virus for uninfected mosquitoes. Patients who are already infected with the dengue virus can transmit the infection (for 4–5 days; maximum 12 days) via Aedes mosquitoes after their first symptoms appear. Zika is usually milder, with symptoms lasting for several days to a week. People usually do not get sick enough to go to the hospital, and they very rarely die of Zika [18].
MSPRE is applied to guide the choice of infectious time for the model. Table 1 displays the values of MSPRE for both diseases fitted with different sizes of the infectious time. The window size of 2 weeks fitted with the univariate model yields the smallest MSPRE for both Zika and Dengue. Although from a clinical manifestation point of view a window size of less than 2 weeks may be plausible for Zika, the result suggested by MSPRE is sensible when combined with the epidemiological knowledge that the incubation period and the virus lifespan in a mosquito can prolong the infectious period. Nonetheless, we have only weekly data and would recommend using a finer temporal scale when such data are available. To evaluate performance, we compare the univariate and multivariate models weekly so that both models use the same set of data. Based on the guidance from MSPRE and the information discussed above, we choose an infectious time of 2 weeks for both diseases in the analysis. Because the infectious times are assumed to be 2 weeks, for simplicity we assume the serial interval weights to have a Beta(1,1) prior distribution instead of the standardized log-normal distribution. All covariance matrices are fixed at \( \left[\begin{array}{cc}100& 0\\ {}0& 100\end{array}\right] \), although they could also be modeled with a Wishart distribution.
Table 1 MSPRE of Dengue and Zika fitted with the univariate model for different time windows
Table 2 presents DIC values obtained from the analysis with both the univariate and multivariate models during weeks 4–7. The DIC of the multivariate model is lower than those from each disease fitted separately with the univariate model across all time periods. Moreover, pD, which can be seen as a measure of model complexity, is also much smaller for the multivariate model. This suggests that pooling information from both diseases in the analysis substantially decreases variability in model fitting and provides a much better description of the spreading pattern of Zika and Dengue.
Table 2 DIC (pD) values for Dengue and Zika fitted with univariate and multivariate models during weeks 4–7
The estimated Rs and Rms describe the pattern of Dengue transmission similarly. However, Rms (bottom row, second column in Fig. 5) for Dengue, which also incorporates information on the Zika pattern in the integrative platform, indicates a smaller transmission rate than Rs (middle row, second column in Fig. 5) in a district in the south during the week of August 7th – August 13th. This is because during the same week the Zika incidence in that district (first row, second column in Fig. 6) was decreasing from the previous week (first row, first column in Fig. 6). On the other hand, during the week of August 7th – August 13th Dengue incidence was increasing from the previous week. Hence the Rms of Zika estimated from the multivariate model suggests a higher estimate of the disease strength than the univariate model does. These results demonstrate that the proposed integrated model allows transmission knowledge to be transferred between related diseases to optimize surveillance ability.
Maps of weekly Dengue incidence (top), Rs (middle), and Rms (bottom) in Chanthaburi during July 31st – August 27th 2016
Maps of weekly Zika incidence (top), Rs (middle), and Rms (bottom) in Chanthaburi during July 31st – August 27th 2016
A new emerging disease can occur in one place and has the potential to spread globally. This is an important challenge in terms of emergency preparedness, requiring the operation of surveillance systems. A traditional concept for studying the dynamics of infectious diseases is the basic reproduction number. However, the intuitive appeal of its theoretical interpretation can outlast the situations in which it is appropriate if applied incautiously, so it is crucial to be aware of its caveats when adopting that measure; otherwise, it could form an inappropriate backbone for disease-management policy. Alternatively, we present a robust and flexible methodology for estimating spatiotemporally varying reproduction numbers. Notwithstanding the issues of context-specific assumptions, our method provides practical advantages and can be used to simultaneously estimate disease transmissibility for each location and time within a user-friendly environment for real-time assessment of new emerging diseases.
To evaluate our method, we simulate data in several situations with different magnitudes of transmission and sizes of the infectious period. The simulation results suggest that the proposed method allows robust estimation of the surveillance reproduction number across the simulated scenarios. Although the simulation study suggests the method does not suffer much from the choice of infection window size, MSPRE may be helpful in providing guidance on the choice of infectious period in practice. Because information is limited when a new infection first emerges, the univariate framework is extended to incorporate knowledge from related diseases in order to maximize surveillance capability. A case study is provided of an integrative application to Dengue and Zika surveillance in Thailand.
A significant portion of arbovirus incidence (e.g., Zika) is underestimated due to asymptomatic infection without any clinical symptoms [28]. Nevertheless, the contribution of asymptomatic reservoirs to the overall disease burden has not been well quantified, which introduces considerable uncertainty into modeling studies of disease transmission dynamics and control strategies. Policy and practice on case detection and reporting of Dengue and Zika are critical factors given the high proportion of asymptomatic infection for these diseases. Therefore, novel surveillance tools, such as integrated surveillance, should be developed and applied to improve estimates of disease incidence, especially for infections with many asymptomatic cases such as Dengue and Zika.
The significance of our development lies in the advantages of multivariate surveillance: the ability to borrow strength across diseases and to allow conditioning of one disease on others. When applying the multivariate framework, the diseases combined should be epidemiologically and clinically related. Studies indicate the underlying possibility for Zika to have a spreading pattern similar to Dengue [2, 20]. We hence extend our method to integrate related diseases' information and demonstrate its performance in the example of Dengue and Zika surveillance in Thailand. The data example shows that combining both diseases in an integrated analysis substantially decreases the variability of model fitting. The result suggests that the proposed integrative platform, which allows transmission knowledge to be transferred between related diseases sharing a similar etiology, not only enhances the estimation of transmissibility but also explains the spreading pattern of Zika and Dengue much better. This is a key strength of the proposed multi-disease measure in improving surveillance ability.
Although the proposed method demonstrates robust performance, it should be noted that these data present considerable clinical and epidemiological complexity. In this work, prevalence information is used in the model due to the difficulties of disease investigation, which requires assuming that the ratio of incidence to prevalence is nearly constant over time. This is a limitation of our development: the assumption may not be appropriate for chronic diseases, but it is suitable for infections with relatively short duration. There is a further need for studies of virus circulation persistence and ecological factors, including characterization of immunological cross-reaction, which could shorten or prolong the epidemic [29]. Across both clinical and ecological studies, it is also important to evaluate the effects of host, viral, and vector relationships for a fuller understanding of the disease mechanism [18]. Nevertheless, the proposed methodology can serve as a flexible platform to incorporate those potential epidemiologic and ecologic determinants of disease risk as they become available.
New emerging diseases are public health crises in which policy makers have to make decisions in the presence of massive uncertainty. As presented, the proposed methodology is robust across several simulated scenarios of transmission force, with computational flexibility and practical benefits. This development is therefore well suited to surveillance applications for new emerging diseases such as Zika. To further prevent and control newly emerging infections, we must gain a fuller understanding of the modes of transmission, which is currently lacking. In such a context, it is natural to look for strategies that mirror those applied to relevant diseases. By transferring information from diseases sharing a similar etiology, such as Dengue, our multivariate framework can integrate knowledge and hence improve the surveillance system effectively. Therefore, in current situations where there are threats from new infections, a robust and flexible platform is essential and needs to be readily prepared in order to rapidly gain an understanding of new disease transmission mechanisms and counter local and global health concerns.
The data that support the findings of this study were obtained from the Thai Bureau of Epidemiology, but restrictions apply to the availability of these data, which were used with permission for the current study, and are therefore not publicly available. However, data may be available from the authors upon reasonable request and with permission of the Thai Bureau of Epidemiology.
DF: Dengue fever
DHF: Dengue hemorrhagic fever
DIC: Deviance Information Criterion
ICAR: Intrinsic Conditional Autoregressive model
MCAR: Multivariate Conditional Autoregressive model
MCMC: Markov chain Monte Carlo
MSE: Mean Squared Error
MSPE: Mean Squared Predictive Error
MSPRE: Mean Squared Predictive Reproduction Error
MVN: Multivariate Normal distribution
R 0 : Basic reproduction number
R ms : Multivariate surveillance reproduction number
R s : Surveillance reproduction number
Lessler J, Chaisson LH, Kucirka LM, Bi Q, Grantz K, Salje H, Cummings DA. Assessing the global threat from Zika virus. Science. 2016;353(6300).
Ferguson NM, et al. Countering the Zika epidemic in Latin America. Science. 2016;353(6297):353–4.
Diekmann O, Heesterbeek JAP. Mathematical epidemiology of infectious diseases: model building, analysis, and interpretation. Chichester: Wiley; 2000.
Dietz K. The estimation of the basic reproduction number for infectious diseases. Stat Methods Med Res. 1993;2(1):23–41.
Brauer F. Compartmental models in epidemiology. In Mathematical epidemiology. Berlin, Heidelberg: Springer; 2008. (pp. 19-79).
Li J, Blakeley D, Smith RJ. The Failure of R (0). In: Computational and mathematical methods in medicine, 2011; 2011. p. 527610.
Heffernan JM, Smith RJ, Wahl LM. Perspectives on the basic reproductive ratio. J R Soc Interface. 2005;2(4):281–93.
Fraser C. Estimating individual and household reproduction numbers in an emerging epidemic. PLoS One. 2007;2(8):e758.
Cori A, et al. A new framework and software to estimate time-varying reproduction numbers during epidemics. Am J Epidemiol. 2013;178(9):1505–12.
Keeling MJ. The effects of local spatial structure on epidemiological invasions. Proc R Soc Lond B Biol Sci. 1999;266(1421):859–67.
Chowell G, et al. Estimation of the reproduction number of dengue fever from spatial epidemic data. Math Biosci. 2007;208(2):571–89.
Nishiura H. Correcting the actual reproduction number: a simple method to estimate R0 from early epidemic growth data. Int J Environ Res Public Health. 2010;7(1):291–302.
Roberts M. The pluses and minuses of R0. J R Soc Interface. 2007;4(16):949–61.
Hethcote HW. The mathematics of infectious diseases. SIAM Rev. 2000;42(4):599–653.
Kermack WO, McKendrick AG. A contribution to the mathematical theory of epidemics. Proceedings of the royal society of london. Series A, Containing papers of a mathematical and physical character. 1927;115(772):700–21.
Diggle PJ, Menezes R, Su Tl. Geostatistical inference under preferential sampling. J R Stat Soc Ser C Appl Stat. 2010;59(2):191–232.
Gelfand AE, Sahu SK, Holland DM. On the effect of preferential sampling in spatial prediction. Environmetrics. 2012;23(7):565–78.
Hamel R, Liégeois F, Wichit S, Pompon J, Diop F, Talignani L, Missé D. Zika virus: epidemiology, clinical features and host-virus interactions. Microbes and Infection. 2016;18(7-8):441-9.
Gelfand AE, Ghosh SK. Model choice: a minimum posterior predictive loss approach. Biometrika. 1998;85(1):1–11.
Dupont-Rouzeyrol M, et al. Co-infection with Zika and dengue viruses in 2 patients, New Caledonia, 2014. Emerg Infect Dis. 2015;21(2):381–2.
Cardoso CW, et al. Outbreak of exanthematous illness associated with Zika, chikungunya, and dengue viruses, Salvador, Brazil. Emerg Infect Dis. 2015;21(12):2274.
Lawson A. Statistical Methods in Spatial Epidemiology. Somerset: Wiley; 2013.
Lawson A, et al. Handbook of Spatial Epidemiology. Boca Raton (Fla.): Chapman & Hall/CRC; 2016.
Banerjee S, Carlin B, Gelfand A. Hierarchical modeling and analysis for spatial data. Boca Raton (Fla.): Chapman & Hall/CRC.; 2015.
Gelfand AE, Vounatsou P. Proper multivariate conditional autoregressive models for spatial data analysis. Biostatistics. 2003;4(1):11–5.
Knorr-Held L, Best NG. A shared component model for detecting joint and selective clustering of two diseases. J R Stat Soc A Stat Soc. 2001;164(1):73–85.
Besag J, York J, Mollié A. Bayesian image restoration, with two applications in spatial statistics. Ann Inst Stat Math. 1991;43(1):1–20.
Moghadas SM, et al. Asymptomatic transmission and the dynamics of Zika infection. Sci Rep. 2017;7(1):5829.
Dejnirattisai W, et al. Dengue virus sero-cross-reactivity drives antibody-dependent enhancement of infection with zika virus. Nat Immunol. 2016;17(9):1102–8.
We would like to thank Dr. Saranath Lawpoolsri for assistance with the epidemiological interpretation. We are also thankful for constructive suggestions from reviewers to improve our manuscript.
This research was supported by the new researcher grant of Mahidol University and ICTM grant from the Faculty of Tropical Medicine. The funding body had no role in the design or analysis of the study, interpretation of results, or writing of the manuscript.
Department of Tropical Hygiene, Faculty of Tropical Medicine, Mahidol University, Ratchathewi, Bangkok, 10400, Thailand
Chawarat Rotejanaprasert
Mahidol-Oxford Tropical Medicine Research Unit, Faculty of Tropical Medicine, Mahidol University, Bangkok, 10400, Thailand
Department of Public Health Sciences, Medical University of South Carolina, Charleston, SC, 29425, USA
Andrew B. Lawson
Department of Disease Control, Ministry of Public Health, Nonthaburi, 11000, Thailand
Sopon Iamsirithaworn
All authors contributed to the conceptual design of the study. CR developed the statistical methodology with critical input from AL and SI. CR completed all statistical analyses and drafted the manuscript. SI was responsible for clinical revision and improvements of the manuscript. CR and AL contributed to the manuscript editing. All authors have read and approved the final manuscript.
Correspondence to Chawarat Rotejanaprasert.
The research was approved by the ethics committee of the Faculty of Tropical Medicine, Mahidol University.
Rotejanaprasert, C., Lawson, A.B. & Iamsirithaworn, S. Spatiotemporal multi-disease transmission dynamic measure for emerging diseases: an application to dengue and zika integrated surveillance in Thailand. BMC Med Res Methodol 19, 200 (2019) doi:10.1186/s12874-019-0833-6
Keywords: Spatiotemporal; Data analysis, statistics and modelling
Past Probability Seminars Spring 2020
= Spring 2017 =
<b>Thursdays in 901 Van Vleck Hall at 2:25 PM</b>, unless otherwise noted.
<b>We usually end for questions at 3:15 PM.</b>
If you would like to sign up for the email list to receive seminar announcements then please send an email to [mailto:[email protected] [email protected]].
== <span style="color:red"> Monday</span>, January 9, <span style="color:red"> 4pm, B233 Van Vleck </span> [http://www.stat.berkeley.edu/~racz/ Miklos Racz], Microsoft Research ==
<div style="width:320px;height:50px;border:5px solid black">
<b><span style="color:red"> Please note the unusual day and time </span></b>
</div>
Title: '''Statistical inference in networks and genomics'''
From networks to genomics, large amounts of data are increasingly available and play critical roles in helping us understand complex systems. Statistical inference is crucial in discovering the underlying structures present in these systems, whether this concerns the time evolution of a network, an underlying geometric structure, or reconstructing a DNA sequence from partial and noisy information. In this talk I will discuss several fundamental detection and estimation problems in these areas.
I will present an overview of recent developments in source detection and estimation in randomly growing graphs. For example, can one detect the influence of the initial seed graph? How good are root-finding algorithms? I will also discuss inference in random geometric graphs: can one detect and estimate an underlying high-dimensional geometric structure? Finally, I will discuss statistical error correction algorithms for DNA sequencing that are motivated by DNA storage, which aims to use synthetic DNA as a high-density, durable, and easy-to-manipulate storage medium of digital data.
== Thursday, January 19, TBA ==
== Thursday, January 26, [http://mathematics.stanford.edu/people/name/erik-bates/ Erik Bates], [http://mathematics.stanford.edu/ Stanford] ==
Title: '''The endpoint distribution of directed polymers'''
Abstract: On the d-dimensional integer lattice, directed polymers are paths of a random walk in random environment, except that the environment updates at each time step. The result is a statistical mechanical system, whose qualitative behavior is governed by a temperature parameter and the law of the environment. Historically, the phase transitions have been best understood by whether or not the path's endpoint localizes. While the endpoint is no longer a Markov process as in a random walk, its quenched distribution is. The key difficulty is that the space of measures is too large for one to expect convergence results. By adapting methods recently used by Mukherjee and Varadhan, we develop a compactification theory to resolve the issue. In this talk, we will discuss this intriguing abstraction, as well as new concrete theorems it allows us to prove for directed polymers.
This talk is based on joint work with Sourav Chatterjee.
== Thursday, 2/2/2017, TBA ==
== Thursday, February 9, TBA ==
== Thursday, 2/16/2017, TBA ==
== January 23, 2020, [https://www.math.wisc.edu/~seppalai/ Timo Seppalainen] (UW Madison) ==

'''Non-existence of bi-infinite geodesics in the exponential corner growth model'''

Whether bi-infinite geodesics exist has been a significant open problem in first- and last-passage percolation since the mid-80s. A non-existence proof in the case of directed planar last-passage percolation with exponential weights was posted by Basu, Hoffman and Sly in November 2018. Their proof utilizes estimates from integrable probability. This talk describes an independent proof completed 10 months later that relies on couplings, coarse graining, and control of geodesics through planarity and increment-stationary last-passage percolation. Joint work with Marton Balazs and Ofer Busani (Bristol).
== Thursday, February 23, [http://www.math.wisc.edu/~jeanluc/ Jean-Luc Thiffeault], [http://www.math.wisc.edu/ UW-Madison] ==
'''Title:''' Heat Exchange and Exit Times
A heat exchanger can be modeled as a closed domain containing an incompressible fluid. The fluid has some temperature distribution obeying the advection-diffusion equation, with zero temperature boundary conditions at the walls. The goal is then to start from some initial positive heat distribution, and to flux it through the walls as fast as possible. Even for a steady flow, this is a time-dependent problem, which can be hard to optimize. Instead, we consider the mean exit time of Brownian particles starting from inside the domain. A flow favorable to heat exchange should lower the exit time, and so we minimize some norm of the exit time over incompressible flows (drifts) with a given energy. This is a simpler, time-independent optimization problem, which we then proceed to solve analytically in some limits, and numerically otherwise.
== Thursday, March 16, [http://www-users.math.umn.edu/~wkchen/ Wei-Kuo Chen], [http://math.umn.edu/ Minnesota] ==
Title: '''Energy landscape of mean-field spin glasses'''
The Sherrington-Kirkpatrick (SK) model is a mean-field spin glass introduced by theoretical physicists in order to explain the strange behavior of certain alloys, such as CuMn. Despite its seemingly simple formulation, it was conjectured to possess a number of fruitful properties. This talk will be focused on the energy landscape of the SK model. First, we will present a formula for the maximal energy in Parisi's formulation. Second, we will give a description of the energy landscape by showing that near any given energy level between zero and maximal energy, there exist exponentially many equidistant spin configurations. Based on joint works with Auffinger, Handschy, and Lerman.
== Thursday, March 23, Spring Break ==
== <span style="color:red"> Wednesday, 3/29/2017, 1:00pm, </span> [http://homepages.cae.wisc.edu/~loh/index.html Po-Ling Loh], [http://www.engr.wisc.edu/department/electrical-computer-engineering/ UW-Madison] ==
<b><span style="color:red"> Please note the unusual day and time </span></b>
== Thursday, April 6, 2017, [http://people.maths.ox.ac.uk/woolley/ Thomas Woolley], [https://www.maths.ox.ac.uk/ Oxford] ==
== Thursday, September 8, Daniele Cappelletti, [http://www.math.wisc.edu UW-Madison] ==
Title: '''Reaction networks: comparison between deterministic and stochastic models'''
Abstract: Mathematical models for chemical reaction networks are widely used in biochemistry, as well as in other fields. The original aim of the models is to predict the dynamics of a collection of reactants that undergo chemical transformations. There exist two standard modeling regimes: a deterministic and a stochastic one. These regimes are chosen case by case in accordance to what is believed to be more appropriate. It is natural to wonder whether the dynamics of the two different models are linked, and whether properties of one model can shed light on the behavior of the other one. Some connections between the two modelling regimes have been known for forty years, and new ones have been pointed out recently. However, many open questions remain, and the issue is still largely unexplored.
== <span style="color:red"> Friday</span>, September 16, <span style="color:red"> 11 am </span> [http://www.baruch.cuny.edu/math/elenak/ Elena Kosygina], [http://www.baruch.cuny.edu/ Baruch College] and the [http://www.gc.cuny.edu/Page-Elements/Academics-Research-Centers-Initiatives/Doctoral-Programs/Mathematics CUNY Graduate Center] ==
The talk will be in Van Vleck 910 as usual.
Title: '''Homogenization of viscous Hamilton-Jacobi equations: a remark and an application.'''
Abstract: It has been pointed out in the seminal work of P.-L. Lions, G. Papanicolaou, and S.R.S. Varadhan that for the first order
Hamilton-Jacobi (HJ) equation, homogenization starting with affine initial data should imply homogenization for general uniformly
continuous initial data. The argument utilized the properties of the HJ semi-group, in particular, the finite speed of propagation. The
last property is lost for viscous HJ equations. We remark that the above mentioned implication holds under natural conditions for both
viscous and non-viscous Hamilton-Jacobi equations. As an application of our result, we show homogenization in a stationary ergodic setting for a special class of viscous HJ equations with a non-convex Hamiltonian in one space dimension.
This is a joint work with Andrea Davini, Sapienza Università di Roma.
== Thursday, September 22, [http://www.math.wisc.edu/~pmwood/ Philip Matchett Wood], [https://www.math.wisc.edu/ UW-Madison] ==
Title: '''Low-degree factors of random polynomials'''
Abstract: We study the probability that a monic polynomial with integer coefficients has a low-degree factor over the integers.
It is known that certain models are very likely to produce random polynomials that are irreducible, and our project
can be viewed as part of a general program of testing whether this is a universal behavior exhibited by many random
polynomial models. Interestingly, though the question comes from algebra and number theory, we primarily use tools
from combinatorics, including additive combinatorics, and probability theory. We prove for a variety of models that it
is very unlikely for a random polynomial with integer coefficients to have a low-degree factor—suggesting that this is, in
fact, a universal behavior. For example, we show that the characteristic polynomial of random matrix with independent
+1 or −1 entries is very unlikely to have a factor of degree up to <math>n^{1/2-\epsilon}</math>. Joint work with Sean O'Rourke. The talk will also discuss joint work with UW-Madison
undergraduates Christian Borst, Evan Boyd, Claire Brekken, and Samantha Solberg, who were supported
by NSF grant DMS-1301690 and co-supervised by Melanie Matchett Wood.
== Thursday, September 29, [http://www.artsci.uc.edu/departments/math/fac_staff.html?eid=najnudjh&thecomp=uceprof Joseph Najnudel], [http://www.artsci.uc.edu/departments/math.html University of Cincinnati]==
Title: '''On the maximum of the characteristic polynomial of the Circular Beta Ensemble'''
In this talk, we present our result on the extremal values of (the logarithm of) the characteristic polynomial of a random unitary matrix whose spectrum is distributed according to the Circular Beta Ensemble. Using different techniques, it gives an improvement and a generalization of the previous recent results by Arguin, Belius, Bourgade on the one hand, and Paquette, Zeitouni on the other hand. They recently treated the CUE case, which corresponds to beta equal to 2.
== Thursday, October 6, No Seminar ==
== Thursday, October 13, No Seminar due to [http://sites.math.northwestern.edu/mwp/ Midwest Probability Colloquium] ==
For details, see [http://sites.math.northwestern.edu/mwp/ Midwest Probability Colloquium].
== Thursday, October 20, [http://www.math.harvard.edu/people/index.html Amol Aggarwal], [http://www.math.harvard.edu/ Harvard] ==
Title: Current Fluctuations of the Stationary ASEP and Six-Vertex Model
Abstract: We consider the following three models from statistical mechanics: the asymmetric simple exclusion process, the stochastic six-vertex model, and the ferroelectric symmetric six-vertex model. It had been predicted by the physics communities for some time that the limiting behavior for these models, run under certain classes of translation-invariant (stationary) boundary data, are governed by the large-time statistics of the stationary Kardar-Parisi-Zhang (KPZ) equation. The purpose of this talk is to explain these predictions in more detail and survey some of our recent work that verifies them.
== Thursday, October 27, [http://www.math.wisc.edu/~hung/ Hung Tran], [http://www.math.wisc.edu/ UW-Madison] ==
Title: '''Homogenization of non-convex Hamilton-Jacobi equations'''
Abstract: I will describe why it is hard to do homogenization for non-convex Hamilton-Jacobi equations and explain some recent results in this direction. I will also make a very brief connection to first passage percolation and address some challenging questions which appear in both directions. This is based on joint work with Qian and Yu.
== Thursday, November 3, Alejandro deAcosta, [http://math.case.edu/ Case-Western Reserve] ==
Title: '''Large deviations for irreducible Markov chains with general state space'''
We study the large deviation principle for the empirical measure of general irreducible Markov chains in the tau topology for a broad class of initial distributions. The roles of several rate functions, including the rate function based on the convergence parameter of the transform kernel and the Donsker-Varadhan rate function, are clarified.
== Thursday, November 10, [https://sites.google.com/a/wisc.edu/louisfan/home Louis Fan], [https://www.math.wisc.edu/ UW-Madison] ==
Title: '''Particle representations for (stochastic) reaction-diffusion equations'''
Reaction-diffusion equations (RDE) are a popular tool to model complex spatial-temporal patterns including Turing patterns, traveling waves and periodic switching.
These models, however, ignore the stochasticity and individuality of many complex systems in nature. Recognizing this drawback, scientists are developing individual-based models for model selection purposes. The latter models are sometimes studied under the framework of interacting particle systems (IPS) by mathematicians, who prove scaling limit theorems to connect various IPS with RDE across scales.
In this talk, I will present some new limiting objects including SPDE on metric graphs and coupled SPDE. These SPDE reduce to RDE when the noise parameter tends to zero, therefore interpolate between IPS and RDE and identify the source of stochasticity. Scaling limit theorems and novel duality formulas are obtained for these SPDE, which not only connect phenomena across scales, but also offer insights about the genealogies and time-asymptotic properties of certain population dynamics. In particular, I will present rigorous results about the lineage dynamics of a biased voter model introduced by Hallatschek and Nelson (2007).
== Thursday, November 24, No Seminar due to Thanksgiving ==
== Thursday, December 1, [http://math.columbia.edu/~hshen/ Hao Shen], [http://math.columbia.edu/~hshen/ Columbia] ==
Title: '''On scaling limits of Open ASEP and Glauber dynamics of ferromagnetic models'''
We discuss two scaling limit results for discrete models converging to stochastic PDEs. The first is the asymmetric simple exclusion process in contact with sources and sinks at boundaries, called Open ASEP. We prove that under weakly asymmetric scaling the height function converges to the KPZ equation with Neumann boundary conditions. The second is the Glauber dynamics of the Blume-Capel model (a generalization of Ising model), in two dimensions with Kac potential. We prove that the averaged spin field converges to the stochastic quantization equations. A common challenge in the proofs is how to identify the limiting process as the solution to the SPDE, and we will discuss how to overcome the difficulties in the two cases.(Based on joint works with Ivan Corwin and Hendrik Weber.)
== '''Colloquium''' Friday, December 2, [http://math.columbia.edu/~hshen/ Hao Shen], [http://math.columbia.edu/~hshen/ Columbia] ==
4pm, Van Vleck 9th floor
Title: '''Singular Stochastic Partial Differential Equations - How do they arise and what do they mean?'''
Abstract: Systems with random fluctuations are ubiquitous in the real world. Stochastic PDEs are default models for these random systems, just as PDEs are default models for deterministic systems. However, a large class of such stochastic PDEs were poorly understood until very recently: the presence of very singular random forcing as well as nonlinearities render it challenging to interpret what one even means by a ``solution". The recent breakthroughs by M. Hairer, M. Gubinelli and other researchers including the speaker not only established solution theories for these singular SPDEs, but also led to an explosion of new questions. These include scaling limits of random microscopic models, development of numerical schemes, ergodicity of random dynamical systems and a new approach to quantum field theory. In this talk we will discuss the main ideas of the recent solution theories of singular SPDEs, and how these SPDEs arise as limits of various important physical models.
== Thursday, January 28, [http://faculty.virginia.edu/petrov/ Leonid Petrov], [http://www.math.virginia.edu/ University of Virginia] ==
Title: '''The quantum integrable particle system on the line'''
I will discuss the higher spin six vertex model - an interacting particle
system on the discrete 1d line in the Kardar--Parisi--Zhang universality
class. Observables of this system admit explicit contour integral expressions
which degenerate to many known formulas of such type for other integrable
systems on the line in the KPZ class, including stochastic six vertex model,
ASEP, various <math>q</math>-TASEPs, and associated zero range processes. The structure
of the higher spin six vertex model (leading to contour integral formulas for
observables) is based on Cauchy summation identities for certain symmetric
rational functions, which in turn can be traced back to the sl2 Yang--Baxter
equation. This framework allows to also include space and spin inhomogeneities
into the picture, which leads to new particle systems with unusual phase
transitions.
== Thursday, February 4, [http://homepages.math.uic.edu/~nenciu/Site/Contact.html Irina Nenciu], [http://www.math.uic.edu/ UIC], Joint Probability and Analysis Seminar ==
Title: '''On some concrete criteria for quantum and stochastic confinement'''
Abstract: In this talk we will present several recent results on criteria ensuring the confinement of a quantum or a stochastic particle to a bounded domain in <math>\mathbb{R}^n</math>. These criteria are given in terms of explicit growth and/or decay rates for the diffusion matrix and the drift potential close to the boundary of the domain. As an application of the general method, we will discuss several cases, including some where the background Riemannian manifold (induced by the diffusion matrix) is geodesically incomplete. These results are part of an ongoing joint project with G. Nenciu (IMAR, Bucharest, Romania).
== <span style="color:green">Friday, February 5</span>, [http://www.math.ku.dk/~d.cappelletti/index.html Daniele Cappelletti], [http://www.math.ku.dk/ Copenhagen University], speaks in the [http://www.math.wisc.edu/wiki/index.php/Applied/ACMS Applied Math Seminar], <span style="color:green">2:25pm in Room 901 </span>==
'''Note:''' Daniele Cappelletti is speaking in the [http://www.math.wisc.edu/wiki/index.php/Applied/ACMS Applied Math Seminar], but his research on stochastic reaction networks uses probability theory and is related to work of our own [http://www.math.wisc.edu/~anderson/ David Anderson].
Title: '''Deterministic and Stochastic Reaction Networks'''
Abstract: Mathematical models of biochemical reaction networks are of great interest for the analysis of experimental data and theoretical biochemistry. Moreover, such models can be applied in a broader framework than that provided by biology. The classical deterministic model of a reaction network is a system of ordinary differential equations, and the standard stochastic model is a continuous-time Markov chain. A relationship between the dynamics of the two models can be found for compact time intervals, while the asymptotic behaviours of the two models may differ greatly. I will give an overview of these problems and show some recent development.
Whether bi-infinite geodesics exist has been a significant open problem in first- and last-passage percolation since the mid-80s. A non-existence proof in the case of directed planar last-passage percolation with exponential weights was posted by Basu, Hoffman and Sly in November 2018. Their proof utilizes estimates from integrable probability. This talk describes an independent proof completed 10 months later that relies on couplings, coarse graining, and control of geodesics through planarity and increment-stationary last-passage percolation. Joint work with Marton Balazs and Ofer Busani (Bristol).
== Thursday, February 25, [http://www.princeton.edu/~rvan/ Ramon van Handel], [http://orfe.princeton.edu/ ORFE] and [http://www.pacm.princeton.edu/ PACM, Princeton] ==

Title: '''The norm of structured random matrices'''

Abstract: Understanding the spectral norm of random matrices is a problem of basic interest in several areas of pure and applied mathematics. While the spectral norm of classical random matrix models is well understood, existing methods almost always fail to be sharp in the presence of nontrivial structure. In this talk, I will discuss new bounds on the norm of random matrices with independent entries that are sharp under mild conditions. These bounds shed significant light on the nature of the problem, and make it possible to easily address otherwise nontrivial phenomena such as the phase transition of the spectral edge of random band matrices. I will also discuss some conjectures whose resolution would complete our understanding of the underlying probabilistic mechanisms.

== January 30, 2020, [https://www.math.wisc.edu/people/vv-prof-directory Scott Smith] (UW Madison) ==

'''Quasi-linear parabolic equations with singular forcing'''

The classical solution theory for stochastic ODE's is centered around Ito's stochastic integral. By intertwining ideas from analysis and probability, this approach extends to many PDE's, a canonical example being multiplicative stochastic heat equations driven by space-time white noise. In both the ODE and PDE settings, the solution theory is beyond the scope of classical deterministic theory because of the ambiguity in multiplying a function with a white noise. The theory of rough paths and regularity structures provides a more quantitative understanding of this difficulty, leading to a more refined solution theory which efficiently divides the analytic and probabilistic aspects of the problem, and remarkably, even has an algebraic component.

In this talk, we will discuss a new application of these ideas to stochastic heat equations where the strength of the diffusion is not constant but random, as it depends locally on the solution. These are known as quasi-linear equations. Our main result yields the deterministic side of a solution theory for these PDE's, modulo a suitable renormalization. Along the way, we identify a formally infinite series expansion of the solution which guides our analysis, reveals a nice algebraic structure, and encodes the counter-terms in the PDE. This is joint work with Felix Otto, Jonas Sauer, and Hendrik Weber.
== Thursday, March 3, [http://www.math.wisc.edu/~janjigia/ Chris Janjigian], [http://www.math.wisc.edu/ UW-Madison] ==

Title: '''Large deviations for certain inhomogeneous corner growth models'''

The corner growth model is a classical model of growth in the plane and is connected to other familiar models such as directed last passage percolation and the TASEP through various geometric maps. In the case that the waiting times are i.i.d. with exponential or geometric marginals, the model is well understood: the shape function can be computed exactly, the fluctuations around the shape function are known to be given by the Tracy-Widom GUE distribution, and large deviation principles corresponding to this limit have been derived.

This talk considers the large deviation properties of a generalization of the classical model in which the rates of the exponential are drawn randomly in an appropriate way. We will discuss some exact computations of rate functions in the quenched and annealed versions of the model, along with some interesting properties of large deviations in this model. (Based on joint work with Elnur Emrah.)

== February 6, 2020, [https://sites.google.com/site/cyleeken/ Cheuk-Yin Lee] (Michigan State) ==

'''Sample path properties of stochastic partial differential equations: modulus of continuity and multiple points'''

In this talk, we will discuss sample path properties of stochastic partial differential equations (SPDEs). We will present a sharp regularity result for the stochastic wave equation driven by an additive Gaussian noise that is white in time and colored in space. We prove the exact modulus of continuity via the property of local nondeterminism. We will also discuss the existence problem for multiple points (or self-intersections) of the sample paths of SPDEs. Our result shows that multiple points do not exist in the critical dimension for a large class of Gaussian random fields including the solution of a linear system of stochastic heat or wave equations.

== February 13, 2020, [http://www.jelena-diakonikolas.com/ Jelena Diakonikolas] (UW Madison) ==

'''Langevin Monte Carlo Without Smoothness'''

Langevin Monte Carlo (LMC) is an iterative algorithm used to generate samples from a distribution that is known only up to a normalizing constant. The nonasymptotic dependence of its mixing time on the dimension and target accuracy is understood mainly in the setting of smooth (gradient-Lipschitz) log-densities, a serious limitation for applications in machine learning. We remove this limitation by providing polynomial-time convergence guarantees for a variant of LMC in the setting of non-smooth log-concave distributions. At a high level, our results follow by leveraging the implicit smoothing of the log-density that comes from a small Gaussian perturbation that we add to the iterates of the algorithm, while controlling the bias and variance that are induced by this perturbation.

Based on joint work with Niladri Chatterji, Michael I. Jordan, and Peter L. Bartlett.
== Thursday, March 10, [http://www.math.wisc.edu/~jyin/jun-yin.html Jun Yin], [http://www.math.wisc.edu/ UW-Madison] ==

Title: '''Delocalization and Universality of band matrices'''

Abstract: In this talk we introduce our new work on band matrices, whose eigenvectors and eigenvalues are widely believed to have the same asymptotic behaviors as those of Wigner matrices. We proved that this conjecture is true as long as the bandwidth is wide enough.

== February 20, 2020, [https://math.berkeley.edu/~pmwood/ Philip Matchett Wood] (UC Berkeley) ==

'''A replacement principle for perturbations of non-normal matrices'''

There are certain non-normal matrices whose eigenvalues can change dramatically when a small perturbation is added. However, when that perturbation is an iid random matrix, it appears that the eigenvalues become stable after perturbation and only change slightly when further small perturbations are added. Much of the work in this situation has focused on iid random gaussian perturbations. In this talk, we will discuss work on a universality result that allows for consideration of non-gaussian perturbations, and that shows that all perturbations satisfying certain conditions will produce the same limiting eigenvalue measure. Interestingly, this even allows for deterministic perturbations to be considered. Joint work with Sean O'Rourke.

== February 27, 2020, No seminar ==
== Thursday, March 17, [http://www.math.wisc.edu/~roch/ Sebastien Roch], [http://www.math.wisc.edu/ UW-Madison] ==

Title: '''Recovering the Treelike Trend of Evolution Despite Extensive Lateral Genetic Transfer'''

Reconstructing the tree of life from molecular sequences is a fundamental problem in computational biology. Modern data sets often contain large numbers of genes. That can complicate the reconstruction because different genes often undergo different evolutionary histories. This is the case in particular in the presence of lateral genetic transfer (LGT), where a gene is inherited from a distant species rather than an immediate ancestor. Such an event produces a gene tree which is distinct from (but related to) the species phylogeny. In this talk I will sketch recent results showing that, under a natural stochastic model of LGT, the species phylogeny can be reconstructed from gene trees despite surprisingly high rates of LGT.

== March 5, 2020, [https://www.ias.edu/scholars/jiaoyang-huang Jiaoyang Huang] (IAS) ==

'''Large Deviation Principles via Spherical Integrals'''

In this talk, I'll explain a framework to study the large deviation principle for matrix models and their quantized versions, by tilting the measures using the asymptotics of spherical integrals obtained by Guionnet and Zeitouni. As examples, we obtain

1) the large deviation principle for the empirical distribution of the diagonal entries of $UB_NU^*$, for a sequence of $N\times N$ diagonal matrices $B_N$ and unitary/orthogonal Haar distributed matrices $U$;

2) the large deviation upper bound for the empirical eigenvalue distribution of $A_N+UB_NU^*$, for two sequences of $N\times N$ diagonal matrices $A_N, B_N$, and their complementary lower bounds at "good" probability distributions;

3) the large deviation principle for the Kostka number $K_{\lambda_N \eta_N}$, for two sequences of partitions $\lambda_N, \eta_N$ with at most $N$ rows;

4) the large deviation upper bound for the Littlewood-Richardson coefficients $c_{\lambda_N \eta_N}^{\kappa_N}$, for three sequences of partitions $\lambda_N, \eta_N, \kappa_N$ with at most $N$ rows, and their complementary lower bounds at "good" probability distributions.

This is a joint work with Belinschi and Guionnet.

== Thursday, March 24, No Seminar, Spring Break ==

== Thursday, March 31, [http://www.ssc.wisc.edu/~whs/ Bill Sandholm], [http://www.econ.wisc.edu/ Economics, UW-Madison] ==

Title: '''A Sample Path Large Deviation Principle for a Class of Population Processes'''

Abstract: We establish a sample path large deviation principle for sequences of Markov chains arising in game theory and other applications. As the state spaces of these Markov chains are discrete grids in the simplex, our analysis must account for the fact that the processes run on a set with a boundary. A key step in the analysis establishes joint continuity properties of the state-dependent Cramer transform L(·,·), the running cost appearing in the large deviation principle rate function.

[http://www.ssc.wisc.edu/~whs/research/ldp.pdf paper preprint]

== March 12, 2020, No seminar ==

== March 19, 2020, Spring break ==
== Thursday, April 7, No Seminar ==

== March 26, 2020, CANCELLED, [https://math.cornell.edu/philippe-sosoe Philippe Sosoe] (Cornell) ==

== Thursday, April 14, [https://www.math.wisc.edu/~jessica/ Jessica Lin], [https://www.math.wisc.edu/~jessica/ UW-Madison], Joint with [https://www.math.wisc.edu/wiki/index.php/PDE_Geometric_Analysis_seminar PDE Geometric Analysis seminar] ==

Title: '''Optimal Quantitative Error Estimates in Stochastic Homogenization for Elliptic Equations in Nondivergence Form'''

Abstract: I will present optimal quantitative error estimates in the stochastic homogenization for uniformly elliptic equations in nondivergence form. From the point of view of probability theory, stochastic homogenization is equivalent to identifying a quenched invariance principle for random walks in a balanced random environment. Under strong independence assumptions on the environment, the main argument relies on establishing an exponential version of the Efron-Stein inequality. As an artifact of the optimal error estimates, we obtain a regularity theory down to microscopic scale, which implies estimates on the local integrability of the invariant measure associated to the process. This talk is based on joint work with Scott Armstrong.

== April 2, 2020, CANCELLED, [http://pages.cs.wisc.edu/~tl/ Tianyu Liu] (UW Madison) ==

== April 9, 2020, CANCELLED, [http://stanford.edu/~ajdunl2/ Alexander Dunlap] (Stanford) ==

== April 16, 2020, CANCELLED, [https://statistics.wharton.upenn.edu/profile/dingjian/ Jian Ding] (University of Pennsylvania) ==
== Thursday, April 21, [http://www.cims.nyu.edu/~bourgade/ Paul Bourgade], [https://www.cims.nyu.edu/ Courant Institute, NYU] ==

Title: '''Freezing and extremes of random unitary matrices'''

Abstract: A conjecture of Fyodorov, Hiary & Keating states that the maxima of the characteristic polynomial of random unitary matrices behave like the maxima of a specific class of Gaussian fields, the log-correlated Gaussian fields. We will outline the proof of the conjecture for the leading order of the maximum, and a freezing of the free energy related to the matrix model. This talk is based on a joint work with Louis-Pierre Arguin and David Belius.

== April 22-24, 2020, CANCELLED, [http://frg.int-prob.org/ FRG Integrable Probability] meeting ==

3-day event in Van Vleck 911

== April 23, 2020, CANCELLED, [http://www.hairer.org/ Martin Hairer] (Imperial College) ==

[https://www.math.wisc.edu/wiki/index.php/Colloquia Wolfgang Wasow Lecture] at 4pm in Van Vleck 911

== Thursday, April 28, [http://www.ime.unicamp.br/~nancy/ Nancy Garcia], [http://www.ime.unicamp.br/conteudo/departamento-estatistica Statistics], [http://www.ime.unicamp.br/ IMECC], [http://www.unicamp.br/unicamp/ UNICAMP, Brazil] ==

Title: '''Rumor processes on <math>\mathbb{N}</math> and discrete renewal processes'''

Abstract: We study two rumor processes on the positive integers, the dynamics of which are related to an SI epidemic model with long range transmission. Start with one spreader at site <math>0</math> and ignorants situated at some other sites of <math>\mathbb{N}</math>. The spreaders transmit the information within a random distance on their right. Depending on the initial distribution of the ignorants, we obtain probability of survival, information on the distribution of the range of the rumor and limit theorems for the proportion of spreaders. The key step of our approach is to relate this model to the house-of-cards process.

== April 30, 2020, CANCELLED, [http://willperkins.org/ Will Perkins] (University of Illinois at Chicago) ==
== Thursday, May 5, [http://math.arizona.edu/~dianeholcomb/ Diane Holcomb], [http://math.arizona.edu/ University of Arizona] ==
Title: '''Local limits of Dyson's Brownian Motion at multiple times'''
Abstract: Dyson's Brownian Motion may be thought of as a generalization of Brownian Motion to the matrix setting. We can study the eigenvalues of a Dyson's Brownian motion at multiple times. The resulting object has different "color" points corresponding to the eigenvalues at different times. Similar to a single time, the correlation functions of the process may be described in terms of determinantal formulas. We study the local behavior of the eigenvalues as we take the dimension of the associated matrix to infinity. The resulting limiting process in the bulk is again determinantal and is described with an "extended sine kernel." This work aims to give an alternate description of the limiting process in terms of the counting function. In this seminar I will go over the description and methods for finding such a limit. This is work in progress and is joint with Elliot Paquette (Weizmann Institute).
[[Past Seminars]]
How is a single qubit fundamentally different from a classical coin spinning in the air?
I had asked this question earlier in the comment section of the post: What is a qubit? but none of the answers there seem to address it at a satisfactory level.
The question basically is:
How is a single qubit in the state $\frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$ any different from a classical coin spinning in the air (on being tossed)?
The one-word answer for the difference between a system of 2 qubits and a system of 2 classical coins is "entanglement". For instance, you cannot have a system of two coins in the state $\frac{1}{\sqrt 2}|00\rangle+\frac{1}{\sqrt 2}|11\rangle$. The reason is simple: when two "fair" coins are spinning in air, there is always some finite probability that the first coin lands heads-up while the second coin lands tails-up, and vice versa. In the combined Bell state $\frac{1}{\sqrt 2}|00\rangle+\frac{1}{\sqrt 2}|11\rangle$ that is not possible. If the first qubit turns out to be $|0\rangle$, the second qubit will necessarily be $|0\rangle$. Similarly, if the first qubit turns out to be $|1\rangle$, the second qubit will necessarily turn out to be $|1\rangle$. At this point someone might point out that if we use $2$ "biased" coins then it might be possible to recreate the combined Bell state. The answer is still no (it's possible to mathematically prove it...try it yourself!). That's because the Bell state cannot be decomposed into a tensor product of two individual qubit states i.e. the two qubits are entangled.
While the reasoning for the 2-qubit case is understandable from there, I'm not sure what fundamental reason distinguishes a single qubit from a single "fair" coin spinning in the air.
This answer by @Jay Gambetta somewhat gets at it (but is still not satisfactory):
This is a good question and in my view gets at the heart of a qubit. Like the comment by @blue, it's not that it can be an equal superposition as this is the same as a classical probability distribution. It is that it can have negative signs.
Take this example. Imagine you have a bit in the $0$ state and then you apply a coin flipping operation by some stochastic matrix $\begin{bmatrix}0.5 & 0.5 \\0.5 & 0.5 \end{bmatrix}$ this will make a classical mixture. If you apply this twice it will still be a classical mixture.
Now let's go to the quantum case and start with a qubit in the $0$ state and apply a coin flipping operation by some unitary matrix $\begin{bmatrix}\sqrt{0.5} & \sqrt{0.5} \\\sqrt{0.5} & -\sqrt{0.5} \end{bmatrix}$. This makes an equal superposition and you get random outcomes like above. Now applying this twice you get back the state you started with. The negative sign cancels due to interference which cannot be explained by probability theory.
Extending this to n qubits gives you a theory that has an exponential that we can't find efficient ways to simulate.
This is not just my view. I have seen it shown in talks of Scott Aaronson and I think it's best to say quantum is like "Probability theory with Minus Signs" (this is a quote I've seen Scott make).
I'm not exactly sure how they're getting the unitary matrix $\begin{bmatrix}\sqrt{0.5} & \sqrt{0.5} \\\sqrt{0.5} & -\sqrt{0.5} \end{bmatrix}$ and what the motivation behind that is. Also, they say: "The negative sign cancels due to interference which can not be explained by probability theory." The way they've used the word interference seems very vague to me. It would be useful if someone can elaborate on the logic used in that answer and explain what they actually mean by interference and why exactly it cannot be explained by classical probability. Is it some extension of Bell's inequality for 1-qubit systems (doesn't seem so based on my conversations with the folks in the main chat though)?
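For concreteness, here is a small numpy sketch of the doubled-application comparison from the quoted answer (using exactly the stochastic and unitary matrices given there). It reproduces the stated outcomes, though it still leaves open what exactly earns the name "interference":

```python
import numpy as np

S = np.array([[0.5, 0.5],
              [0.5, 0.5]])                  # classical "coin flip" (stochastic matrix)
U = np.sqrt(0.5) * np.array([[1.0,  1.0],
                             [1.0, -1.0]])  # quantum "coin flip" (unitary, Hadamard)

p0   = np.array([1.0, 0.0])                 # classical bit known to be 0
psi0 = np.array([1.0, 0.0])                 # qubit in state |0>

print(S @ p0)                       # [0.5 0.5] -> one flip: uniform mixture
print(S @ S @ p0)                   # [0.5 0.5] -> two flips: still a uniform mixture

print(np.abs(U @ psi0) ** 2)        # [0.5 0.5] -> one "flip": same 50:50 statistics
print(np.abs(U @ U @ psi0) ** 2)    # [1. 0.]   -> two "flips": back to |0> with certainty
```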
physical-qubit quantum-state
Sanchayan Dutta
How is a single qubit in the state $\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)$ any different from a classical coin spinning in the air (on being tossed)?
For both of them, the probability of getting heads is 1/2 and getting tails is also 1/2 (we can assume that heads$\equiv|1\rangle$ and tails$\equiv|0\rangle$ and that we are "measuring" in the heads-tails basis).
For any 1-qubit state $|\psi\rangle$, if all you do is measure it in the computational basis, you will always be able to explain it in terms of a probability distribution p(heads)$=|\langle 0|\psi\rangle|^2$ and p(tails)$=|\langle 1|\psi\rangle|^2$. The key differences are in using different bases and/or performing unitary evolutions.
The classic example is the Mach-Zehnder interferometer. Think of it this way: any 1-bit probabilistic operation is described by a $2\times 2$ stochastic matrix (i.e. all columns sum to 1). Call it $P$. It is easy enough to show that there is no $P$ such that $P^2=X$, where $X$ is the Pauli matrix (in other words, a NOT gate). Thus, there is no probabilistic gate that can be considered the square-root of NOT. On the other hand, we can build such a device. A half-silvered mirror performs the square-root of not action.
A half-silvered mirror has two inputs (labelled 0 and 1) and two outputs (also labelled 0 and 1). Each input is a photon coming in a different direction, and it is either reflected or transmitted. If you just look at one half-silvered mirror, then whatever input you give, the output is 50:50 reflected or transmitted. It seems just like the coin you're talking about. However, if you put two of them together, if you input 0, you always get the output 1, and vice versa. The only way to explain this is with probability amplitudes, and a transition matrix that looks like $$ U=\frac{1}{\sqrt{2}}\left(\begin{array}{cc} 1 & i \\ i & 1 \end{array}\right). $$ In quantum mechanics, the square-root of not gate exists.
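A quick numerical check of this (a minimal numpy sketch; $U$ is the matrix above and $X$ is the Pauli/NOT matrix): one mirror gives 50:50 statistics, two mirrors send input 0 to output 1 with certainty, and $U\cdot U$ equals $X$ up to an unobservable global phase, something no nonnegative stochastic matrix can reproduce.

```python
import numpy as np

U = np.sqrt(0.5) * np.array([[1, 1j],
                             [1j, 1]])   # half-silvered mirror (beam splitter)
X = np.array([[0, 1],
              [1, 0]])                   # NOT gate

psi = np.array([1.0, 0.0])               # photon enters input 0
print(np.abs(U @ psi) ** 2)              # [0.5 0.5] -> one mirror: 50:50
print(np.abs(U @ U @ psi) ** 2)          # [0. 1.]   -> two mirrors: always output 1

print(np.allclose(U @ U, 1j * X))        # True: U^2 = i*X, i.e. X up to a global phase

P = np.array([[0.5, 0.5],                # fair-coin stochastic matrix, for contrast
              [0.5, 0.5]])
print(P @ P)                             # equals P again: still a fair coin
```

The factors of $i$ (more generally, amplitudes that are not nonnegative reals) are exactly what a stochastic matrix cannot carry, which is why no classical coin-flipping operation squares to NOT.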
It's hard to pin down a why. I'm just using it to show very explicitly that there is a difference. And, more to the point, to show that classical is insufficient to describe what actually happens in the experiment. So, you need a broader formalism. The idea of probability amplitudes gives you that broader formalism. It's like if you restrict to real numbers only, the square root is not well defined, because you need complex numbers to be able to explain it. That's basically what we're doing here. – DaftWullie Jun 25 '18 at 8:00
I get the transition matrix by saying that for one use, you get 50:50 outputs, so all 4 matrix elements must have a mod-square equal to 1/2. What $2\times 2$ complex matrices $U$ are there, satisfying that constraint, such that $U\cdot U=X$? Any answer will do. – DaftWullie Jun 25 '18 at 8:02
What do you mean by "fundamental"? Mathematically, it's because we have to describe quantum mechanics using a richer mathematical structure than classical (as proven by this square root of not, not that this gate is particularly special: you can replace the X by any stochastic matrix with a negative eigenvalue). In terms of physics, well physics is just the working theory that describes experimental outcomes (such as square root of not). In terms of some underlying explanation of why the world is the way that it is, who knows? – DaftWullie Jun 25 '18 at 8:27
You might also be interested in the Kochen-Specker Theorem. It only applies to qutrits and higher, but may help to cover what you want. – DaftWullie Jun 25 '18 at 8:31
The main problem in this answer is that, although the transition matrix in the case you mentioned i.e. $$U=\frac{1}{\sqrt{2}}\left(\begin{array}{cc} 1 & i \\ i & 1 \end{array}\right)$$ doesn't occur in the classical case, it doesn't mean the effects of the transition cannot be replicated for a classical coin. After all, the transition matrix cannot be measured directly. All we can measure is the outcome probabilities! – Sanchayan Dutta Jun 26 '18 at 17:42
The analogy between qubits and coin flips is popular but can be misleading. (See, for example, this video: https://www.youtube.com/watch?v=lypnkNm0B4A) A coin spinning in the air and landing on the ground is not truly random, though we may describe it as such. The key point is how you measure it.
At any point in time the coin has a definite orientation, though it may be unknown to us. Likewise, qubits have a definite state at any time, which we can describe by a point on the surface of a sphere (the so-called Bloch sphere). Mathematically, a coin's orientation and a qubit's state are equivalent. While in the air, the coin may undergo deterministic and reversible motion (e.g., spinning and falling). Likewise, prior to measurement a qubit may undergo deterministic and reversible transformations (e.g., unitary gate operations on a quantum computer).
Measurement represents an irreversible process. For a coin, it is a series of inelastic collisions with the ground, bouncing and spinning until it comes to rest. If we are completely ignorant of the initial conditions of the coin, the two final orientations (heads or tails) will appear equally likely, but this is not always the case. If I drop it oriented "heads up" from a short height, it will land flat with "heads up" with near certainty. But suppose I was standing next to a large magnetic wall and did this. The coin would hit edge-on and would likely land with either heads or tails showing, with equal probability. One could imagine doing this experiment with various initial orientations of the coin and orientations of the magnetic wall (upright, flat, slanted, etc.). You can imagine that the probability of getting heads or tails will be different, depending on the relative orientations of the coin and wall. (In theory it's all completely deterministic, but in practice we never know the initial conditions that precisely.)
Measurements of qubits are quite similar. I can prepare a qubit in the state $\frac{1}{\sqrt{2}}[|0\rangle + |1\rangle]$, measure it in the 0/1 basis $\{|0\rangle, |1\rangle\}$, and get either $|0\rangle$ or $|1\rangle$ with equal probability. If, however, I measure in the +/- basis $\{|+\rangle, |-\rangle\}$ (analogous to using a magnetic wall), I get $|+\rangle$ with near certainty. (I say "near certainty" because, well, nothing in the real world is perfect.) Here, $|\pm\rangle = \frac{1}{\sqrt{2}}[|0\rangle\pm|1\rangle]$ are the +/- basis states. For polarized photons, for example, this could be done using polarization filters rotated $45^\circ$.
The difference between preparing the state $\frac{1}{\sqrt{2}}[|0\rangle + |1\rangle]$ and the state $\frac{1}{\sqrt{2}}[|0\rangle - |1\rangle]$ is the difference between preparing a vertically oriented coin with either heads or tails facing away from the wall. (A good picture would really help here.) We can tell which of the two states is prepared based on the outcome of a suitably chosen measurement, which in this case would be a +/- basis (or magnetic wall) measurement.
Jay Gambetta mentions a unitary matrix that is used to represent a Hadamard gate. It corresponds to rotating a coin by $90^\circ$, so a coin that's initially heads up becomes vertically oriented with, say, heads facing away from the wall. If the wall is magnetic and you release the coin, it will stick to it with heads up. If, instead, you started with a coin that's tails up and applied the same rotation, it would be vertical with tails facing away from the wall. If you release it (and the wall is still magnetic), you get tails. On the other hand, if the wall is not magnetic and you drop it, it lands heads or tails with equal probability. Using a "floor" measurement doesn't distinguish between the two vertical orientations, but using a "wall" measurement does. It's not so much whether things are predictable or not, it's the type of measurement you do that distinguishes one quantum state (or coin orientation) from another.
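To make the two kinds of measurement concrete, here is a minimal numpy sketch (the "floor" measurement is the 0/1 basis, the "wall" measurement is the +/- basis, and the state is the one produced by the Hadamard rotation described above):

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus  = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

psi = plus                                          # state (|0> + |1>)/sqrt(2)

# "Floor" measurement (0/1 basis): genuinely 50:50
print(abs(ket0 @ psi) ** 2, abs(ket1 @ psi) ** 2)   # ~0.5 ~0.5

# "Wall" measurement (+/- basis): |+> with certainty
print(abs(plus @ psi) ** 2, abs(minus @ psi) ** 2)  # ~1.0 ~0.0
```

The same prepared state gives a deterministic answer in one basis and coin-flip statistics in the other; which behaviour you see is set by the measurement you choose, not by hidden randomness in the preparation.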
This is the whole of it. The only remaining mystery is that the outcome of the coin measurement is considered to be, in theory, completely deterministic, while that of the qubit is considered to be, except in special cases, "intrinsically random." But that's another discussion...
Brian R. La Cour
Could you also add an explanation for Jay Gambetta's approach using transition matrix (which he/she apparently justifies using quantum interference)? – Sanchayan Dutta Jun 24 '18 at 15:32
And to summarize, your main point is that while for a coin if we know the initial conditions sufficiently precisely, then the outcome of a measurement is completely predictable. But for a qubit, simply knowing the initial state sufficiently precisely isn't sufficient to predict the outcome of a measurement (which is essentially what the Copenhagen interpretation says). Yes? – Sanchayan Dutta Jun 24 '18 at 15:41
@Blue, please see my recent edits for an answer to your question. – Brian R. La Cour Jun 24 '18 at 17:04
You have already mentioned the practical differences, such as qubit entanglement, and the negative signs (or more general "phases").
The fundamental reason for this is that allowed quantum states are solutions of the Schrödinger equation, which is a linear differential equation. The sum of solutions to a linear differential equation is always also a solution to that differential equation [1],[2],[3]. Since "solution to the differential equation" is synonymous with "allowed quantum state" or "allowed wavefunction", any sum of allowed states is also allowed (i.e. superpositions like Bell states are allowed).
That is the fundamental reason why quantum mechanical bits (qubits) can exist in superpositions. In fact, not just any sum, but any linear combination of states is an allowed state because the differential equation is linear. This means we can even add constants (phases of -1 or +1 or $e^{i\theta}$) and still have allowed states.
Bits that follow the rules of quantum physics, for example, the Schrödinger equation, can physically exist in superpositions and with phases, due to linearity (review vector spaces if this is not clear). Classical physics does not give any mechanism for a system to be in more than one state at the same time.
@Blue: I saw you had a conversation with someone where that person mentioned the need for "boundary conditions". It is not really true, qubits can exist in superposition and with phases because of linearity of the equation describing them. I have given 3 links which prove this fact in many ways. – user1271772 Jun 24 '18 at 18:01
Boundary conditions are what ensure that the states are discrete. The Schrodinger equation by itself cannot posit that. Moreover, the superposition that you speak of is certainly not something intrinsic to quantum mechanics. For instance, a classical system can be in a harmonic motion which is a superposition of two individual harmonic motions (basis motions), satisfying a certain ODE. – Sanchayan Dutta Jun 24 '18 at 20:23
"Classical physics does not give any mechanism for a system to be in more than one state at the same time." <--- being in more than one state at the same instant $\neq$ being in a superposition of two basis states. – Sanchayan Dutta Jun 24 '18 at 20:25
Cook–Levin theorem
In computational complexity theory, the Cook–Levin theorem, also known as Cook's theorem, states that the Boolean satisfiability problem is NP-complete. That is, it is in NP, and any problem in NP can be reduced in polynomial time by a deterministic Turing machine to the Boolean satisfiability problem.
The theorem is named after Stephen Cook and Leonid Levin.
An important consequence of this theorem is that if there exists a deterministic polynomial-time algorithm for solving Boolean satisfiability, then every NP problem can be solved by a deterministic polynomial-time algorithm. The question of whether such an algorithm for Boolean satisfiability exists is thus equivalent to the P versus NP problem, which is still widely considered the most important unsolved problem in theoretical computer science as of 2023.
Contributions
The concept of NP-completeness was developed in the late 1960s and early 1970s in parallel by researchers in North America and the USSR. In 1971, Stephen Cook published his paper "The complexity of theorem proving procedures"[1] in conference proceedings of the newly founded ACM Symposium on Theory of Computing. Richard Karp's subsequent paper, "Reducibility among combinatorial problems",[2] generated renewed interest in Cook's paper by providing a list of 21 NP-complete problems. Cook and Karp each received a Turing Award for this work.
The theoretical interest in NP-completeness was also enhanced by the work of Theodore P. Baker, John Gill, and Robert Solovay who showed, in 1975, that solving NP-problems in oracle machine models requires exponential time. That is, there exists an oracle A such that, for all subexponential deterministic-time complexity classes T, the relativized complexity class NPA is not a subset of TA. In particular, for this oracle, PA ≠ NPA.[3]
In the USSR, a result equivalent to Baker, Gill, and Solovay's was published in 1969 by M. Dekhtiar.[4] Later Leonid Levin's paper, "Universal search problems",[5] was published in 1973, although it was mentioned in talks and submitted for publication a few years earlier.
Levin's approach was slightly different from Cook's and Karp's in that he considered search problems, which require finding solutions rather than simply determining existence. He provided six such NP-complete search problems, or universal problems. Additionally he found for each of these problems an algorithm that solves it in optimal time (in particular, these algorithms run in polynomial time if and only if P = NP).
Definitions
A decision problem is in NP if it can be decided by a non-deterministic Turing machine in polynomial time.
An instance of the Boolean satisfiability problem is a Boolean expression that combines Boolean variables using Boolean operators. Such an expression is satisfiable if there is some assignment of truth values to the variables that makes the entire expression true.
Idea
Given any decision problem in NP, construct a non-deterministic machine that solves it in polynomial time. Then for each input to that machine, build a Boolean expression that encodes whether, when that specific input is passed to the machine, the machine runs correctly, halts, and answers "yes". Then the expression can be satisfied if and only if there is a way for the machine to run correctly and answer "yes", so the satisfiability of the constructed expression is equivalent to asking whether or not the machine will answer "yes".
Proof
This proof is based on the one given by Garey and Johnson.[6]
There are two parts to proving that the Boolean satisfiability problem (SAT) is NP-complete. One is to show that SAT is an NP problem. The other is to show that every NP problem can be reduced to an instance of a SAT problem by a polynomial-time many-one reduction.
SAT is in NP because any assignment of Boolean values to Boolean variables that is claimed to satisfy the given expression can be verified in polynomial time by a deterministic Turing machine. (The statements verifiable in polynomial time by a deterministic Turing machine and solvable in polynomial time by a non-deterministic Turing machine are equivalent, and the proof can be found in many textbooks, for example Sipser's Introduction to the Theory of Computation, section 7.3., as well as in the Wikipedia article on NP).
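The polynomial-time verification half is easy to make concrete. The following is a minimal illustrative sketch (not taken from the cited sources): a CNF formula is a list of clauses, each clause a list of signed integers in the DIMACS style, and a claimed satisfying assignment is checked in time linear in the size of the formula.

```python
def verifies(cnf, assignment):
    """Check a claimed satisfying assignment against a CNF formula.

    cnf: list of clauses; each clause is a list of nonzero integers,
         where literal v means "variable v is true" and -v means "false".
    assignment: dict mapping each variable to True/False.
    Runs in time linear in the formula size, hence polynomial.
    """
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in cnf
    )

# (x1 OR NOT x2) AND (x2 OR x3)
cnf = [[1, -2], [2, 3]]
print(verifies(cnf, {1: True, 2: True, 3: False}))    # True
print(verifies(cnf, {1: False, 2: True, 3: False}))   # False (first clause fails)
```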
Now suppose that a given problem in NP can be solved by the nondeterministic Turing machine $M=(Q,\Sigma ,s,F,\delta )$, where $Q$ is the set of states, $\Sigma $ is the alphabet of tape symbols, $s\in Q$ is the initial state, $F\subseteq Q$ is the set of accepting states, and $\delta \subseteq ((Q\setminus F)\times \Sigma )\times (Q\times \Sigma \times \{-1,+1\})$ is the transition relation. Suppose further that $M$ accepts or rejects an instance of the problem after at most $p(n)$ computation steps, where $n$ is the size of the instance and $p$ is a polynomial function.
For each input, $I$, specify a Boolean expression $B$ that is satisfiable if and only if the machine $M$ accepts $I$.
The Boolean expression uses the variables set out in the following table. Here, $q\in Q$ is a machine state, $-p(n)\leq i\leq p(n)$ is a tape position, $j\in \Sigma $ is a tape symbol, and $0\leq k\leq p(n)$ is the number of a computation step.
Variables Intended interpretation How many?[7]
$T_{i,j,k}$ True if tape cell $i$ contains symbol $j$ at step $k$ of the computation. $O(p(n)^{2})$
$H_{i,k}$ True if $M$'s read/write head is at tape cell $i$ at step $k$ of the computation. $O(p(n)^{2})$
$Q_{q,k}$ True if $M$ is in state $q$ at step $k$ of the computation. $O(p(n))$
Define the Boolean expression $B$ to be the conjunction of the sub-expressions in the following table, for all $-p(n)\leq i\leq p(n)$ and $0\leq k\leq p(n)$:
Expression Conditions Interpretation How many?
$T_{i,j,0}$ Tape cell $i$ initially contains symbol $j$ Initial contents of the tape. For $i>n-1$ and $i<0$, outside of the actual input $I$, the initial symbol is the special default/blank symbol. $O(p(n))$
$Q_{s,0}$ Initial state of $M$. 1
$H_{0,0}$ Initial position of read/write head. 1
$\neg T_{i,j,k}\lor \neg T_{i,j',k}$ $j\neq j'$ At most one symbol per tape cell. $O(p(n)^{2})$
$\bigvee _{j\in \Sigma }T_{i,j,k}$ At least one symbol per tape cell. $O(p(n)^{2})$
$T_{i,j,k}\land T_{i,j',k+1}\rightarrow H_{i,k}$ $j\neq j'$ Tape remains unchanged unless written by head. $O(p(n)^{2})$
$\lnot Q_{q,k}\lor \lnot Q_{q',k}$ $q\neq q'$ Only one state at a time. $O(p(n))$
$\lnot H_{i,k}\lor \lnot H_{i',k}$ $i\neq i'$ Only one head position at a time. $O(p(n)^{3})$
${\begin{array}{l}(H_{i,k}\land Q_{q,k}\land T_{i,\sigma ,k})\to \\\bigvee _{((q,\sigma ),(q',\sigma ',d))\in \delta }(H_{i+d,\ k+1}\land Q_{q',\ k+1}\land T_{i,\ \sigma ',\ k+1})\end{array}}$ $k<p(n)$ Possible transitions at computation step $k$ when head is at position $i$. $O(p(n)^{2})$
$\bigvee _{0\leq k\leq p(n)}\bigvee _{f\in F}Q_{f,k}$ Must finish in an accepting state, not later than in step $p(n)$. 1
If there is an accepting computation for $M$ on input $I$, then $B$ is satisfiable by assigning $T_{i,j,k}$, $H_{i,k}$ and $Q_{q,k}$ their intended interpretations. On the other hand, if $B$ is satisfiable, then there is an accepting computation for $M$ on input $I$ that follows the steps indicated by the assignments to the variables.
There are $O(p(n)^{2})$ Boolean variables, each encodeable in space $O(\log p(n))$. The number of clauses is $O(p(n)^{3})$[8] so the size of $B$ is $O(\log(p(n))p(n)^{3})$. Thus the transformation is certainly a polynomial-time many-one reduction, as required.
Only the first table row ($T_{i,j,0}$) actually depends on the input string $I$. The remaining lines depend only on the input length $n$ and on the machine $M$; they formalize a generic computation of $M$ for up to $p(n)$ steps.
The transformation makes extensive use of the polynomial $p(n)$. As a consequence, the above proof is not constructive: even if $M$ is known, witnessing the membership of the given problem in NP, the transformation cannot be effectively computed, unless an upper bound $p(n)$ of $M$'s time complexity is also known.
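As an illustration of the bookkeeping in this construction, the following sketch (toy sizes and names only; it is not part of the original proof) enumerates the variables $T_{i,j,k}$, $H_{i,k}$, $Q_{q,k}$ and one representative clause family, "at most one symbol per tape cell", whose size grows as $O(p(n)^{2})$.

```python
from itertools import product

def tableau_variables(states, symbols, p):
    """Assign integer ids to the tableau variables T, H, Q for time bound p = p(n)."""
    var_id = {}
    cells = range(-p, p + 1)                 # tape positions -p(n) .. p(n)
    steps = range(0, p + 1)                  # computation steps 0 .. p(n)
    for i, j, k in product(cells, symbols, steps):
        var_id[('T', i, j, k)] = len(var_id) + 1   # cell i holds symbol j at step k
    for i, k in product(cells, steps):
        var_id[('H', i, k)] = len(var_id) + 1      # head is at cell i at step k
    for q, k in product(states, steps):
        var_id[('Q', q, k)] = len(var_id) + 1      # machine is in state q at step k
    return var_id

def at_most_one_symbol_clauses(var_id, symbols, p):
    """Clauses (not T_{i,j,k} or not T_{i,j',k}) for j != j', as signed-integer lists."""
    clauses = []
    syms = list(symbols)
    for i in range(-p, p + 1):
        for k in range(0, p + 1):
            for a in range(len(syms)):
                for b in range(a + 1, len(syms)):
                    clauses.append([-var_id[('T', i, syms[a], k)],
                                    -var_id[('T', i, syms[b], k)]])
    return clauses

# Toy sizes, just to see how the counts scale with p(n).
states, symbols, p = ['s', 'f'], ['0', '1', '_'], 4
V = tableau_variables(states, symbols, p)
C = at_most_one_symbol_clauses(V, symbols, p)
print(len(V), 'variables,', len(C), 'clauses in this family')   # 190 variables, 135 clauses
```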
Complexity
While the above method encodes a non-deterministic Turing machine in complexity $O(\log(p(n))p(n)^{3})$, the literature describes more sophisticated approaches in complexity $O(p(n)\log(p(n)))$.[9][10][11][12][13] The quasilinear result first appeared seven years after Cook's original publication.
The use of SAT to prove the existence of an NP-complete problem can be extended to other computational problems in logic, and to completeness for other complexity classes. The quantified Boolean formula problem (QBF) involves Boolean formulas extended to include nested universal quantifiers and existential quantifiers for its variables. The QBF problem can be used to encode computation with a Turing machine limited to polynomial space complexity, proving that there exists a problem (the recognition of true quantified Boolean formulas) that is PSPACE-complete. Analogously, dependency quantified boolean formulas encode computation with a Turing machine limited to logarithmic space complexity, proving that there exists a problem that is NL-complete.[14][15]
Consequences
The proof shows that every problem in NP can be reduced in polynomial time (in fact, logarithmic space suffices) to an instance of the Boolean satisfiability problem. This means that if the Boolean satisfiability problem could be solved in polynomial time by a deterministic Turing machine, then all problems in NP could be solved in polynomial time, and so the complexity class NP would be equal to the complexity class P.
The significance of NP-completeness was made clear by the publication in 1972 of Richard Karp's landmark paper, "Reducibility among combinatorial problems", in which he showed that 21 diverse combinatorial and graph theoretical problems, each infamous for its intractability, are NP-complete.[2]
Karp showed each of his problems to be NP-complete by reducing another problem (already shown to be NP-complete) to that problem. For example, he showed the problem 3SAT (the Boolean satisfiability problem for expressions in conjunctive normal form (CNF) with exactly three variables or negations of variables per clause) to be NP-complete by showing how to reduce (in polynomial time) any instance of SAT to an equivalent instance of 3SAT.[16]
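The clause-splitting step mentioned here (and spelled out in note 16 below) is straightforward to mechanize. A minimal sketch, with variables as positive integers, negation by sign, and fresh auxiliary variables numbered above num_vars:

```python
def to_3cnf(cnf, num_vars):
    """Convert a CNF formula to an equisatisfiable 3-CNF formula.

    Long clauses are split with fresh auxiliary variables; short clauses are
    padded by repeating a literal, e.g. (A or B) -> (A or B or B).
    Returns the new clause list and the highest variable index used.
    """
    out = []
    next_var = num_vars
    for clause in cnf:
        lits = list(clause)
        if len(lits) <= 3:
            while len(lits) < 3:
                lits.append(lits[-1])        # pad by repeating a literal
            out.append(lits)
            continue
        # (l1 or l2 or z1) and (not z1 or l3 or z2) and ... and (not z_m or l_{k-1} or l_k)
        next_var += 1
        out.append([lits[0], lits[1], next_var])
        for lit in lits[2:-2]:
            out.append([-next_var, lit, next_var + 1])
            next_var += 1
        out.append([-next_var, lits[-2], lits[-1]])
    return out, next_var

# (A or B or C or D), with A..D numbered 1..4, becomes (A or B or Z) and (not Z or C or D):
print(to_3cnf([[1, 2, 3, 4]], 4))   # ([[1, 2, 5], [-5, 3, 4]], 5)
```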
Garey and Johnson presented more than 300 NP-complete problems in their book Computers and Intractability: A Guide to the Theory of NP-Completeness,[6] and new problems are still being discovered to be within that complexity class.
Although many practical instances of SAT can be solved by heuristic methods, the question of whether there is a deterministic polynomial-time algorithm for SAT (and consequently all other NP-complete problems) is still a famous unsolved problem, despite decades of intense effort by complexity theorists, mathematical logicians, and others. For more details, see the article P versus NP problem.
References
1. Cook, Stephen (1971). "The complexity of theorem proving procedures". Proceedings of the Third Annual ACM Symposium on Theory of Computing. pp. 151–158. doi:10.1145/800157.805047. ISBN 9781450374644. S2CID 7573663.
2. Karp, Richard M. (1972). "Reducibility Among Combinatorial Problems" (PDF). In Raymond E. Miller; James W. Thatcher (eds.). Complexity of Computer Computations. New York: Plenum. pp. 85–103. ISBN 0-306-30707-3.
3. T. P. Baker; J. Gill; R. Solovay (1975). "Relativizations of the P = NP question". SIAM Journal on Computing. 4 (4): 431–442. doi:10.1137/0204037.
4. Dekhtiar, M. (1969). "On the impossibility of eliminating exhaustive search in computing a function relative to its graph". Proceedings of the USSR Academy of Sciences (in Russian). 14: 1146–1148.
5. Levin, Leonid (1973). "Универсальные задачи перебора" [Universal search problems]. Problems of Information Transmission (in Russian). 9 (3): 115–116. Translated into English by Trakhtenbrot, B. A. (1984). "A survey of Russian approaches to perebor (brute-force searches) algorithms". Annals of the History of Computing. 6 (4): 384–400. doi:10.1109/MAHC.1984.10036. S2CID 950581. Translation see appendix, p.399-400.
6. Garey, Michael R.; Johnson, David S. (1979). Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman. ISBN 0-7167-1045-5.
7. This column uses the big O notation.
8. The number of literals in each clause does not depend on $n$, except for the last table row, which leads to a clause with $O(p(n))$ literals.
9. Claus-Peter Schnorr (Jan 1978). "Satisfiability is quasilinear complete in NQL" (PDF). Journal of the ACM. 25 (1): 136–145. doi:10.1145/322047.322060. S2CID 1929802.
10. Nicholas Pippenger and Michael J. Fischer (Apr 1979). "Relations among complexity measures" (PDF). Journal of the ACM. 26 (2): 361–381. doi:10.1145/322123.322138. S2CID 2432526.
11. John Michael Robson (Feb 1979). A new proof of the NP completeness of satisfiability. Proceedings of the 2nd Australian Computer Science Conference. pp. 62–70.
12. John Michael Robson (May 1991). "An $O(T\log T)$ reduction from RAM computations to satisfiability". Theoretical Computer Science. 82 (1): 141–149. doi:10.1016/0304-3975(91)90177-4.
13. Stephen A. Cook (Jan 1988). "Short propositional formulas represent nondeterministic computations" (PDF). Information Processing Letters. 26 (5): 269–270. doi:10.1016/0020-0190(88)90152-4.
14. Gary L. Peterson; John H. Reif (1979). "Multiple-person alternation". In Ronald V. Book; Paul Young (eds.). Proc. 20th Annual Symposium on Foundations of Computer Science (SFCS). IEEE. pp. 348–363.
15. Gary Peterson; John Reif; Salman Azhar (Apr 2001). "Lower bounds for multiplayer noncooperative games of incomplete information". Computers & Mathematics with Applications. 41 (7–8): 957–992. doi:10.1016/S0898-1221(00)00333-3.
16. First modify the proof of the Cook–Levin theorem, so that the resulting formula is in conjunctive normal form, then introduce new variables to split clauses with more than 3 atoms. For example, the clause $(A\lor B\lor C\lor D)$ can be replaced by the conjunction of clauses $(A\lor B\lor Z)\land (\lnot Z\lor C\lor D)$, where $Z$ is a new variable that will not be used anywhere else in the expression. Clauses with fewer than three atoms can be padded; for example, $(A\lor B)$ can be replaced by $(A\lor B\lor B)$.
Submitted Abstracts
There are 146 abstracts
Dynamical Processes At Vertical Current Sheets Behind Erupting Flux Ropes
Author(s): Rui Liu
Institution(s): University of Science and Technology of China
We report in this presentation two solar eruptive events, in both of which a vertical current sheet (VCS) is detected in the wake of the erupting flux rope in the SDO/AIA 131 Å passband. Plasma blobs are observed to move along the VCS bidirectionally. In the 2011 March 14 event, the VCS is observed after a nonthermal hard X-ray (HXR) burst which is due to a loop-loop interaction leading up to the ejection of the flux rope, while in the 2012 July 19 event, the VCS is observed following the impulsive acceleration of the erupting flux rope but prior to the onset of a nonthermal HXR/microwave burst. The initial, slow acceleration of the erupting structure is associated with the slow elevation of a thermal looptop HXR source and the subsequent, impulsive acceleration is associated with the downward motion of the looptop source, the latter of which therefore reflects a catastrophic release of magnetic free energy in the corona. However, the poor temporal correlation between VCSs and nonthermal HXR/microwave bursts suggests that neither the VCS nor the magnetic islands (i.e., the blobs) in the tearing mode is the primary accelerator for nonthermal electrons emitting HXRs/microwaves. In the 2012 July 19 event, we find that the blobs moving downward within the VCS into the cusp region and the flare loops retracting from the cusp region make a continuous process, with the former apparently initiating the latter. This provides a 3D perspective on reconnections at the VCS and implies a transportation of magnetic twist to the lower atmosphere via Alfvén waves. We also identify a dark void which moves within the VCS toward the flare arcade, which suggests that dark voids observed in supra-arcade downflows are also magnetic islands formed within the VCS in the high corona and move with the downward reconnection outflow.
Bi-directional Ejections and Loop Contractions in an Eruptive M7.7 Solar Flare: Evidence of Particle Acceleration and Heating in Magnetic Reconnection Outflows
Author(s): Wei Liu, Qingrong Chen, Vahe Petrosian
Institution(s): (1) Lockheed Martin Solar and Astrophysics Laboratory; (2) Hansen Experimental Physics Laboratory, Stanford University; (3) Department of Physics, Stanford University
Where particle acceleration and plasma heating take place in relation to magnetic reconnection is a fundamental question for solar flares. We report analysis of an M7.7 flare on 2012 July 19 observed by SDO/AIA and RHESSI. Bi-directional ejections in forms of plasmoids and contracting cusp-shaped loops originate between an erupting flux rope and underlying flare loops at speeds of typically 200-300 km/s up to 1050 km/s. These ejections are associated with spatially separated double coronal X-ray sources with centroid separation decreasing with energy. The highest temperature is located near the nonthermal X-ray loop-top source well below the original heights of contracting cusps near the inferred reconnection site. These observations suggest that the primary loci of particle acceleration and plasma heating are in the reconnection outflow regions, rather than the reconnection site itself. This supports particle acceleration by turbulence, shocks, and/or collapsing traps associated with reconnection outflows, not by a DC electric field within the reconnection region. In addition, there is an initial ascent of the X-ray and EUV loop-top source prior to its recently recognized descent. The impulsive phase onset is delayed by 10 minutes from the start of the descent, but coincides with the rapid speed increases of the upward plasmoids, the individual loop shrinkages, and the overall loop-top descent, suggestive of an intimate relation of the energy release rate and reconnection outflow speed.
The Coronal Pulse Identification and Tracking Algorithm (CorPITA)
Author(s): Long, David M. (1), Bloomfield, D. Shaun (2), Feeney-Barry, R. (2), Gallagher, Peter T. (2), Pérez-Suárez, David (2,3)
Institution(s): (1) Mullard Space Science Laboratory, University College London, Holmbury St. Mary, Dorking, Surrey, RH5 6NT, UK. (2) School of Physics, Trinity College Dublin, Dublin 2, Ireland, (3) Finnish Meteorological Institute, POB 503, 00101 Helsinki, Finland
The Coronal Pulse Identification and Tracking Algorithm (CorPITA) is an automated technique for detecting and analysing "EIT Waves" in data from the Solar Dynamics Observatory (SDO) spacecraft. CorPITA will operate as part of the Heliophysics Event Knowledgebase (HEK), providing unbiased, near-real-time identification of coronal pulses. When triggered by the start of a solar flare, the algorithm uses an intensity profile technique radiating from the source of the flare to examine the entire solar disk. If a pulse is identified, the kinematics and morphological variation of the pulse are determined for all directions along the solar surface. Here, CorPITA is applied to a test data-set encompassing a series of solar flares of different classes from 13-20 February 2011. This allows the effectiveness of the algorithm in dealing with the varied morphology of different eruptions to be characterised. The automated nature of this approach will enable an unbiased examination of "EIT Waves" and their relationship to coronal mass ejections.
Measuring the magnetic field strength of the quiet solar corona using "EIT waves"
Author(s): Long, David M. (1), Williams, David R. (1), Régnier, Stéphane (2), Harra, Louise K. (1)
Institution(s): (1) Mullard Space Science Laboratory, University College London, Holmbury St. Mary, Dorking, Surrey, RH5 6NT, UK. (2) Jeremiah Horrocks Institute, University of Central Lancashire, Preston, Lancashire, PR1 2HE, UK
Variations in the propagation of globally-propagating disturbances (commonly called "EIT waves") through the low solar corona offer a unique opportunity to probe the plasma parameters of the solar atmosphere. Here, high-cadence observations of two "EIT wave" events taken using SDO/AIA are combined with spectroscopic measurements from Hinode/EIS and used to examine the variability of the quiet coronal magnetic field strength. The combination of pulse kinematics from AIA and plasma density from EIS is used to show that the magnetic field strength is in the range ~2-6G in the quiet corona. The magnetic field estimates are then used to determine the height of the pulse, allowing a direct comparison with theoretical values obtained from SDO/HMI magnetic field using PFSS and local-domain extrapolations. While local-scale extrapolations predict heights inconsistent with prior measurements, the agreement between observations and the PFSS model indicates that "EIT waves" are a global phenomenon influenced by global-scale magnetic field.
EUV Coronal Holes as a Proxy for Open Magnetic Field Regions
Author(s): Lowder, Chris; Qiu, Jiong; Leamon, Robert
Institution(s): Montana State University, Bozeman, MT
Coronal holes are regions marked by a decreased intensity in the extreme ultraviolet and x-ray wavelengths. Associated with regions of open magnetic field, plasma is allowed to escape along open magnetic field lines resulting in a rarefied plasma below. This study seeks to quantify the relationship between boundaries of coronal holes and open magnetic field. Using a combination of STEREO and SDO data in EUV wavelengths, we can provide a full solar surface map of coronal hole boundaries. These boundaries in conjunction with charts of radial magnetic field can be used to calculate open magnetic fluxes. Direct comparison is made with potential magnetic extrapolations as well as a non-potential, magneto-frictional model. There is strong agreement both in the geometry of regions as well as associated magnetic fluxes. These data provide a unique opportunity to study the far side dynamics of coronal holes and open magnetic field evolution.
Coronal and Solar Wind Ion heating by dispersive Alfven waves -- 2.5D hybrid simulations
Author(s): Maneva, Y. (1,2), Ofman, L. (1,2) and Vinas, A. (2)
Institution(s): (1) Catholic University of America, Washington DC, USA; (2) NASA Goddard Space Flight Center, MD, USA
We perform 2.5D hybrid simulations to model the preferential heating and differential acceleration of minor ions as observed by remote sensing in coronal holes and measured in situ in the fast solar wind at various heliospheric distances. We consider a low-beta plasma consisting of fluid electrons, particle-in-cell protons and He++ ions and different spectra of parallel propagating Alfven-cyclotron waves as initial energy source for the ion heating and acceleration. For fixed low wave-numbers the generated wave spectrum generally shifts towards higher frequencies in multi-species plasma. This effect is further enhanced when differential streaming is present due to the expected preferential acceleration of heavy ions in coronal holes. We use the results from the cold plasma linear theory to initialize the nonlinear 2.5D hybrid simulations and compare the resulting ion heating, temperature anisotropies and differential streaming when the initial wave spectra belongs to the alpha-cyclotron and the proton-cyclotron dispersion branches, with and without initial relative drifts, and study the nonlinear 2D effects, extending our previous 1D hybrid studies. Finally, we investigate the effect of a gradual solar wind expansion, consider its influence on the wave-particle interactions and discuss its implications for non-adiabatic perpendicular cooling for both ion species.
Approach to Integrate Global-Sun Models of Magnetic Flux Emergence and Transport for Space Weather Studies
Author(s): Nagi Nicolas Mansour, NASA, Moffett Field, CA; and A. Wray, P. Mehrotra, C. Henney, N. Arge, C. Manchester, H. Godinez, J. Koller, A. Kosovichev, P. Scherrer, J. Zhao, R. Stein, T. Duvall, and Y. Fan
Institution(s): NASA Ames Research Center
The Sun lies at the center of space weather and is the source of its variability. The primary input to coronal and solar wind models is the activity of the magnetic field in the solar photosphere. Recent advancements in solar observations and numerical simulations provide a basis for developing physics-based models for the dynamics of the magnetic field from the deep convection zone of the Sun to the corona with the goal of providing robust near real-time boundary conditions at the base of space weather forecast models. The goal is to develop new strategic capabilities that enable characterization and prediction of the magnetic field structure and flow dynamics of the Sun by assimilating data from helioseismology and magnetic field observations into physics-based realistic magnetohydrodynamics (MHD) simulations. The integration of first-principle modeling of solar magnetism and flow dynamics with real-time observational data via advanced data assimilation methods is a new, transformative step in space weather research and prediction. This approach will substantially enhance an existing model of magnetic flux distribution and transport developed by the Air Force Research Lab. The development plan is to use the Space Weather Modeling Framework (SWMF) to develop Coupled Models for Emerging flux Simulations (CMES) that couples three existing models: (1) an MHD formulation with the anelastic approximation to simulate the deep convection zone (FSAM code), (2) an MHD formulation with full compressible Navier-Stokes equations and a detailed description of radiative transfer and thermodynamics to simulate near-surface convection and the photosphere (Stagger code), and (3) an MHD formulation with full, compressible Navier-Stokes equations and an approximate description of radiative transfer and heating to simulate the corona (Module in BATS-R-US). CMES will enable simulations of the emergence of magnetic structures from the deep convection zone to the corona. Finally, a plan will be summarized on the development of a Flux Emergence Prediction Tool (FEPT) in which helioseismology-derived data and vector magnetic maps are assimilated into CMES that couples the dynamics of magnetic flux from the deep interior to the corona.
Modeling small-scale flux emergence from the Convection Zone into the Corona
Author(s): Juan Martinez-Sykora
Institution(s): Lockheed Martin Solar & Astrophysics Lab
High-resolution telescopes reveal small-scale flux in the photosphere, and roughly 20% of these events seem to reach and impact the chromosphere. As a result of such flux emergence, reconnection with the ambient field may occur, as may other processes that do not necessarily involve reconnection but nevertheless impact the chromosphere and lower corona. I am going to present recent simulations that show small-scale flux emergence in a computational domain that captures the upper convection zone, photosphere, chromosphere and lower corona. As we will see, small-scale activity is strongly dependent on the physics that dominate in the various layers of the atmosphere, such as thermodynamics, radiative transfer in the photosphere and thermal conduction along field lines in the corona (we use the Bifrost code for this). In addition, small-scale activity is also dependent on the ambient field, which changes rapidly with height both in strength and topology through the different layers of the solar atmosphere. Some of these small-scale events erupt into the atmosphere, destabilizing the pre-existing magnetic field and driving it to new configurations.
Understanding Solar Eruptive Events
Author(s): Mason, James P. (1); Hock, Rachel A. (2); Woods, Thomas N. (1); Thompson, Barbara J. (3); Webb, David F. (4); Caspi, Amir (1)
Institution(s): (1) LASP / University of Colorado, Boulder; (2) Kirtland AFB; (3) NASA GSFC; (4) Boston College
Coronal dimming is studied using data from the EUV Variability Experiment (EVE) and the Atmospheric Imaging Assembly (AIA), both onboard the Solar Dynamics Observatory (SDO). Dimming can be caused by a number of physical processes, including mass loss (e.g. coronal mass ejections), obscuration of bright features (e.g. flaring loops) by dark features (e.g. filament eruptions), global-scale waves, and changes of temperature in the emitting plasma. Each of these processes has unique spectral signatures, which EVE and AIA are well suited to observe. We are building a method for isolating the signature indicative of mass loss, which is thought to be correlated with the kinetics of coronal mass ejections. Our analysis of the M9 flare on August 4, 2011 is shown as an example of all four of these physical processes and their spectral signatures.
Chromospheric Waves and Oscillations in Sunspots
Author(s): R. A. Maurya and J. Chae
Institution(s): Astronomy Program, Seoul National University, Seoul 151-747, Korea
We studied the chromospheric oscillations in and around a sunspot of active region NOAA 11242 using high spectral- and spatial-resolution observations in the spectral lines H$\alpha$ and Ca II 8542\AA~obtained from the Fast Imaging Solar Spectrograph (FISS) of the 1.6-meter New Solar Telescope (NST) at Big Bear Solar Observatory. A suitable bisector method is applied to the spectral observations to construct chromospheric Doppler velocity maps. Time-series analysis of the Doppler maps, in both spectral bands, revealed enhanced high-frequency oscillations inside the umbra of the sunspot. The frequency of the oscillations decreases gradually outward from the umbra. We have found clear evidence of two boundaries for the transition of the peak power frequency, one of which occurs close to the umbral-penumbral boundary, and the other near the penumbral/super-penumbral boundary of the sunspot. The oscillation power is found to be associated with magnetic field strength and inclination, although the relationship differs between frequency bands.
\begin{document}
\newdateformat{mydate}{\THEDAY~\monthname~\THEYEAR}
\title
[2D Euler when velocity grows at infinity]
{Well-posedness of the 2D Euler equations when velocity grows at infinity}
\author{Elaine Cozzi} \address{Department of Mathematics, 368 Kidder Hall, Oregon State University, Corvallis, OR 97330, U.S.A.} \email{[email protected]}
\author{James P Kelliher} \address{Department of Mathematics, University of California, Riverside, USA} \email{[email protected]}
\subjclass[2010]{Primary 35Q35, 35Q92, 39A01}
\keywords{Euler equations, Transport equations}
\date{\today}
\begin{abstract} We prove the uniqueness and finite-time existence of bounded-vorticity solutions to the 2D Euler equations having velocity growing slower than the square root of the distance from the origin, obtaining global existence for more slowly growing velocity fields. We also establish continuous dependence on initial data. \end{abstract}
\maketitle
\tableofcontents
\section{Introduction}\label{S:Introduction}
\noindent In \cite{Serfati1995A}, Ph. Serfati proved the existence and uniqueness of Lagrangian solutions to the 2D Euler equations having bounded vorticity and bounded velocity (for a rigorous proof, see \cite{AKLN2015}). Our goal here is to discover how rapidly the velocity at infinity can grow (keeping the vorticity bounded) and still obtain existence or uniqueness of solutions to the 2D Euler equations.
Serfati's approach in \cite{Serfati1995A} centered around a novel identity he showed held for bounded vorticity, bounded velocity solutions. We write this identity, which we call \textit{Serfati's identity} or the \textit{Serfati identity}, in the form \begin{align}\label{e:SerfatiId}
\begin{split}
u^j(t&) - (u^0)^j
= (a_\ensuremath{\lambda} K^j) *(\omega(t) - \omega^0)
- \int_0^t \pr{\ensuremath{\nabla} \ensuremath{\nabla}^\perp \brac{(1 - a_\ensuremath{\lambda}) K^j}}
\mathop{* \cdot} (u \otimes u)(s) \, ds,
\end{split} \end{align} $j = 1, 2$. Here, $u$ is a divergence-free velocity field, with $\omega = \curl u := \ensuremath{\partial}_1 u^2 - \ensuremath{\partial}_2 u^1$ its vorticity, and $
K(x) := (2 \pi)^{-1} x^\perp \abs{x}^{-2} $ is the Biot-Savart kernel,
$x^\perp := (-x_2, x_1)$. The function $a_\ensuremath{\lambda}$, $\ensuremath{\lambda} > 0$, is a scaled radial cutoff function (see \cref{D:Radial}).
Also, we have used the notation, \begin{align*}
\begin{array}{ll}
v \mathop{* \cdot} w
= v^i * w^i
&\mbox{if $v$ and $w$ are vector fields}, \\
A \mathop{* \cdot} B
= A^{ij} * B^{ij}
&\mbox{if $A$, $B$ are matrix-valued functions on $\ensuremath{\BB{R}}^2$},
\end{array} \end{align*} where $*$ denotes convolution and where repeated indices imply summation.
The Biot-Savart law, $u = K * \omega$, recovers the unique divergence-free vector field $u$ decaying at infinity from its vorticity (scalar curl) $\omega$. It does not apply to bounded vorticity, bounded velocity\xspace solutions---indeed this is the greatest
difficulty to overcome with such solutions---but because of the manner in which $a_\ensuremath{\lambda}$ cuts off the Biot-Savart kernel in \cref{e:SerfatiId}, we see that all the terms in Serfati's identity are finite. In fact, there is room for growth at infinity both in the vorticity and the velocity, though to avoid excessive complications, we only treat growth in the velocity. Because $\ensuremath{\nabla} \ensuremath{\nabla}^\perp \brac{(1 - a_\ensuremath{\lambda}) K^j(x)}$ decays like $\smallabs{x}^{-3}$, we see that as long as $\abs{u(x)}$ grows more slowly than $\smallabs{x}^{1/2}$, all the terms in \cref{e:SerfatiId} will at least be finite.
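For instance, if $\abs{u(y)} \le C(1 + \abs{y})^\ensuremath{\alpha}$ for some $\ensuremath{\alpha} < 1/2$, then for fixed $x$ and $\ensuremath{\lambda}$,
\begin{align*}
	\int_{\abs{x - y} \ge \ensuremath{\lambda}/2}
		\frac{\abs{(u \otimes u)(y)}}{\abs{x - y}^3} \, dy
	\le C \int_{\ensuremath{\lambda}/2}^\ensuremath{\infty}
		\frac{(1 + \abs{x} + \rho)^{2 \ensuremath{\alpha}}}{\rho^3} \, \rho \, d\rho
	< \ensuremath{\infty},
\end{align*}
since $2 \ensuremath{\alpha} - 2 < -1$. For more general velocity fields, this is precisely the role played by the condition $\int_1^\ensuremath{\infty} h^2(s) s^{-2} \, ds < \ensuremath{\infty}$ appearing in \cref{D:GrowthBound} below.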
In brief, we will establish uniqueness of solutions having $o(\smallabs{x}^{1/2})$ growth along with finite-time existence. We will obtain global existence only for much more slowly growing velocity fields.
To give a precise statement of our results, we must first describe the manner in which we prescribe the growth of the velocity field at infinity and give our formulation of a weak solution, including the function spaces in which the solution is to lie.
In what follows, an \textit{increasing function} means nondecreasing; that is, not necessarily \textit{strictly} increasing.
We define four types of increasingly restrictive bounds on the growth of the velocity, as follows: \begin{definition}[Growth bounds]\label{D:GrowthBound}
\begin{enumerate}[(i)]
\item
A \textit{pre-growth bound\xspace} is
a function $h \colon [0, \ensuremath{\infty}) \to (0, \ensuremath{\infty})$
that is concave,
increasing,
differentiable on $[0, \ensuremath{\infty})$,
and twice continuously differentiable on $(0, \ensuremath{\infty})$.
\item
A \textit{growth bound\xspace} $h$ is a pre-growth bound\xspace for which
$
\int_1^\ensuremath{\infty} h(s) s^{-2} \, ds < \ensuremath{\infty}
$.
\item
A \textit{well-posedness growth bound\xspace} $h$ is a growth bound\xspace for which $h^2$ is also a growth bound\xspace.
\item
Let $h$ be a well-posedness growth bound\xspace and define
$H[h] \colon (0, \ensuremath{\infty}) \to (0, \ensuremath{\infty})$ by
\begin{align}\label{e:HDef}
H[h](r)
:= \int_r^\ensuremath{\infty} \frac{h(s)}{s^2} \, ds,
\end{align}
noting that the condition in $(ii)$ ensures $H[h]$ and $H[h^2]$
are well-defined.
We show in \cref{L:EProp} that there always exists a
continuous, convex function $\mu$ with $\mu(0) = 0$
for which
\begin{align*}
\displaystyle
E(r)
:= \pr{1 + r^{\frac{1}{2}} H[h^2] \pr{r^{\frac{1}{2}}}}^2 r
\le \mu(r).
\end{align*}
We call $h$ a global well-posedness growth bound\xspace if for some such $\mu$,
\begin{align}\label{e:omuOsgoodAtInfinity}
\int_1^\ensuremath{\infty}
\frac{dr}
{\mu(r)}
= \ensuremath{\infty}.
\end{align}
\end{enumerate} \end{definition}
\begin{definition}\label{D:SerfatiVelocity}
Let $h$ be a growth bound\xspace.
We denote by $S_h$ the Banach space of all divergence-free vector
fields, $u$, on $\ensuremath{\BB{R}}^2$ for which $u/h$ and $\omega(u)$ are bounded,
where $\omega(u) := \ensuremath{\partial}_1 u^2 - \ensuremath{\partial}_2 u^1$ is the vorticity.
We endow $S_h$ with the norm
\begin{align*}
\norm{u}_{S_h}
:= \norm{u/h}_{L^\ensuremath{\infty}} + \norm{\omega(u)}_{L^\ensuremath{\infty}}.
\end{align*} \end{definition}
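A simple family of examples is given by shear flows: if $f \colon \ensuremath{\BB{R}} \to \ensuremath{\BB{R}}$ is Lipschitz with $\abs{f(r)} \le h(\abs{r})$, then $u(x) := (f(x_2), 0)$ is divergence-free with $\omega(u) = -f'(x_2) \in L^\ensuremath{\infty}$ and $\abs{u(x)}/h(\abs{x}) \le h(\abs{x_2})/h(\abs{x}) \le 1$, so $u \in S_h$. Taking $f = h(\abs{\cdot})$, which is Lipschitz with constant $h'(0)$ because $h$ is concave and increasing, shows that $S_h$ contains fields that truly grow like $h$ at infinity.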
\begin{definition}[Radial cutoff function]\label{D:Radial}
Let $a$ be a radially symmetric function in $C_c^\ensuremath{\infty}(\ensuremath{\BB{R}}^2)$
taking values in $[0, 1]$, supported in $\overline{B_1(0)}$,
with $a \equiv 1$ on $\overline{B_{1/2}(0)}$.
(By $B_r(x)$ we mean the open ball of radius $r$ centered at $x$.)
We call any such function a \textit{radial cutoff function.}
For any $\ensuremath{\lambda} > 0$ define
\begin{align*}
a_\lambda (x)
:= a \pr{\frac{x}{\lambda}}.
\end{align*} \end{definition}
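One standard choice of such an $a$ (any function with the stated properties will do) is obtained by setting $\beta(t) := e^{-1/t}$ for $t > 0$, $\beta(t) := 0$ for $t \le 0$, and
\begin{align*}
	a(x)
	:= \frac{\beta(1 - \abs{x})}{\beta(1 - \abs{x}) + \beta(\abs{x} - 1/2)}.
\end{align*}
The denominator never vanishes, $a$ is radially symmetric with values in $[0, 1]$, $a \equiv 1$ on $\overline{B_{1/2}(0)}$, $a \equiv 0$ outside $B_1(0)$, and $a$ is smooth, being identically $1$ near the origin.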
\begin{definition}[Lagrangian solution]\label{D:ESol}
Fix $T > 0$ and let $h$ be a growth bound\xspace. Assume that
$u \in C(0, T; S_h)$,
let $\omega = \curl u := \ensuremath{\partial}_1 u^2 - \ensuremath{\partial}_2 u^1$,
and let $X$ be the unique flow map for $u$.
We say that $u$ is a solution to the Euler equations
in $S_h$
without forcing and with initial velocity $u^0 = u|_{t = 0}$ in $S_h$
if the following conditions hold:
\begin{enumerate}
\item
$
\omega = \omega^0 \circ X^{-1}
$
on $[0, T] \times \ensuremath{\BB{R}}^2$,
where $\omega^0 := \curl u^0$;
\item
Serfati's identity \cref{e:SerfatiId} holds for all $\ensuremath{\lambda} > 0$.
\end{enumerate} \end{definition}
The existence and uniqueness of the flow map $X$ in \cref{D:ESol} is assured by
$u \in C([0, T]; S_h)$ (see \cref{L:FlowWellPosed}). It also then follows
easily that
the vorticity equation, $\ensuremath{\partial}_t \omega + u \cdot \ensuremath{\nabla} \omega = 0$,
holds in the sense of distributions---so $u$ is also a weak
Eulerian solution. (See also the discussion of Eulerian
versus Lagrangian solutions in \cref{S:WeakFormulation}.)
Our main results are \cref{T:Existence} through \cref{T:aT}.
\begin{theorem}\label{T:Existence}
Let $h$ be any well-posedness growth bound\xspace.
For any $u^0 \in S_h$ there is a $T > 0$
such that there exists a
solution to the Euler equations in $S_h$ on $[0, T]$
as in \cref{D:ESol}. If $h$ is a global well-posedness growth bound\xspace
then $T$ can be chosen to be arbitrarily large. \end{theorem}
\begin{theorem}\label{T:ProtoUniqueness}
Let $h$ be any well-posedness growth bound\xspace and let $\zeta \ge h$ be any growth bound\xspace for which
$\zeta/h$ and $\zeta h$ are also growth bounds\xspace.
Let $u_1^0, u_2^0 \in S_h$ and let $\omega^0_1$, $\omega^0_2$
be the corresponding
initial vorticities.
Assume that there exist solutions,
$u_1$, $u_2$, to the Euler equations in $S_h$ on $[0, T]$
with initial velocities $u_1^0$, $u_2^0$, and let $\omega_1, \omega_2$
and $X_1, X_2$ be the corresponding vorticities and flow maps.
Let
\begin{align}\label{e:aT}
a(T)
&:= \smallnorm{(u_1^0 - u_2^0)/\zeta}_{L^\ensuremath{\infty}(\ensuremath{\BB{R}}^2)}
+ \norm{J/\zeta}_{L^\ensuremath{\infty}((0, T) \times \ensuremath{\BB{R}}^2)},
\end{align}
where
\begin{align}\label{e:J}
J(t, x)
&= \pr{(a_{h(x)} K) * (\omega_1^0 - \omega_2^0) \circ X_1^{-1}(t)}(x)
- \pr{(a_{h(x)} K) * (\omega_1^0 - \omega_2^0)}(X_1(t, x)).
\end{align}
Define
\begin{align*}
\eta(t)
&:= \norm{\frac{X_1(t, x) - X_2(t, x)}
{\zeta(x)}}_{L^\ensuremath{\infty}_x(\ensuremath{\BB{R}}^2)}, \\
L(t)
&:=
\norm{\frac{u_1(t, X_1(t, x)) - u_2(t, X_2(t, x))}{\zeta(x)}}
_{L^\ensuremath{\infty}_x(\ensuremath{\BB{R}}^2)}, \\
M(t)
&:= \int_0^t
L(s) \, ds, \\
Q(t)
&:= \norm{(u_1(t) - u_2(t))/\zeta}_{L^\ensuremath{\infty}(\ensuremath{\BB{R}}^2)}.
\end{align*}
Then $\eta(t) \le M(t)$ and
\begin{align}\label{e:LProtoBoundForUniqueness}
\int_{ta(T)}^{M(t)} \frac{dr}{\ol{\mu}(C(T) r) + r}
\le C(T) t
\end{align}
for all $t \in [0, T]$, where
\begin{align}\label{e:omu}
\ol{\mu}(r) :=
\begin{cases}
-r \log r &\text{if } r \le e^{-1}, \\
e^{-1} &\text{if } r > e^{-1}.
\end{cases}
\end{align}
We have
$M(T) \to 0$ and $\norm{Q}_{L^\ensuremath{\infty}(0, T)} \to 0$ as $a(T) \to 0$.
Explicitly,
\begin{align*}
M(t)
\le (t a(T))^{e^{-C(T)t}}
\end{align*}
holds until $M(t)$ increases to $C(T) e^{-1}$.
If $ta(T) \ge C(T)e^{-1}$ then
\begin{align*}
M(t)
\le C(T)t a(T).
\end{align*}
For all $t \ge 0$, we have
\begin{align*}
Q(t)
\le \brac{a(T) + C (T) \ol{\mu}(C(T) M(t))} e^{C(T) t}.
\end{align*} \end{theorem}
Uniqueness is an immediate corollary of \cref{T:ProtoUniqueness}: \begin{cor}\label{C:Uniqueness}
Let $h$ be any well-posedness growth bound\xspace.
Then solutions to the 2D Euler equations on $[0, T]$ in $S_h$
are unique. \end{cor} \begin{proof}
Apply \cref{T:ProtoUniqueness} with
$u_1^0 = u_2^0 = u(0)$ so that $a(T) = 0$ and set $\zeta = h$,
noting that $\zeta/h = 1$ and $\zeta h = h^2$ are both growth bounds\xspace. \end{proof}
\cref{C:MainResult} applies \cref{T:Existence,C:Uniqueness} to two explicit well-posedness growth bounds\xspace. The proof is simply a matter of verifying that the two given well-posedness growth bounds\xspace actually satisfy the pertinent conditions in \cref{D:GrowthBound}. (Since the proof is very ``calculational,'' we defer it to \cref{S:Corollary}.) We note that the existence of solutions for $h_2$ was shown in \cite{Cozzi2015} using different techniques.
\begin{cor}\label{C:MainResult}
Let $h_1(r) := (1 + r)^\ensuremath{\alpha}$ for some $\ensuremath{\alpha} \in [0, 1/2)$,
$h_2(r) := \log^{\frac{1}{4}}(e + r)$.
If $u^0 \in S_{h_j}$, $j = 1$ or $2$, then there exists a
unique solution to the Euler equations in $S_{h_j}$ on $[0, T]$
as in \cref{D:ESol} for some $T > 0$.
If $u^0 \in S_{h_2}$ then $T$ can be made arbitrarily large. \end{cor}
Loosely speaking, \cref{C:MainResult} says that solutions in $S_h$ are unique
and exist for finite time as long as $h^2$ is sublinear, while
global-in-time solutions exist for velocities growing
very slowly at infinity. These slowly growing velocities are somewhat
analogous to the ``slightly unbounded'' vorticities of Yudovich
\cite{Y1995}, which extends the uniqueness result for bounded
vorticities in \cite{Y1963}.
If $u_1^0 \ne u_2^0$ then $a(T)$ does not vanish. If $u_1^0, u_2^0 \in S_h$ for some well-posedness growth bound\xspace $h$, and $u_1^0 - u_2^0$ is small in $S_h$, we might expect $\norm{u_1(t) - u_2(t)}_{S_h}$ to remain small at least for some time. The $S_h$ norm, however, includes the $L^\ensuremath{\infty}$ norm of $\omega_1(t) - \omega_2(t)$, with each vorticity being transported by different flow maps. Hence, we should expect $\norm{\omega_1(t) - \omega_2(t)}_{L^\ensuremath{\infty}}$ to be of the same order as $\norm{\omega_j(t)}_{L^\ensuremath{\infty}}$, $j = 1, 2$, immediately after time zero. Thus, it is too much to ask for continuous dependence on initial data in the $S_h$ norm. In this regard, the situation is the same as for the classical bounded-vorticity solutions of Yudovich \cite{Y1963}, and has nothing to do with lack of decay at infinity. The best we can hope to obtain is a bound on $(u_1(t) - u_2(t))/\zeta$ in $L^\ensuremath{\infty}$---and so, by interpolation, in $C^\ensuremath{\alpha}$ for all $\ensuremath{\alpha} < 1$.
To obtain continuous dependence on initial data or control how changes at a distance from the origin affect the solution near the origin (\textit{effect at a distance}, for short), we can employ the bound on $Q$ in \cref{T:ProtoUniqueness} to obtain a bound on how far apart the two solutions become, weighted by $\zeta$. For continuous dependence on initial data, $\zeta = h$ is most immediately pertinent; for controlling effect at a distance, $\zeta \geq h$ is better.
The simplest form of continuous dependence on initial data, which follows from \cref{T:aTSimple} applied to \cref{T:ProtoUniqueness}, shows that if the initial velocities are close in $S_\zeta$ then they remain close (in a weighted $L^\ensuremath{\infty}$ space) for some time.
\begin{theorem}\label{T:aTSimple}
Make the assumptions in \cref{T:ProtoUniqueness}.
Then
\begin{align*}
a(T)
\le C \norm{u_1^0 - u_2^0}_{S_\zeta}.
\end{align*} \end{theorem}
While \cref{T:aTSimple} gives a theoretically meaningful measure of continuous dependence on initial data, the assumption that the initial velocities are close in $S_\zeta$ is overstrong. For instance, it would not apply to two vortex patches that do not quite coincide. One approach, motivated in part by this vortex patch example, is to make some assumption on the closeness of the initial vorticities locally uniformly in an $L^p$ norm for $p < \ensuremath{\infty}$, as was done in \cite{TaniuchiEtAl2010,AKLN2015}. This assumption is, however, unnecessary (and in our setting somewhat artificial) as shown for the special case of bounded velocity ($h \equiv 1$) in \cite{CK2016}.
In \cite{CK2016}, the focus was not on continuous dependence on initial data, per se, but rather on understanding the effect at a distance. Hence, we used a function, $\zeta(r) = (1 + r)^\ensuremath{\alpha}$ for any $\ensuremath{\alpha} \in (0, 1)$, in place of $h$ and obtained a bound on $(u_1(t) - u_2(t))/\zeta$ in terms of $(u_1^0 - u_2^0)/\zeta$, each in the $L^\ensuremath{\infty}$ norm. In \cref{T:aT}, we obtain a similar bound using very different techniques. Our need to assume bounded velocity ($h \equiv 1$) arises from our inability to obtain usable transport estimates for non-Lipschitz vector fields growing at infinity. It is not clear whether this is only a technical issue or represents some fundamental new phenomenon causing unbounded velocities to have less stability, in the sense of effect at a distance, than bounded velocities. (See \cref{R:TechnicalIssue}.)
\begin{theorem}\label{T:aT}
Let $u_1^0, u_2^0 \in S_1$
and let $\zeta$ be a growth bound\xspace.
Let $u_1$, $u_2$ be the corresponding solutions in $S_1$
on $[0, T]$
with initial velocities $u_1^0$, $u_2^0$.
Fix $\delta$, $\ensuremath{\alpha}$ with $0 < \delta < \ensuremath{\alpha} < 1$
and choose any $T^* > 0$ such that
\begin{align}\label{e:TStar}
T^*
< \min\bigset{T,
\frac{1 + \delta}{C C_0}
},
\end{align}
where
\begin{align}\label{e:C0C1}
C_0 = \smallnorm{u_1}_{L^\ensuremath{\infty}(0, T; S_1)}.
\end{align}
Then
\begin{align}\label{e:aTstarBound}
a(T^*)
\le C_1 \Phi_\ensuremath{\alpha} \pr{
T,
C
\norm{\frac{u_1^0 - u_2^0}{\zeta}}_{L^\ensuremath{\infty}}^\delta
},
\end{align}
where,
\begin{align}\label{e:Phialpha}
\Phi_\ensuremath{\alpha}(t, x)
:= x + x^{\frac{e^{-C_0t}}{\ensuremath{\alpha} + e^{-C_0t}}}.
\end{align} \end{theorem}
\Ignore{ \begin{theorem}\label{T:aT}
Let $u_1^0, u_2^0 \in S_1$
and let $\zeta$ be a growth bound\xspace.
Let $u_1$, $u_2$ be the corresponding solutions in $S_1$
on $[0, T]$
with initial velocities $u_1^0$, $u_2^0$.
Fix $\delta$, $\ensuremath{\alpha}$ with $0 < \delta < \ensuremath{\alpha} < 1$
and choose any $T^* > 0$ such that
\begin{align}\label{e:TStar}
T^*
< \min\bigset{T, \frac{1}{C_0}
\log \pr{\frac{2 + \ensuremath{\alpha}}{2+\delta}}}.
\end{align}
Then
\begin{align}\label{e:aTstarBound}
a(T^*)
\le C_1 \Phi_\ensuremath{\alpha} \pr{
T,
C_{\delta, \ensuremath{\alpha}}
\norm{\frac{u_1^0 - u_2^0}{\zeta}}_{L^\ensuremath{\infty}}^\delta
},
\end{align}
where,
\begin{align}\label{e:C0C1}
C_0 = \smallnorm{u_1}_{L^\ensuremath{\infty}(0, T; S_1)}, \quad
C_{\delta, \ensuremath{\alpha}}
= \frac{C(T, \zeta)}{\delta (1 - \ensuremath{\alpha})}
\brac{\smallnorm{u_1^0}_{S_1} + \smallnorm{u_2^0}_{S_1} + 1},
\end{align}
and
\begin{align}\label{e:Phialpha}
\Phi_\ensuremath{\alpha}(t, x)
:= x + x^{\frac{e^{-C_0t}}{\ensuremath{\alpha} + e^{-C_0t}}}.
\end{align} \end{theorem} }
\begin{remark}\label{R:aT}
To obtain a bound on $a(T)$, we iterate
the bound in \cref{e:aTstarBound}
$N$ times, where $T = N T^*$ (decreasing $T^*$ if necessary
so as to obtain
the minimum possible positive integer $N$),
applying the bound on $Q(t)$ from
\cref{T:ProtoUniqueness} after each step.
Since $T^*$ depends only upon $C_0$, which does not change,
we can always iterate this way.
In principle, the resulting bound can be made explicit, at least
for sufficiently small $a(T)$. \end{remark}
The bound on $u_1 - u_2$ given by the combination of \cref{T:ProtoUniqueness,T:aT} is not optimal, primarily because we could use an $L^1$-in-time bound on $J(t, x)$ in place of the $L^\ensuremath{\infty}$-in-time bound. (This would also be reflected in the bound on $a(T^*)$ in \cref{e:aTstarBound}.) This would not improve the bounds sufficiently, however, to justify the considerable complications to the proofs.
The issue of well-posedness of solutions to the 2D Euler equations with bounded vorticity but velocities growing at infinity was taken up recently by Elgindi and Jeong in \cite{ElgindiJeong2016}. They prove existence and uniqueness of such solutions for velocity fields growing \textit{linearly} at infinity under the assumption that the vorticity has $m$-fold symmetry for $m \ge 3$. We study here solutions with no preferred symmetry, and our approach is very different; nonetheless, aspects of our uniqueness argument were influenced by Elgindi's and Jeong's work. In particular, the manner in which they first obtain elementary but useful bounds on the flow map inspired \cref{L:FlowBounds}, and the bound in \cref{P:Morrey} is the analog of Lemma 2.8 of \cite{ElgindiJeong2016} (obtained differently under different assumptions).
Finally, we remark that it would be natural to combine the approach in \cite{ElgindiJeong2016} with our approach here to address the case of two-fold symmetric vorticities ($m = 2$). The goal would be to obtain Elgindi's and Jeong's result, but for velocities growing infinitesimally less than linearly at infinity. A similar argument might also work for solutions to the 2D Euler equations in a half plane having sublinear growth at infinity.
This paper is organized as follows: We first make a few comments on our formulation of a weak solution in \cref{S:WeakFormulation}. \crefrange{S:GBs}{S:FlowMap} contain preliminary material establishing useful properties of growth bounds\xspace, estimates on the Biot-Savart law and locally log-Lipschitz velocity fields, and bounds on flow maps for velocity fields growing at infinity. We establish the existence of solutions, \cref{T:Existence}, in \cref{S:Existence}. We prove \cref{T:ProtoUniqueness} in \cref{S:Uniqueness}, thereby establishing the uniqueness of solutions, \cref{C:Uniqueness}. In \cref{S:ContDep}, we prove \cref{T:aTSimple,T:aT}, establishing continuous dependence on initial data and controlling the effect of changes at a distance.
In \cref{S:LP}, we employ Littlewood-Paley theory to establish estimates in negative \Holder spaces for $u \in L^\ensuremath{\infty}(0, T; S_1)$. These estimates are used in the proof of \cref{T:aT} in \cref{S:ContDep}.
Finally, \cref{S:Corollary} contains the proof of \cref{C:MainResult}.
\section{Some comments on our weak formulation}\label{S:WeakFormulation} \noindent In our formulation of a weak solution in \cref{D:ESol} we made the assumption that Serfati's identity holds. Yet formally, Serfati's identity and the Euler equations are equivalent, and with only mild assumptions on regularity and behavior at infinity, they are rigorously equivalent. We do not pursue this equivalence in any detail here, as it would be a lengthy distraction; we merely outline the formal argument.
That a solution to the Euler equations must satisfy Serfati's identity for smooth, compactly supported solutions (and hence also formally) is shown in Proposition 4.1 of \cite{AKLN2015}. \Ignore{ An easy way to show the reverse implication is to differentiate both sides of \cref{e:SerfatiId}, an identity that holds for all $\ensuremath{\lambda} > 0$, with respect to $\ensuremath{\lambda}$.
This gives \begin{align*}
\varphi_\ensuremath{\lambda}^j * (\omega(t) - \omega^0)
+ \int_0^t (\ensuremath{\nabla} \ensuremath{\nabla}^\perp \varphi_\ensuremath{\lambda}^j) \mathop{* \cdot} (u \otimes u)(s)
\, ds = 0, \end{align*} where \begin{align*}
\varphi_\ensuremath{\lambda}^j(x)
&= \ensuremath{\partial}_\ensuremath{\lambda} a_\ensuremath{\lambda}(x) K^j(x)
= - \brac{\ensuremath{\nabla} a \pr{\frac{x}{\ensuremath{\lambda}}}
\cdot \pr{\frac{x}{\ensuremath{\lambda}^2}}} K^j(x)
= -\brac{\ensuremath{\nabla} a \pr{\frac{x}{\ensuremath{\lambda}}}
\cdot \pr{\frac{x}{\ensuremath{\lambda}^2}}} \pr{\frac{1}{\ensuremath{\lambda}}}
K^j \pr{\frac{x}{\ensuremath{\lambda}}} \\
&= - \frac{1}{\ensuremath{\lambda}^2} \brac{\ensuremath{\nabla} a \pr{\frac{x}{\ensuremath{\lambda}}}
\cdot \pr{\frac{x}{\ensuremath{\lambda}}}} K^j \pr{\frac{x}{\ensuremath{\lambda}}}. \end{align*} We see that \begin{align*}
\varphi_1^j(x)
= \varphi^j(x) := -(\ensuremath{\nabla} a(x) \cdot x) K^j(x), \quad
\varphi_\ensuremath{\lambda}^j(x)
= \ensuremath{\lambda}^{-2} \varphi^j(\ensuremath{\lambda}^{-1} x). \end{align*} But, $\ensuremath{\nabla} a(x) \cdot x$ is radially symmetric and $K^j(x)$ integrates to zero over circles. Hence, $\varphi_\ensuremath{\lambda}^j$ is an approximation to the identity, but it has total mass zero. Hence, it is useless. \begin{align*}
\int_{\ensuremath{\BB{R}}^2} \varphi_j
= \int_{\ensuremath{\BB{R}}^2} a(x) \dv (K^j(x) x) \end{align*} } The reverse implication is more difficult. One approach is to start with the velocity identity, \begin{align}\label{e:CKId}
\begin{split}
\varphi_\ensuremath{\lambda} * (u^j(t) - (u^0)^j)
= - \int_0^t \pr{\ensuremath{\nabla} \ensuremath{\nabla}^\perp \brac{(1 - a_\ensuremath{\lambda}) K^j}}
\mathop{* \cdot} (u \otimes u)(s) \, ds,
\end{split} \end{align} where $\varphi_\ensuremath{\lambda} := \ensuremath{\nabla} a_\ensuremath{\lambda} \cdot K^\perp \in C_c^\ensuremath{\infty}(\ensuremath{\BB{R}}^2)$. This identity can be shown to be equivalent to the Serfati identity by exploiting Lemma 4.4 of \cite{KBounded2015}. It turns out that $\varphi_\ensuremath{\lambda} *$ is a mollifier: taking $\ensuremath{\lambda} \to 0$ yields (after a long calculation) the velocity equation for the Euler equations in the limit.
Assuming Serfati's identity holds, then, introduces some degree of redundancy in \cref{D:ESol}, but this redundancy cannot be entirely eliminated if we wish to have uniqueness of solutions. This is demonstrated in \cite{KBounded2015}, where it is shown that for bounded vorticity, bounded velocity solutions, Serfati's identity must hold---up to the addition of a time-varying, constant-in-space vector field. This vector field, then, serves as the uniqueness criterion; its vanishing is equivalent to the sublinear growth of the pressure (used as the uniqueness criterion in \cite{TaniuchiEtAl2010}).
Bounded vorticity, bounded velocity solutions are a special case of the solutions we consider here, but the technology developed in \cite{KBounded2015} does not easily extend to velocities growing at infinity. Hence, we are unable to dispense with our assumption that the Serfati identity holds, as we need it as our uniqueness criterion.
Another closely related issue is that we are using Lagrangian solutions to the Euler equations, as we need to use the flow map in our uniqueness argument (with Serfati's identity). We note that it is not sufficient to simply know that the vorticity equation, $\ensuremath{\partial}_t \omega + u \cdot \ensuremath{\nabla} \omega = 0$, is satisfied. This tells us that the vorticity is transported, in a weak sense, by the unique flow map $X$, but we need that this weak transport equation has $\omega^0(X^{-1}(t, x))$ as its unique solution. Only then can we conclude that the curl of $u$ truly is $\omega^0(X^{-1}(t, x))$. (For bounded velocity, bounded vorticity solutions, the transport estimate from \cite{BahouriCheminDanchin2011} in \cref{P:f0XBound} would be enough to obtain this uniqueness.)
The usual way to establish well-posedness of Eulerian solutions to the 2D Euler equations is to construct Lagrangian solutions (which are automatically Eulerian) and then prove uniqueness using the Eulerian formulation only. Such an approach works for bounded vorticity, bounded velocity solutions, as uniqueness using the Eulerian formulation was shown in \cite{TaniuchiEtAl2010} (see also \cite{CK2016}). Whether this can be extended to the solutions we study here is a subject for future work.
\section{Properties of growth bounds\xspace}\label{S:GBs}
\noindent We establish in this section a number of properties of growth bounds\xspace.
\begin{definition}\label{D:Subadd}
We say that $f \colon [0, \ensuremath{\infty}) \to [0, \ensuremath{\infty})$ is \textit{subadditive\xspace}
if
\begin{align}\label{e:subadd}
f(r + s) \le
f(r) + f(s)
\text{ for all } r, s \ge 0.
\end{align}
\end{definition}
\begin{lemma}\label{L:hGood}
Let $h$ be a pre-growth bound\xspace. Then $h$ is subadditive, as is $h^2$ if $h$ is a well-posedness growth bound\xspace.
Also, $h(r) \le c r + d$ with $c = h'(0)$, $d = h(0)$,
and the analogous statement holds for $h^2$
when $h$ is a well-posedness growth bound\xspace. \end{lemma} \begin{proof}
Because $h$ is concave with $h(0) \ge 0$, we have
\begin{align*}
a h(x)
\le a h(x) + (1 - a) h(0)
\le h(ax + (1 - a) 0) = h(ax)
\end{align*}
for all $x > 0$ and $a\in[0,1]$. We apply this twice with $x = r + s > 0$
and $a = r/(r + s)$, giving
\begin{align*}
h(r + s)
&= a h(r + s) + (1 - a) h(r + s)
\le h \pr{a(r + s)} + h \pr{(1 - a) (r + s)}
= h(r) + h(s).
\end{align*}
Because $h'(0) < \ensuremath{\infty}$ and $h$ is concave
we also have $h(r) \le c r + d$.
The facts regarding $h^2$ follow in the same way. \end{proof}
\cref{L:EProp} gives the existence of the function $\mu$ promised in \cref{D:GrowthBound}. A $\mu$ that yields a tighter bound $E \le \mu$ will result in a longer existence-time estimate for solutions, as we can see from the application of \cref{L:NotOsgood} in the proof of \cref{T:Existence}. The estimate we give in \cref{L:EProp} is very loose; in specific cases, this bound can be much improved.
\begin{lemma}\label{L:EProp}
Let $h$ be a well-posedness growth bound\xspace. There exists a
continuous, convex function $\mu$ with $\mu(0) = 0$
for which $E \le \mu$, where $E$ is as in \cref{D:GrowthBound}. \end{lemma} \begin{proof}
Since $h(0) > 0$, $H(r) := H[h^2](r) \to \ensuremath{\infty}$
as $r \to 0^+$. Hence, L'Hospital's rule gives
\begin{align*}
\lim_{r \to 0^+} r H(r)
&= \lim_{r \to 0^+} \frac{H(r)}{r^{-1}}
= -\lim_{r \to 0^+} \frac{H'(r)}{r^{-2}}
= \lim_{r \to 0^+} \frac{h^2(r) r^{-2}}{r^{-2}}
= h^2(0).
\end{align*}
It
follows that $r^{\frac{1}{2}} H(r^{\frac{1}{2}}) \le C$ for $r \in (0, 1)$.
Then, since $H$ decreases, we see that
$(1 + r^{\frac{1}{2}} H(r^{\frac{1}{2}}))^2
\le (1 + C r^{\frac{1}{2}})^2
\le 2(1 + Cr) \le C(1 + r)$.
Hence, we can use $\mu(r) = Cr (1 + r)$. \end{proof}
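For example, when $h \equiv 1$ (the bounded vorticity, bounded velocity\xspace setting of \cite{Serfati1995A}), we have $H[h^2](r) = \int_r^\ensuremath{\infty} s^{-2} \, ds = 1/r$, so
\begin{align*}
	E(r)
	= \pr{1 + r^{\frac{1}{2}} \cdot r^{-\frac{1}{2}}}^2 r
	= 4 r,
\end{align*}
and $\mu(r) = 4 r$ satisfies \cref{e:omuOsgoodAtInfinity}. Hence, $h \equiv 1$ is a global well-posedness growth bound\xspace.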
\Ignore { \begin{lemma}\label{L:EquivOsgood}
The condition on $h$ in \cref{e:omuOsgoodAtInfinity} is equivalent to
\begin{align*}
I
:= \int_1^\ensuremath{\infty} \frac{dr}
{\pr{1 + r^{\frac{1}{2}} H \pr{r^{\frac{1}{2}}}}^2 r}
= \ensuremath{\infty}.
\end{align*} \end{lemma} \begin{proof}
Making the change of variables, $r = s^2$, so that
$dr/r = 2 s \, ds/r = 2 s \, ds/s^2 = 2 \, ds/s$, gives
\begin{align*}
I
:= 2 \int_1^\ensuremath{\infty} \frac{ds}
{\pr{1 + s H(s)}^2 s}.
\end{align*} \end{proof} }
\begin{remark}\label{R:hSubadditive}
Abusing notation, we will often write $h(x)$ for $h(\abs{x})$,
treating $h$ as a map from $\ensuremath{\BB{R}}^2$ to $\ensuremath{\BB{R}}$. Treated this way,
$h$ remains subadditive in the sense that
\begin{align*}
h(y)
= h(\abs{y})
\le h(\abs{x - y} + \abs{x})
\le h(\abs{x - y})+ h(\abs{x})
= h(x - y) + h(x).
\end{align*}
Here, we used
the triangle inequality and that $h \colon [0, \ensuremath{\infty}) \to (0, \ensuremath{\infty})$
is increasing and subadditive.
Similarly,
$
\abs{h(y) - h(x)} \le h(x - y).
$ \end{remark}
\begin{lemma}\label{L:preGB}
Let $h$ be a pre-growth bound\xspace as in \cref{D:GrowthBound}.
Then for all $a \ge 1$ and $r \ge 0$,
\begin{align}\label{e:flar}
&h(a r)
\le 2 a h(r)
\end{align}
and
\begin{align}\label{e:flarflar}
\begin{split}
&h(a h(r))
\le C(h) a h(r), \\
&h(h(r))/h(r) \le C(h)
\end{split}
\end{align}
where $C(h) = 2 \pr{h'(0) + h(h(0))/h(0)}$. \end{lemma} \begin{proof}
To prove \cref{e:flar}, we will show first that
for any positive integer $n$ and any $r \ge 0$,
\begin{align}\label{e:fn}
h(n r)
\le n h(r).
\end{align}
For $n = 1$, \cref{e:fn} trivially holds,
so assume that \cref{e:fn} holds for $n - 1 \ge 1$.
Then because $h$ is subadditive (\cref{L:hGood}),
\begin{align*}
h(n r)
&= h((n - 1) r + r)
\le h((n - 1)r) + h(r)
\le (n - 1) h(r) + h(r)
= n h(r).
\end{align*}
Thus, \cref{e:fn} follows for all positive integers
$n$ by induction.
If $a = n + \ensuremath{\alpha}$ for
some $\ensuremath{\alpha} \in [0, 1)$ then
\begin{align*}
h(a r)
&= h(nr + \ensuremath{\alpha} r)
\le h(nr) + h(\ensuremath{\alpha} r)
\le n h(r) + h(r)
= (n + 1) h(r) \\
&= \frac{n + 1}{n + \ensuremath{\alpha}} (n + \ensuremath{\alpha}) h(r)
= \frac{n + 1}{n + \ensuremath{\alpha}} a h(r)
\le \sup_{n \ge 1} \brac{\frac{n + 1}{n + \ensuremath{\alpha}}} a h(r)
\leq 2 a h(r).
\end{align*}
(Note that the supremum is over $n \ge 1$ since we assumed
that $a \ge 1$.)
For \cref{e:flarflar}$_1$, let $c = h'(0)$, $d = h(0)$ as in
\cref{L:hGood}, so that $h(r) \le cr + d$. Then,
\begin{align*}
h(a h(r))
&\le h(a c r + a d)
\le h(a c r) + h(a d)
\le 2a (c h(r) + h(d))
\le 2\pr{c + \frac{h(d)}{h(0)}} a h(r),
\end{align*}
which is \cref{e:flarflar}$_1$. From this,
\cref{e:flarflar}$_2$ follows immediately. \end{proof}
In \cref{S:Uniqueness} we will employ the functions, $\Gamma_t, F_t \colon [0, \ensuremath{\infty}) \to (0, \ensuremath{\infty})$, defined for any $t \in [0, T]$ in terms of an arbitrary growth bound\xspace $h$ by
\begin{align}\label{e:GammaDef}
\int_{a}^{\Gamma_t(a)} \frac{dr}{h(r)}
= C t,
\end{align}
\begin{align}\label{e:FDef}
F_t(a)
= F_t[h](a)
:= h(\Gamma_t(a)).
\end{align}
We know that
$\Gamma_t$ and so $F_t$ are well-defined, because
\begin{align}\label{e:hOsgoodAtInfinity}
\int_1^\ensuremath{\infty} \frac{dr}{h(r)}
\ge \int_1^\ensuremath{\infty} \frac{1}{c r + d} \, dr
= \ensuremath{\infty},
\end{align}
recalling that $h(r) \le c r + d$ by \cref{L:hGood}.
\begin{remark}
If $h(0)$ were zero, then $\Gamma_t$ would be the bound at time $t$ on the
spatial modulus of continuity\xspace of the flow map for a velocity field having $h$ as its modulus of continuity\xspace.
Much is known about properties of $\Gamma_t$
(they are explored at length in \cite{KFlow}), and most of these properties
are unaffected by $h(0)$ being positive. One key difference, however,
is that $\Gamma_t(0) > 0$ and $\Gamma_t'(0) < \ensuremath{\infty}$.
As we will see in \cref{L:FProperties},
this implies that $\Gamma_t$ is subadditive\xspace.
This is in contrast to what happens when $h(0) = 0$, where
$\Gamma_t(0) = 0$, $\Gamma_t'(0) = \ensuremath{\infty}$, and $\Gamma$
satisfies the Osgood condition. \end{remark}
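To illustrate, for $h \equiv 1$ we have simply $\Gamma_t(a) = a + C t$ and $F_t \equiv 1$. For $h(r) = (1 + r)^{\frac{1}{4}}$ (the growth bound\xspace $h_1$ of \cref{C:MainResult} with $\ensuremath{\alpha} = 1/4$), solving \cref{e:GammaDef} explicitly gives
\begin{align*}
	\Gamma_t(a)
	= \pr{(1 + a)^{\frac{3}{4}} + \tfrac{3}{4} C t}^{\frac{4}{3}} - 1,
	\qquad
	F_t(a)
	= \pr{(1 + a)^{\frac{3}{4}} + \tfrac{3}{4} C t}^{\frac{1}{3}}
	\le \pr{1 + \tfrac{3}{4} C t}^{\frac{1}{3}} h(a),
\end{align*}
in accord with the bounds $h \le F_t \le C(t) h$ of \cref{L:FProperties} below.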
\cref{L:FProperties} shows that $F_t$ is a growth bound\xspace that is equivalent to $h$ in that it is bounded above and below by constant multiples of $h$.
\begin{lemma}\label{L:FProperties}
Assume that $h$ is a (well-posedness, global well-posedness) growth bound\xspace and define $F_t$ as in \cref{e:FDef}.
For all $t \in [0, T]$, $F_t$ is a (well-posedness, global well-posedness) growth bound\xspace
as in \cref{D:GrowthBound}. Moreover,
$F_t(r)$ is increasing in $t$ and $r$ with
$h \le F_t \le C(t) h$, $C(t)$ increasing with time. \end{lemma} \begin{proof}
First observe that
$
\Gamma_t'(r)
= h(\Gamma_t(r))/h(r)
$
follows from differentiating both sides of \cref{e:GammaDef}.
Thus, $\Gamma_t$ is increasing and continuously differentiable
on $(0, \ensuremath{\infty})$. Since $\Gamma_t'(0) = h(\Gamma_t(0))/h(0) \ge 1$
is finite,
$\Gamma_t$ is, in fact, differentiable on $[0, \ensuremath{\infty})$.
Also,
$F_t'(0) = h'(\Gamma_t(0)) \Gamma_t'(0)$ is finite and hence
so is $(F_t^2)'(0)$, meaning that $F_t$ and $F_t^2$ are differentiable
on all of $[0, \ensuremath{\infty})$.
We now show that $F_t$ is increasing, concave,
and twice differentiable on $(0, \ensuremath{\infty})$, and that the
same holds true for $F_t^2$ if $h$ is a well-posedness growth bound\xspace. We do this explicitly for
$F_t^2$,
the proof for $F_t$ being slightly simpler.
Direct calculation gives
\begin{align*}
(F_t^2)'(r)
&= (h^2)'(\Gamma_t(r)) \Gamma_t'(r), \quad
(F_t^2)''(r)
= (h^2)''(\Gamma_t(r)) (\Gamma_t'(r))^2
+ (h^2)'(\Gamma_t(r)) \Gamma_t''(r).
\end{align*}
But $(h^2)' \ge 0$ and $(h^2)'' \le 0$ because $h^2$ is
increasing and concave.
Also, $h$ concave implies that $\Gamma_t$ is concave: this is classical
(see Lemma 8.3 of \cite{KFlow} for a proof).
Hence, $\Gamma_t'' \le 0$, and we conclude that $(F_t^2)'' \le 0$,
meaning that $F_t^2$ is concave.
We now show that $h \le F_t \le C(t) h$.
We have $F_t(r) = h(\Gamma_t(r)) \ge h(r)$
since $\Gamma_t(r) \ge r$ and $h$ is increasing.
Because $F_t$ is concave it is sublinear,
so $F_t(r) \le c'r + d'$ for some $c', d'$ increasing in time.
Hence,
\begin{align*}
F_t(r)
&= h(\Gamma_t(r))
\le h(c' r + d')
\le 2 c' h(r) + h(d')
\le (2 c' + h(d')/h(0)) h(r).
\end{align*}
Here, we used \cref{L:preGB} (we increase $c'$ so that $c' \ge 1$
if necessary) and that $h$ increasing
gives $h(d') \le (h(d')/h(0)) h(r)$. Hence,
$h \le F_t \le C(t) h$, with $C(t)$ increasing with time.
Finally, if $h$ is a global well-posedness growth bound\xspace then $C(t) \mu$ serves as
a bound on the function $E$ of \cref{D:GrowthBound} for $F_t$. \end{proof}
\Ignore{ \begin{lemma}\label{L:SubaddLowerBound}
Let $h$ be a growth bound\xspace. We have,
\begin{align*}
R[h]
:= \sup_{x, y \in \ensuremath{\BB{R}}^2} \frac{h(x) + h(y)}{h(x + y)}
= 2.
\end{align*} \end{lemma} \begin{proof}
We can re-express $R[h]$ as
\begin{align*}
R[h]
= \sup_{r > 0} \sup_{s \in [0, r]} f_r(s), \qquad
f_r(s)
:= \frac{h(r - s) + h(s)}{h(r)}.
\end{align*}
Now, there is only one internal extremum of $f_r$, which occurs when
\begin{align*}
0
&= f_r's(s)
= \frac{-h'(r - s) + h'(s)}{h(r)}
\iff
h'(r - s) = h'(s)
\iff s = \frac{r}{2},
\end{align*}
the last implication holding since $h'$ is monotone, since $h$ is
concave. \ToDo{If $h'$ is not strictly concave then it could be
that $h'$ is constant in a neighborhood of $r = s/2$.
One solution is to require strict concavity in our growth bounds\xspace,
which would do no harm. Return to this.}
Hence,
\begin{align*}
R[h]
= \sup_{r > 0} f_r\pr{\frac{r}{2}}
= 2 \sup_{r > 0} \frac{h(r)}{h(2r)}
= 2.
\end{align*}
\end{proof} }
\begin{comment} Mostly, we will need only upper bounds on pre-growth bounds\xspace, as given in \cref{L:preGB} and through it, \cref{L:FProperties}. We will, however, find use for the one lower bound in \cref{L:preGBLowerBound}.
\begin{lemma}\label{L:preGBLowerBound}
Let $h$ be a pre-growth bound\xspace. There exists a constant $C(h)$ such that
for all $x \in \ensuremath{\BB{R}}^2$,
\begin{align*}
\frac{h(x)}{h(x - h(x))}
\le C(h).
\end{align*} \end{lemma} \begin{proof}
Because $h$ is subadditive (and see \cref{R:hSubadditive})
and using \cref{L:FProperties},
\begin{align*}
h(x)
\le h(x - h(x)) + h(h(x))
\le h(x - h(x)) + C(h) h(x).
\end{align*}
Dividing both sides by $h(x - h(x))$ gives the result. \end{proof} \end{comment}
\begin{lemma}\label{L:fhzetaacts}
Assume that $h$ is a growth bound\xspace and let $g := 1/h$. Then
$g$ is a decreasing convex function; in particular,
$\abs{g'}$ is decreasing. Moreover
\begin{align*}
\abs{g'} \le c_0 g, \quad
c_0 := h'(0)/h(0).
\end{align*}
\end{lemma} \begin{proof}
We have
\begin{align*}
g'(r)
= - \frac{h'(r)}{h(r)^2}
< 0
\end{align*}
and
\begin{align*}
g''(r)
= -\pr{\frac{h'(r)}{h^2(r)}}'
= \frac{ 2 h(r) (h'(r))^2 - h^2(r) h''(r)}
{h^4(r)}
\ge 0,
\end{align*}
since $h > 0$ and $h'' \le 0$.
Thus, $g$ is a decreasing convex function. Then, because
$g'$ is negative but increasing, $\abs{g'}$ is decreasing.
Finally,
\begin{align*}
\frac{\abs{g'(r)}}{g(r)}
= \frac{h'(r)/h^2(r)}{1/h(r)}
= \frac{h'(r)}{h(r)}
= (\log h(r))'
\end{align*}
is decreasing, since $\log$ is concave and $h$ is concave so
$\log h$ is concave. Therefore,
\begin{align*}
\abs{g'(r)} \le (\log h)'(0) g(r).
\end{align*} \end{proof}
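For example, with $h(r) = (1 + r)^{\frac{1}{4}}$ we have $g(r) = (1 + r)^{-\frac{1}{4}}$ and $c_0 = h'(0)/h(0) = 1/4$, so that
\begin{align*}
	\abs{g'(r)}
	= \tfrac{1}{4} (1 + r)^{-\frac{5}{4}}
	\le \tfrac{1}{4} (1 + r)^{-\frac{1}{4}}
	= c_0 \, g(r).
\end{align*}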
\Ignore{ \begin{lemma}
The function $M$ in \cref{D:GrowthBound} is convex with $M(0) = 0$. \end{lemma} \begin{proof}
That $M(0) = 0$ is immediate. To show convexity, we work with the function,
$M(r^2)$,
calculating its derivatives in two different ways.
First, we have
\begin{align*}
(M(r^2))'
&= 2 r M'(r^2), \\
(M(r^2))''
&= 2 M'(r^2) + 4 r^2 M''(r^2).
\end{align*}
But,
\begin{align*}
M(r^2) = (1 + r H(r))^2 r^2,
\end{align*}
so also,
\begin{align*}
(M(r^2))'
&= (1 + r H(r))^2 \, 2r
+ 2 (1 + r H(r)) (H(r) + r H'(r)) r^2 \\
&= 2 r (1 + r H(r))
\pr{1 + r H(r) + r (H(r) + r H'(r))
} \\
&= 2 (r + r^2 H(r))
\pr{1 + 2 r H(r) + r^2 H'(r)
} \\
(M(r^2))''
&= 2 (1 + 2 r H(r) + r^2 H'(r))
\pr{1 + 2 r H(r) + r^2 H'(r)
} \\
&\qquad
+ 2 (r + r^2 H(r))
\pr{2 H(r) + 2 r H'(r) + 2 r H'(r) + r^2 H''(r)
} \\
&= 2 (1 + 2 r H(r) + r^2 H'(r))^2
+ 2 (r + r^2 H(r))
\pr{2 H(r) + 4 r H'(r) + r^2 H''(r)
}.
\end{align*}
But,
\begin{align*}
H'(r)
&= \frac{h(r)}{r^2}, \\
H''(r)
&= - 2 \frac{h(r)}{r^2} + \frac{h'(r)}{r^2}
\end{align*}
so
\begin{align*}
2 H(r) + 4 r H'(r) + r^2 H''(r)
= 2 H(r) - 8 \frac{h(r)}{r} - 2 h(r) + h'(r)
\end{align*} \end{proof} }
We will also need the properties of $\ol{\mu}$ (defined in \cref{e:omu}) given in \cref{L:omuSubAdditive}.
\begin{lemma}\label{L:omuSubAdditive}
For all $r \ge 0$ and $a \in [0,1]$,
$
a \ol{\mu}(r)
\le \ol{\mu}(a r)
$. \end{lemma} \begin{proof}
As in the proof of \cref{L:hGood},
because $\ol{\mu}$ is concave with $\ol{\mu}(0) = 0$, we have
$a \ol{\mu}(r) = a \ol{\mu}(r) + (1 - a) \ol{\mu}(0)
\le \ol{\mu}(a r + (1 - a) 0) = \ol{\mu}(a r)$
for all $r \ge 0$ and $a \in [0,1]$. \end{proof}
\begin{remark}\label{R:mu}
Similarly, $\mu$ of \cref{D:GrowthBound} (iv) satisfies
$
\mu(a r)
\le a \mu(r)
$
for all $r \ge 0$ and $a\in[0,1]$. \end{remark}
\section{Biot-Savart law and locally log-Lipschitz velocity fields}\label{S:BSLaw}
\noindent \Ignore{ \begin{lemma}\label{L:muLemma}
For all $a, x > 0$,
\begin{align*}
\ol{\mu}(a x) \le a \ol{\mu}(x) + x \ol{\mu}(a),
\end{align*}
with equality holding if and only if
$a, x \le e^{-1}$. \end{lemma} \begin{proof}
We have
\begin{align*}
\ol{\mu}(a x)
&= - a x \log(a x) = - a x (\log a + \log x)
= - x a \log a - a x \log x.
\end{align*}
But for all $y > 0$, $-y \log y \le \mu(y)$ so, in fact,
\begin{align*}
\ol{\mu}(a x)
\le x \ol{\mu}(a) + a \ol{\mu}(x)
\end{align*}
as long as $ax \le e^{-1}$.
If $ax > e^{-1}$, so that $\ol{\mu}(ax) = e^{-1}$, then we treat cases.
\Case{1}{$x < e^{-1}$}
Then
\begin{align*}
a \ol{\mu}(x) + x \ol{\mu}(a)
\ge - a x \log x
\ge ax
> e^{-1}
= \ol{\mu}(ax),
\end{align*}
since $-x \log x > x$ for all $x < e^{-1}$.
\Case{2}{$a < e^{-1}$}
Holds as in Case 1 by switching the roles of $a$ and $x$.
\Case{3}{$a, x > e^{-1}$}
Then $\ol{\mu}(a) = \ol{\mu}(x) = e^{-1}$.
\Case{3a}{$ax \le e^{-1}$}
\Case{3b}{$ax > e^{-1}$}
Then $a \ol{\mu}(x) + x \ol{\mu}(a) = (a + x) e^{-1}$. Subject
to the constraint that $ax = b > e^{-1}$, the sum $a + x$
is minimized then $a = x = \sqrt{b} > e^{-1/2}$.
Thus, $(a + x) e^{-1} > 2 e^{-1/2} e^{-1} > e^{-1}
= \ol{\mu}(ax)$.
Observe that equality holds if and only if $a, x \le e^{-1}$. \end{proof}
\Ignore { \begin{lemma}
Assume that $x \le e^{-1}$. \end{lemma} \begin{proof}
In all cases, $\ol{\mu}(x) = - x \log x$.
\Case{1}{$a \le e^{-1}$}
Then $\ol{\mu}(a x) = a \ol{\mu}(x) + x \ol{\mu}(a)$ by \cref{L:muLemma}.
\Case{2}{$e^{-1} < a < (xe)^{-1}$}
Then $ax \le e^{-1}$ so
\begin{align*}
\ol{\mu}(ax)
= - ax \log(ax)
= - a x \log x - x a \log a
< a \ol{\mu}(x) + e^{-1} x.
\end{align*}
\Case{3}{$a \ge (xe)^{-1}$}
Then $ax \ge e^{-1}$ so
$
\ol{\mu}(ax)
= e^{-1}.
$ \end{proof} } }
\begin{prop}\label{P:KBounds}
Let $a_\ensuremath{\lambda}$ be as in \cref{D:Radial}.
There exists $C > 0$ such that, for all $x \in \ensuremath{\BB{R}}^2$
and all $\lambda > 0$ we have,
\begin{align}
\norm{a_\lambda(x - y) K(x - y)}_{L^1_y(\ensuremath{\BB{R}}^2)}
&\le C \lambda.
\label{e:aKBound}
\end{align}
Let $U \subseteq \ensuremath{\BB{R}}^2$ have Lebesgue measure $\abs{U}$.
Then for any $p$ in $[1, 2)$,
\begin{align}\label{e:RearrangementBounds}
\begin{split}
\smallnorm{K(x - \cdot)}_{L^p(U)}^p
\le (2 \pi (2 - p))^{p - 2}
\abs{U}^{1 - \frac{p}{2}}.
\end{split}
\end{align} \end{prop} \begin{proof}
See Propositions 3.1 and 3.2 of \cite{AKLN2015}. \end{proof}
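The bound \cref{e:aKBound} can also be verified directly: since $0 \le a_\ensuremath{\lambda} \le 1$ is supported in $\overline{B_\ensuremath{\lambda}(0)}$ and $\abs{K(z)} = (2 \pi \abs{z})^{-1}$,
\begin{align*}
	\norm{a_\ensuremath{\lambda}(x - y) K(x - y)}_{L^1_y(\ensuremath{\BB{R}}^2)}
	\le \int_{B_\ensuremath{\lambda}(0)} \frac{dz}{2 \pi \abs{z}}
	= \int_0^\ensuremath{\lambda} d\rho
	= \ensuremath{\lambda}.
\end{align*}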
\begin{prop}\label{P:gradgradaKBound}
There exists $C > 0$ such that for all $\ensuremath{\lambda} > 0$,
\begin{align*}
\abs{\ensuremath{\nabla} \ensuremath{\nabla}^\perp((1 - a_\ensuremath{\lambda}) K)(x)}
\le \frac{C}{\abs{x}^3}
\CharFunc_{B_{\ensuremath{\lambda}/2}(0)^C}.
\end{align*} \end{prop} \begin{proof}
Using $\ensuremath{\nabla} \ensuremath{\nabla} (fg) = f \ensuremath{\nabla} \ensuremath{\nabla} g
+ g \ensuremath{\nabla} \ensuremath{\nabla} f + 2 \ensuremath{\nabla} f \otimes \ensuremath{\nabla} g$, we have
\begin{align*}
\ensuremath{\nabla} \ensuremath{\nabla}^\perp((1 - a_\ensuremath{\lambda}) K)
= (1 - a_\ensuremath{\lambda}) \ensuremath{\nabla} \ensuremath{\nabla} K
+ K \ensuremath{\nabla} \ensuremath{\nabla} a_\ensuremath{\lambda}
- 2 \ensuremath{\nabla} a_\ensuremath{\lambda} \otimes \ensuremath{\nabla} K.
\end{align*}
But, $\abs{K(x)} \le (2 \pi)^{-1} \abs{x}^{-1}$,
$\abs{\ensuremath{\nabla} K(x)} \le (2 \pi)^{-1} \abs{x}^{-2}$,
$\abs{\ensuremath{\nabla} \ensuremath{\nabla} K(x)} \le (4 \pi)^{-1}
\abs{x}^{-3}$, and
\begin{align*}
\abs{\ensuremath{\nabla} a_\ensuremath{\lambda}(x)}
&= \frac{1}{\ensuremath{\lambda}} \abs{\ensuremath{\nabla} a(\ensuremath{\lambda}^{-1} x)}
\le \frac{C}{\ensuremath{\lambda}}
\CharFunc_{B_{\ensuremath{\lambda}}(0) \setminus
B_{\ensuremath{\lambda}/2}(0)}(\ensuremath{\lambda}^{-1} x)
\le \frac{C}{\abs{x}}
\CharFunc_{B_{\ensuremath{\lambda}/2}(0)^C}, \\
\abs{\ensuremath{\nabla} \ensuremath{\nabla} a_\ensuremath{\lambda}(x)}
&= \frac{1}{\ensuremath{\lambda}^2}
\abs{\ensuremath{\nabla} \ensuremath{\nabla} a(\ensuremath{\lambda}^{-1} x)}
\le \frac{C}{\ensuremath{\lambda}^2}
\CharFunc_{B_{\ensuremath{\lambda}}(0) \setminus
B_{\ensuremath{\lambda}/2}(0)}(\ensuremath{\lambda}^{-1} x)
\le \frac{C}{\abs{x}^2}
\CharFunc_{B_{\ensuremath{\lambda}/2}(0)^C},
\end{align*}
from which the result follows. \end{proof}
\cref{P:hlogBound} is a refinement of Proposition 3.3 of \cite{AKLN2015} that better accounts for the effect of the measure of $U$.
\begin{prop}\label{P:hlogBound}
Let $X_1$ and $X_2$ be measure-preserving homeomorphisms of $\ensuremath{\BB{R}}^2$.
Let $U \subset \ensuremath{\BB{R}}^2$ have finite measure and
assume
that $\delta := \norm{X_1 - X_2}_{L^\ensuremath{\infty}(U)} < \ensuremath{\infty}$. Then
for any $x \in \ensuremath{\BB{R}}^2$,
\begin{align}\label{e:KX1X2Diff}
\begin{split}
\smallnorm{K(x - X_1(z)) - K(x - X_2(z))}_{L^1_z(U)}
&\le C R \ol{\mu}(\delta/R)
\end{split}
\end{align}
where $R = (2 \pi)^{-1/2}\abs{U}^{1/2}$. \end{prop} \begin{proof}
As in the proof of Proposition 3.3 of \cite{AKLN2015}, we have
\begin{align*}
\norm{K(x-X_1(z)) - K(x-X_2(z))}_{L^1_z(U)}
\le Cp R^{\frac{1}{p}} \delta^{1 - \frac{1}{p}}.
\end{align*}
In \cite{AKLN2015}, $R^{\frac{1}{p}}$ was bounded above by $\max \set{1, R}$,
which gave $p = - \log \delta$ as the minimizer of the norm
as long as $\delta < e^{-1}$. Keeping
the factor of $R^{\frac{1}{p}}$ we see that the minimum occurs when
$p = - \log(\delta/R)$ as long as $\delta \le e^{-1} R$, the minimum value being
\begin{align*}
- C \delta &\log(\delta/R)
R^{-\frac{1}{\log(\delta/R)}}
\delta^{\frac{1}{\log(\delta/R)}}
= - C \delta \log(\delta/R)
e^{-\frac{\log R}{\log(\delta/R)}}
e^{\frac{\log \delta}{\log(\delta/R)}}
= - C e^{-1} \delta \log(\delta/R) \\
&= C R \ol{\mu}(\delta/R).
\end{align*}
This gives the bound for $\delta \le e^{-1} R$;
the $\delta > e^{-1} R$ bound follows immediately from
\cref{e:RearrangementBounds} with $p = 1$. \end{proof}
In \cref{P:Morrey}, we establish a bound on the modulus of continuity\xspace of $u \in S_h$. In Lemma 2.8 of \cite{ElgindiJeong2016}, the authors obtain the same bound as in \cref{P:Morrey} for $h(x) = 1 + \abs{x}$, but under the assumption that the velocity field can be obtained from the vorticity via a symmetrized Biot-Savart law (which they show applies to $m$-fold symmetric vorticities for $m \ge 3$, but which does not apply for our unbounded velocities).
\begin{prop}\label{P:Morrey}
Let $h$ be a pre-growth bound\xspace.
Then for all $x, y \in \ensuremath{\BB{R}}^2$ such that
$\abs{y} \le C(1 + \abs{x})$ for some constant $C > 0$, we have,
for all $u \in S_h$,
\begin{align*}
\abs{u(x + y) - u(x)}
\le
C \norm{u}_{S_h } h(x) \ol{\mu} \pr{\frac{\abs{y}}{h(x)}}.
\end{align*}
If $h \equiv C$, we need no restriction on $\abs{y}$. \end{prop} \begin{proof} Fix $x \in \ensuremath{\BB{R}}^2$ and let $\psi$ be the stream function for $u$ on $\ensuremath{\BB{R}}^2$ chosen so that $\psi(x) = 0$. Let $\phi = a_2$, so that $\supp \phi \subseteq \overline{B_{2}(0)}$ with $\phi \equiv 1$ on $\overline{B_{1}(0)}$ and let $\phi_{x, R}(y) := \phi(R^{-1} (y - x))$ for any $R > 0$. Let $\overline{u} = \ensuremath{\nabla}^\perp(\phi_{x, R} \psi)$ and let $\overline{\omega} = \curl \overline{u}$.
Applying Morrey's inequality gives, for any $y$ with $\abs{y} \le R$, and any $p > 2$, \begin{align}\label{e:uuMorrey}
\abs{u(x + y) - u(x)}
= \abs{\overline{u}(x + y) - \overline{u}(x)}
\le C \norm{\ensuremath{\nabla} \overline{u}}_{L^p(\ensuremath{\BB{R}}^2)}
\abs{y}^{1 - \frac{2}{p}}. \end{align} Because $\overline{\omega}$ is compactly supported, $\overline{u} = K * \overline{\omega}$. Thus, we can apply the Calderon-Zygmund inequality to obtain \begin{align*}
&\abs{u(x + y) - u(x)}
\le C \inf_{p > 2} \set{p \norm{\overline{\omega}}_{L^p(\ensuremath{\BB{R}}^2)} \abs{y}^{1 - \frac{2}{p}}} \\
&\qquad
= C \inf_{p > 2} \set{p \norm{\overline{\omega}}_{L^p(B_{2 R}(x))} \abs{y}^{1 - \frac{2}{p}}}
\le C \norm{\overline{\omega}}_{L^\ensuremath{\infty}(\ensuremath{\BB{R}}^2)}
\inf_{p > 2} \set{R^{2/p} p \abs{y}^{1 - \frac{2}{p}}} \\
&\qquad
= C \abs{y} \norm{\overline{\omega}}_{L^\ensuremath{\infty}(\ensuremath{\BB{R}}^2)}
\inf_{p > 2} \set{p \abs{R^{-1} y}^{- \frac{2}{p}}}. \end{align*} When $R^{-1} \abs{y} \le e^{-1}$ (meaning also that $\abs{y} \le R$, as required), the infimum occurs at $p = - 2 \log(\abs{R^{-1} y})$ and we have \begin{align*}
&\abs{u(x + y) - u(x)}
\le - C \norm{\overline{\omega}}_{L^\ensuremath{\infty}(\ensuremath{\BB{R}}^2)} \abs{y} \log \abs{R^{-1} y}. \end{align*}
Having minimized over $p$ for a fixed $R$, we must now choose $R$. First observe that $\overline{\omega} = \Delta (\phi_{x, R} \psi) = \phi_{x, R} \Delta \psi + \Delta \phi_{x, R} \psi + 2 \ensuremath{\nabla} \phi_{x, R} \cdot \ensuremath{\nabla} \psi = \phi_{x, R} \omega + \Delta \phi_{x, R} \psi + 2 \ensuremath{\nabla}^{\perp} \phi_{x, R} \cdot u$. Also, for all $z \in B_{2R}(x)$ \begin{align*}
\abs{\psi(z)}
\le \int_{\abs{x}}^{\abs{x} + 2R} \norm{g u}_{L^\ensuremath{\infty}} h(r) \, dr
\le \norm{g u}_{L^\ensuremath{\infty}} R h(\abs{x} + 2R), \end{align*} where $g := 1/h$. Hence, \begin{align}\label{e:olomega}
\begin{split}
\norm{\overline{\omega}}_{L^\ensuremath{\infty}(\ensuremath{\BB{R}}^2)}
&\le \norm{\phi_{x, R}}_{L^\ensuremath{\infty}} \smallnorm{\omega}_{L^\ensuremath{\infty}(B_{2 R}(x))}
+ 2 \smallnorm{\ensuremath{\nabla}^\perp \phi_{x, R}}_{L^\ensuremath{\infty}}
\smallnorm{u}_{L^\ensuremath{\infty}(B_{2 R}(x))} \\
&\qquad
+ \norm{g u}_{L^\ensuremath{\infty}} h(\abs{x} + 2R)
R \smallnorm{\Delta \phi_{x, R}}_{L^\ensuremath{\infty}(B_{2 R}(x))} \\
&\le \smallnorm{\omega}_{L^\ensuremath{\infty}(\ensuremath{\BB{R}}^2)}
+ C R^{-1} h(\abs{x} + 2 R)
\smallnorm{gu}_{L^\ensuremath{\infty}(\ensuremath{\BB{R}}^2)}
\end{split} \end{align} so \begin{align*}
&\abs{u(x + y) - u(x)}
\le - C \brac{\smallnorm{\omega}_{L^\ensuremath{\infty}(\ensuremath{\BB{R}}^2)}
+ R^{-1} h(\abs{x} + 2 R) \smallnorm{gu}_{L^\ensuremath{\infty}(\ensuremath{\BB{R}}^2)}}
\abs{y} \log \abs{R^{-1} y}. \end{align*}
Now choose $R = h(x)$. Then \begin{align*}
R^{-1} h&(\abs{x} + 2 R)
= g(x) h(\abs{x} + 2h(x))
\le g(x) \pr{h(\abs{x}) + h(2h(x))} \\
&\le C(1 + C g(x) h(h(x)))
\le C, \end{align*} where we used the subadditivity of $h$ (see \cref{L:hGood}) and \cref{L:preGB}. Hence, we have, for $\abs{y} \le e^{-1} h(x)$, \begin{align*}
&\abs{u(x + y) - u(x)}
\le - C \norm{u}_{S_h}
\abs{y} (\log \abs{y} + \log g(x)) \\
&\qquad
= C \norm{u}_{S_h} \abs{y} \log \pr{\frac{h(x)}{\abs{y}}}
= C \norm{u}_{S_h} h(x) \ol{\mu} \pr{\frac{\abs{y}}{h(x)}}, \end{align*} the last equality holding as long as $\abs{y} \le h(x) e^{-1} = e^{-1} R \le R$.
If $\abs{y} \ge e^{-1} R$, then $\ol{\mu} \pr{\frac{\abs{y}}{h(x)}} = e^{-1}$, and \begin{align*}
\abs{u(x + y) - u(x)}
&\le C \norm{u}_{S_h} (h(x) + h(x + y))
\le C \norm{u}_{S_h} h(x) \\
&= C \norm{u}_{S_h} h(x) \ol{\mu} \pr{\frac{\abs{y}}{h(x)}}, \end{align*} where we applied \cref{L:preGB}, using $\abs{y} \le C(1 + \abs{x})$. Note that if $h \equiv C$, however, we need no restriction on $\abs{y}$ to reach this conclusion, since $h(x) = h(x + y) = C$. \end{proof}
\Ignore{ \begin{remark}\label{R:LL}
In \cref{S:ContDep,S:LP} we will make much use of $S_1$.
As we can see from \cref{P:Morrey}, any $u \in S_1$
also lies in the space $LL(\ensuremath{\BB{R}}^2)$ of log-Lipschitz
functions---those functions having norm
\begin{align*}
\norm{u}_{LL}
:= \norm{u}_{L^\ensuremath{\infty}}
+ \sup_{\substack{x, y \in \ensuremath{\BB{R}}^2 \\ x \ne y}}
\frac{\abs{u(x) - u(y)}}{\ol{\mu}(\abs{x - y})}.
\end{align*} \end{remark} }
In proving uniqueness in \cref{S:Uniqueness}, we will need to bound the term in the Serfati identity \cref{e:SerfatiId} coming from a convolution of the difference between two vorticities. Since the vorticities have no assumed regularity, we will need to rearrange the convolution so as to use an estimate on the Biot-Savart kernel that involves the difference of the flow maps, as in \cref{P:olgKBound}. This proposition is a refinement of Proposition 6.2 of \cite{AKLN2015} that better accounts for the effect of the parameter $\ensuremath{\lambda}$ in the cutoff function $a_\ensuremath{\lambda}$. Note that although we assume the solutions lie in some $S_h$ space, $h$ does not appear directly in the estimates; rather, it appears indirectly via the value of $\delta(t)$, as one can see in the application of the proposition.
\begin{prop}\label{P:olgKBound}
Let $X_1$ and $X_2$ be measure-preserving homeomorphisms of $\ensuremath{\BB{R}}^2$
and let $\omega^0 \in L^\ensuremath{\infty}(\ensuremath{\BB{R}}^2)$.
Fix $x \in \ensuremath{\BB{R}}^2$ and $\ensuremath{\lambda} > 0$.
Let $V = \supp a_\ensuremath{\lambda}(X_1(s, x) - X_1(s, \cdot))
\cup \supp a_\ensuremath{\lambda}(X_1(s, x) - X_2(s, \cdot))$ and assume that
\begin{align}\label{e:deltaVDef}
\delta(t)
&:= \norm{X_1(t, \cdot) - X_2(t, \cdot)}_{L^\ensuremath{\infty}(V)}
< \ensuremath{\infty}.
\end{align}
Then we have
\begin{align*}
\abs{\int (a_\ensuremath{\lambda} K(X_1(s, x) - X_1(s, y))
- a_\ensuremath{\lambda} K(X_1(s, x) - X_2(s, y))) \omega^0(y) \, dy}
\le C \smallnorm{\omega^0}_{L^\ensuremath{\infty}}
\ensuremath{\lambda} \ol{\mu}(\delta(t)/\ensuremath{\lambda}).
\end{align*}
The constant, $C$, depends only on the Lipschitz constant of $a$. \end{prop} \begin{proof}
We have,
\begin{align*}
\int &(a_\ensuremath{\lambda} K(X_1(s, x) - X_1(s, y))
- a_\ensuremath{\lambda} K(X_1(s, x) - X_2(s, y))) \omega^0(y) \, dy
= I_1 + I_2,
\end{align*}
where
\begin{align*}
I_1
&:= \int a_\ensuremath{\lambda}(X_1(s, x) - X_1(s, y))
\pr{K(X_1(s, x) - X_1(s, y)) - K(X_1(s, x) - X_2(s, y))}
\omega^0(y) \, dy, \\
I_2
&:= \int \pr{a_\ensuremath{\lambda} (X_1(s, x) - X_1(s, y))
- a_\ensuremath{\lambda}(X_1(s, x) - X_2(s, y))}
K(X_1(s, x) - X_2(s, y)) \omega^0(y) \, dy.
\end{align*}
To bound $I_1$, let $U = \supp a_\ensuremath{\lambda}(X_1(s, x) - X_1(s, \cdot)) \subseteq V$,
which we note
has measure at most $4 \pi \ensuremath{\lambda}^2$, independently of $x$, since $X_1(s, \cdot)$ is measure preserving. Then
\begin{align*}
\abs{I_1}
&\le \norm{K(X_1(s, x) - X_1(s, y)) - K(X_1(s, x) - X_2(s, y))}
_{L^1_y(U)}
\smallnorm{\omega^0}_{L^\ensuremath{\infty}(\ensuremath{\BB{R}}^2)} \\
&\le C \ensuremath{\lambda} \smallnorm{\omega^0}_{L^\ensuremath{\infty}(\ensuremath{\BB{R}}^2)} \ol{\mu}(\delta(t)/\ensuremath{\lambda}).
\end{align*}
Here, we applied \cref{P:hlogBound} at the point $X_1(s, x)$.
For $I_2$,
we have,
\begin{align*}
\abs{I_2}
&\le \int \abs{\pr{a_\ensuremath{\lambda} (X_1(s, x) - X_1(s, y))
- a_\ensuremath{\lambda}(X_1(s, x) - X_2(s, y))}
K(X_1(s, x) - X_2(s, y)) \omega^0(y)} \, dy \\
&\le \frac{C}{\ensuremath{\lambda}}
\int_V \abs{X_1(s, y) - X_2(s, y)}
\abs{K(X_1(s, x) - X_2(s, y))} \smallabs{\omega^0(y)} \, dy \\
&\le \frac{C}{\ensuremath{\lambda}} \smallnorm{\omega^0}_{L^\ensuremath{\infty}} \delta(t)
\int_V
\abs{K(X_1(s, x) - X_2(s, y))}\, dy
\le C \smallnorm{\omega^0}_{L^\ensuremath{\infty}} \delta(t).
\end{align*}
Here, we used \cref{e:RearrangementBounds} with $p = 1$
and that the Lipschitz constant
of $a_\ensuremath{\lambda}$ is $C \ensuremath{\lambda}^{-1}$. On the other hand, we also have
\begin{align*}
\abs{I_2}
&\le 2 \smallnorm{\omega^0}_{L^\ensuremath{\infty}} \int_V
\abs{K(X_1(s, x) - X_2(s, y))} \, dy
\le C \ensuremath{\lambda} \smallnorm{\omega^0}_{L^\ensuremath{\infty}},
\end{align*}
again using \cref{e:RearrangementBounds}.
The result then follows from observing that
$\delta(t) \le \ensuremath{\lambda} \ol{\mu}(\delta(t)/\ensuremath{\lambda})$ for $\delta(t) \le \ensuremath{\lambda} e^{-1}$
and $\ol{\mu}(\delta(t)/ \ensuremath{\lambda}) = e^{-1}$ for $\delta(t) \ge \ensuremath{\lambda} e^{-1}$. \end{proof}
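For the reader's convenience, we note the elementary inequality used in the final step of the proof: if $0 < \delta(t) \le \ensuremath{\lambda} e^{-1}$ then
\begin{align*}
\ensuremath{\lambda} \ol{\mu} \pr{\frac{\delta(t)}{\ensuremath{\lambda}}}
= \delta(t) \log \pr{\frac{\ensuremath{\lambda}}{\delta(t)}}
\ge \delta(t),
\end{align*}
since $\ensuremath{\lambda}/\delta(t) \ge e$.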
\section{Flow map bounds}\label{S:FlowMap}
\noindent In this section we develop bounds related to the flow map for solutions to the Euler equations in $S_h$ on $[0, T]$. First, though, is the matter of existence and uniqueness:
\begin{lemma}\label{L:FlowWellPosed}
Let $h$ be a pre-growth bound\xspace (which we note includes $h(x) = C (1 + \abs{x})$)
and assume that $u \in L^\ensuremath{\infty}(0, T; S_h)$. Then there exists a unique
flow map, $X$, for $u$; that is, a function
$X \colon [0, T] \times \ensuremath{\BB{R}}^2 \to \ensuremath{\BB{R}}^2$ for which
\begin{align*}
X(t, x)
= x + \int_0^t u(s, X(s, x)) \, ds
\end{align*}
for all $(t, x) \in [0, T] \times \ensuremath{\BB{R}}^2$. \end{lemma} \begin{proof}
Because $u$ is locally log-Lipschitz by \cref{P:Morrey},
this is (essentially) classical. \end{proof}
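For the reader's convenience, here is a sketch of one standard argument, assuming for simplicity that $u$ is continuous on $[0, T] \times \ensuremath{\BB{R}}^2$ (a Carath\'eodory-type argument handles the general case). Local existence of solutions to the ODE defining $X(\cdot, x)$ follows from Peano's theorem, and uniqueness from Osgood's lemma, since on any bounded set $u(s, \cdot)$ has the modulus of continuity\xspace furnished by \cref{P:Morrey}, which satisfies the Osgood condition. Solutions also cannot blow up on $[0, T]$: because a nondecreasing, subadditive $h$ satisfies $h(z) \le h(1)(1 + \abs{z})$, we have
\begin{align*}
\abs{X(t, x)}
\le \abs{x} + \norm{u}_{L^\ensuremath{\infty}(0, T; S_h)}
\int_0^t h(X(s, x)) \, ds
\le \abs{x} + C \int_0^t \pr{1 + \abs{X(s, x)}} \, ds,
\end{align*}
so that $1 + \abs{X(t, x)} \le (1 + \abs{x}) e^{C t}$ by Gronwall's inequality.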
\begin{lemma}\label{L:FlowBounds}
Let $h$ be a pre-growth bound\xspace.
Assume that $u_1, u_2 \in L^\ensuremath{\infty}(0, T; S_h)$.
Let $F_t$ be the function defined in \cref{e:FDef}.
We have,
\begin{align}\label{e:X1X2xBoundTime}
\begin{array}{ll}
\displaystyle \frac{\abs{X_1(t, x) - X_2(t, x)}}{F_t(x)}
\le C_0 t,
&\displaystyle \frac{\abs{X_j(t, x) - x}}{F_t(x)}
\le C_0 t,
\end{array}
\end{align}
where $C_0 = \norm{u_1}_{L^\ensuremath{\infty}(0, T; S_h)} + \norm{u_2}_{L^\ensuremath{\infty}(0, T; S_h)}$. \end{lemma} \begin{proof}
For $j = 1, 2$,
$\abs{u_j(t, x)} \le \norm{u_j(t)}_{S_h} \abs{h(x)}$,
so
\begin{align*}
\abs{X_j(t, x)}
&\le \abs{x} + \int_0^t \abs{u_j(s, X_j(s, x))} \,ds
\le \abs{x} + C_0 \int_0^t h(X_j(s,x)) \,ds.
\end{align*}
Hence by Osgood's inequality,
$\abs{X_j(t, x)} \le \Gamma_t(x)$,
where $\Gamma_t$ is defined in \cref{e:GammaDef}.
We also have
\begin{align*}
\abs{X_j(t, x) - x}
&\le \int_0^t \abs{u_j(s, X_j(s, x))} \,ds
\le C_0 \int_0^t h(X_j(s, x)) \,ds
\le C_0 t F_t(x).
\end{align*}
Similarly,
\begin{align*}
\abs{X_1(t, x) - X_2(t, x)}
&\le \int_0^t \pr{\abs{u_1(s, X_1(s, x))}
+ \abs{u_2(s, X_2(s, x))}} \, ds
\le C_0 t F_t(x).
\end{align*}
These bounds yield the result. \end{proof}
\cref{L:FProperties,L:FlowBounds} together show that over time, the flow transports a ``particle'' of fluid at a distance $r$ from the origin by no more than a constant times $h(r)$. This will allow us to control the growth at infinity of the velocity field over time so that it remains in $S_h$ (for at least a finite time), as we shall see in the next section. As the fluid evolves over time, however, the flow can move two points farther and farther apart; that is, its spatial modulus of continuity\xspace can worsen, though in a controlled way, as we show in \cref{L:FlowUpperSpatialMOC}.
(A bound similar to that in \cref{L:FlowUpperSpatialMOC} holds for any growth bound, but we restrict ourselves to the special case of bounded vorticity, bounded velocity fields, for that is all we will need.)
\begin{lemma}\label{L:FlowUpperSpatialMOC}
Let $u \in L^\ensuremath{\infty}(0, T; S_1)$
and let $X$ be the unique flow map for $u$.
Let $C_0 = \norm{u}_{L^\ensuremath{\infty}(0, T; S_1)}$.
For any $t \in [0, T]$ define the function,
\begin{align*}
\chi_t(r)
&:=
\left\{
\begin{array}{ll}
r^{e^{-C_0 t}}
& \text{when } r \le 1 \\
r
& \text{when } r > 1
\end{array}
\right\}
\le r + r^{e^{-C_0 t}}.
\end{align*}
Then for all $x, y \in \ensuremath{\BB{R}}^2$,
\begin{align*}
\abs{X(t, x) - X(t, y)}
&\le C(T) \chi_t(\abs{x - y}).
\end{align*}
The same bound holds for $X^{-1}$.
\end{lemma} \begin{proof}
The bounds,
\begin{align*}
\abs{X(t, x) - X(t, y)}, \abs{X^{-1}(t, x) - X^{-1}(t, y)}
\le
C(T) \abs{x - y}^{e^{-C_0 t}}
\end{align*}
are established in Lemma 8.2 of \cite{MB2002}. We note, however,
that the proof there applies only for sufficiently small $\abs{x - y}$.
A slight refinement of the proof produces the bounds as we have
stated them.
\Ignore{
Let $x, y \in \ensuremath{\BB{R}}^2$.
Then
\begin{align*}
\abs{X(t, x) - X(t, y)}
\le \abs{x - y} + \int_0^t \abs{u(s, X(s, x)) - u(s, X(s, y))} \, ds.
\end{align*}
By
\cref{L:FlowBounds}, so by \cref{P:Morrey},
\begin{align}\label{e:uuDiffBound}
\begin{split}
\abs{u(s, X(s, x)) - u(s, X(s, y))}
&\le C_0 \ol{\mu} \pr{\abs{X(s, x) - X(s, y)}}.
\end{split}
\end{align}
Thus, letting
\begin{align}\label{e:LUpper}
L(s)
:= \abs{X(s, x) - X(s, y)},
\end{align}
we have
\begin{align*}
L(t)
\le \abs{x - y}
+ C_0 \int_0^t
\ol{\mu} \pr{L(s)} \, ds.
\end{align*}
It follows from Osgood's lemma that
\begin{align}\label{e:xyLBound}
\int_{\abs{x - y}}^{L(t)}
\frac{dr}{\ol{\mu} \pr{r}}
\le C_0 t.
\end{align}
We see that this gives $L(t) \to 0$ as $y \to x$, so
for sufficiently small $\abs{x - y}$ that $L(t) \le e^{-1}$
we have, explicitly,
\begin{align*}
-\int_{\abs{x - y}}^{L(t)}
\frac{dr}{r \log r}
\le C_0 h(x) t.
\end{align*}
Integrating and rearranging gives
\begin{align}\label{e:LAbove}
L(t)
\le \abs{x - y}^{e^{-C_0 t}}.
\end{align}
This holds as long as $L(t) \le e^{-1} h(x)$, which holds if
\begin{align*}
\abs{x - y}
\le e^{-e^{C_0 t}}
\le e^{-1}.
\end{align*}
On the other hand, if $\abs{x - y} > e^{-1}$ then $\ol{\mu}(r) = e^{-1}$
in the full range of the integrand in \cref{e:xyLBound}
(unless $L(t) \le \abs{x - y}$, in which case there is nothing to prove),
so that
\begin{align*}
\int_{\abs{x - y}}^{L(t)} dr
\le C_0 e^{-1} t,
\end{align*}
giving
\begin{align*}
L(t)
\le \abs{x - y} + C_0 e^{-1} t
\le \abs{x - y} + C_0 \abs{x - y} t
\le C(T) \abs{x - y},
\end{align*}
the last inequality following since $e^{-1} < \abs{x - y}$.
The third and final possibility occurs in the narrow range in which
\begin{align*}
e^{-e^{C_0 t}}
\le \abs{x - y}
\le e^{-1}.
\end{align*}
In this range, either of the two earlier upper bounds become
valid simply by increasing the constants,
which completes the proof of the bound on
$\abs{X(t, x) - X(t, y)}$.
The same bounds hold for $X^{-1}$, even though the flow is not autonomous,
since the spatial modulus of continuity\xspace of $u$
is bounded by \cref{P:Morrey} uniformly over $[0, T]$.
(See, for instance, the approach in the proof of Lemma 8.2 of \cite{MB2002}.)
} \end{proof}
The following simple bound will be useful later in the proof of \cref{L:ForJ1Bound}: \begin{align}\label{e:chitBound}
\chi_t(a r)
\le a^{e^{-C_0 t}} \chi_t(r)
\text{ for all } a \in [0, 1], r > 0. \end{align}
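To verify \cref{e:chitBound} (an elementary case check, included for the reader's convenience), write $\beta := e^{-C_0 t} \in (0, 1]$ and note that $r^\beta \le r$ for $r > 1$ and $a \le a^\beta$ for $a \in [0, 1]$. Then
\begin{align*}
\chi_t(a r)
=
\begin{cases}
(a r)^{\beta} = a^{\beta} \chi_t(r)
& \text{if } r \le 1, \\
(a r)^{\beta} \le a^{\beta} r = a^{\beta} \chi_t(r)
& \text{if } a r \le 1 < r, \\
a r \le a^{\beta} r = a^{\beta} \chi_t(r)
& \text{if } a r > 1.
\end{cases}
\end{align*}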
\Ignore{ \begin{lemma}\label{L:FlowUpperSpatialMOC}
Let $u \in L^\ensuremath{\infty}(0, T; S_h)$ for the growth bound\xspace $h$
and let $X$ be the unique flow map for $u$.
Then
for all $x, y \in \ensuremath{\BB{R}}^2$ with $\abs{y} \le C(1 + \abs{x})$
for an arbitrary fixed constant $C > 0$,
we have
\begin{align*}
\abs{X(t, x) - X(t, y)}
&\le
\begin{cases}
C(T) h(x) \abs{x - y}^{e^{-C_0 t}}
& \text{when } \abs{x - y} \le e^{-1} h(x), \\
C(T) \abs{x - y}
& \text{when } \abs{x - y} > e^{-1} h(x),
\end{cases}
\end{align*}
where $C_0 = \smallnorm{u}_{L^\ensuremath{\infty}(0, T; S_h)}$.
When $u \in L^\ensuremath{\infty}(0, T; S_1)$,
we have, for all $x, y \in \ensuremath{\BB{R}}^2$,
\begin{align*}
\abs{X(t, x) - X(t, y)}
&\le
\begin{cases}
C(T) \abs{x - y}^{e^{-C_0 t}}
& \text{when } \abs{x - y} \le 1, \\
C(T) \abs{x - y}
& \text{when } \abs{x - y} > 1.
\end{cases}
\end{align*}
The same bounds hold for $X^{-1}$.
\end{lemma} \begin{proof}
\Ignore{
Using \cref{L:FlowBounds},
\begin{align*}
\abs{X(t, x) - X(t, y)}
&\le \abs{X(t, x) - x} + \abs{X(t, y) - y} + \abs{x - y} \\
&\le Ct (F_t(x) + F_t(y)) + \abs{x - y}
\le C(T) t + \abs{x - y},
\end{align*}
since $F_t(x) \le C(T) h(x)$ by \cref{L:FProperties}
}
Let $x, y \in \ensuremath{\BB{R}}^2$ with $\abs{y} \le C(1 + \abs{x})$.
Then
\begin{align*}
\abs{X(t, x) - X(t, y)}
\le \abs{x - y} + \int_0^t \abs{u(s, X(s, x)) - u(s, X(s, y))} \, ds.
\end{align*}
But $\abs{X(s, x)}, \abs{X(s, y)}, \abs{X(s, x) - X(s, y)} \le C(T) (1 + \abs{x})$ by
\cref{L:FlowBounds}, so by \cref{P:Morrey},
\begin{align}\label{e:uuDiffBound}
\begin{split}
\abs{u(s, X(s, x)) - u(s, X(s, y))}
&\le C_0
h(x) \ol{\mu} \pr{\frac{\abs{X(s, x) - X(s, y)}}{h(x)}}.
\end{split}
\end{align}
\Ignore{
In the second inequality, we used
$\norm{u(t)}_{L^\ensuremath{\infty}}
\le \smallnorm{u^0}_{L^\ensuremath{\infty}}e^{Ct}$
(as proved in \cref{S:Existence} for the solutions we constructed,
which we know are unique)
and $\norm{\omega(t)}_{L^\ensuremath{\infty}}
= \smallnorm{\omega^0}_{L^\ensuremath{\infty}}$, so that
$
\norm{u(t)}_{S_h}
\le \smallnorm{u^0}_{S_h}e^{Ct}.
$
}
Thus, letting
\begin{align}\label{e:LUpper}
L(s)
:= \abs{X(s, x) - X(s, y)},
\end{align}
we have
\begin{align*}
L(t)
\le \abs{x - y}
+ C_0 h(x) \int_0^t
\ol{\mu} \pr{\frac{L(s)}{h(x)}} \, ds.
\end{align*}
It follows from Osgood's lemma that
\begin{align}\label{e:xyLBound}
\int_{\abs{x - y}}^{L(t)}
\frac{dr}{\ol{\mu} \pr{\frac{r}{h(x)}}}
\le C_0 h(x) t.
\end{align}
We see that this gives $L(t) \to 0$ as $y \to x$, so
for sufficiently small $\abs{x - y}$ that $L(t) \le e^{-1} h(x)$
we have, explicitly,
\begin{align*}
-\int_{\abs{x - y}}^{L(t)}
\frac{dr}{\frac{r}{h(x)} \log {\frac{r}{h(x)}}}
\le C_0 h(x) t.
\end{align*}
We rewrite this as
\begin{align*}
-\int_{\abs{x - y}}^{L(t)}
\frac{d (\frac{r}{h(x)})}{\frac{r}{h(x)} \log {\frac{r}{h(x)}}}
\le C_0 t
\end{align*}
so that
\begin{align*}
-\int_{\abs{x - y}/h(x)}^{L(t)/h(x)}
\frac{d w}{w \log w}
\le C_0 t.
\end{align*}
Integrating gives
\begin{align*}
- \log\log s\big\vert_{\abs{x - y}/h(x)}^{L(t)/h(x)} \le C_0 t.
\end{align*}
Hence,
\begin{align*}
\frac{L(t)}{h(x)}
\le \pr{\frac{\abs{x - y}}{h(x)}}^{e^{-C_0 t}},
\end{align*}
or,
\begin{align}\label{e:LAbove}
L(t)
\le h(x)^{1 - e^{-C_0 t}} \abs{x - y}^{e^{-C_0 t}}
\le C h(x) \abs{x - y}^{e^{-C_0 t}}.
\end{align}
This holds as long as $L(t) \le e^{-1} h(x)$, which holds
(using the first inequality in \cref{e:LAbove}) if
\begin{align*}
\abs{x - y}
\le e^{-e^{C_0 t}} h(x)
\le e^{-1} h(x).
\end{align*}
On the other hand, if $\abs{x - y} > h(x) e^{-1}$ then $\ol{\mu}(r/h(x)) = e^{-1}$
in the full range of the integrand in \cref{e:xyLBound}
(unless $L(t) \le \abs{x - y}$, in which case there is nothing to prove),
so that
\begin{align*}
\int_{\abs{x - y}}^{L(t)} dr
\le C_0 e^{-1} h(x) t,
\end{align*}
giving
\begin{align*}
L(t)
\le \abs{x - y} + C_0 e^{-1} h(x) t
\le \abs{x - y} + C_0 \abs{x - y} t
\le C(T) \abs{x - y},
\end{align*}
the last inequality following since $e^{-1} h(x) < \abs{x - y}$.
The third and final possibility occurs in the narrow range in which
\begin{align*}
e^{-e^{C_0 t}} h(x)
\le \abs{x - y}
\le e^{-1} h(x)
\end{align*}
(and similarly for the lower bound).
In this range, either of the two earlier upper (lower) bounds become
valid simply by increasing (decreasing) the constants,
which completes the proof of the first bound on
$\abs{X(t, x) - X(t, y)}$. The second bound follows directly from
the first.
\Ignore{
we have
\begin{align*}
-\int_{\abs{x - y}}^{e^{-1}}
\frac{dr}{\frac{r}{h(x)} \log {\frac{r}{h(x)}}}
+ e \int_{e^{-1}}^{L(t)} dr
\le C_0 h(x) t.
\end{align*}
We rewrite this in the form,
\begin{align*}
-\int_{\abs{x - y}}^{e^{-1}}
\frac{d (\frac{r}{h(x)})}{\frac{r}{h(x)} \log {\frac{r}{h(x)}}}
+ e h(x)^{-1} \int_{e^{-1}}^{L(t)} dr
\le C_0 t,
\end{align*}
and integrate as before, giving
\begin{align*}
- \log\log s\big\vert_{\abs{x - y}/h(x)}^{(e h(x))^{-1}}
+ e h(x)^{-1} \brac{L(t) - e^{-1}}
\le C_0 t.
\end{align*}
This leads to
\begin{align*}
L(t)
&\le e^{-1} h(x) \brac{C_0 t
+ \log\log s\big\vert_{\abs{x - y}/h(x)}^{(e h(x))^{-1}}}
+ e^{-1} \\
&= e^{-1} h(x) \brac{C_0 t
+ \log \log \pr{\frac{(e h(x))^{-1}}{\abs{x - y}/h(x)}}}
+ e^{-1} \\
&= e^{-1} h(x) \brac{C_0 t
+ \log \log \pr{\frac{e^{-1}}{\abs{x - y}}}}
+ e^{-1}
\end{align*}
}
\begin{comment}
We seen, then, that
\begin{align*}
L(t)
&\le
\begin{cases}
C(T) h(x) \abs{x - y}^{e^{-C_0 t}}
& \text{when } \abs{x - y} \le C(T) h(x), \\
C(T) \abs{x - y}
& \text{when } \abs{x - y} > C(T) h(x).
\end{cases}
\end{align*}
\end{comment}
The same bounds hold for $X^{-1}$, even though the flow is not autonomous,
since the spatial modulus of continuity\xspace of $u$
is bounded by \cref{P:Morrey} uniformly over $[0, T]$.
(See, for instance, the approach in the proof of Lemma 8.2 of \cite{MB2002}.) \end{proof} }
\begin{comment} \begin{lemma}\label{L:FlowLowerSpatialMOC}
Let $u \in L^\ensuremath{\infty}(0, T; S_h)$ for the growth bound\xspace $h$. Then
for all $x, y \in \ensuremath{\BB{R}}^2$ with
$\abs{x} \ge \abs{y}$,
\begin{align*}
\abs{X(t, x) - X(t, y)}
&\ge
\begin{cases}
C(T) h(x)^{1 - e^{C_0t}} \abs{x - y}^{e^{C_0 t}}
& \text{when } \abs{x - y} \le e^{-1} h(x), \\
(1/2) \abs{x - y}
& \text{when } \abs{x - y} > e^{-1} h(x),
\; t < C_0^{-1},
\end{cases}
\end{align*}
where $C_0 = \smallnorm{u^0}_{S_h}e^{Ct}$.
The same bounds hold for $X^{-1}$.
\end{lemma} \begin{proof}
We proceed as in \cref{L:FlowUpperSpatialMOC}, though now
obtaining lower bounds.
Let $x, y \in \ensuremath{\BB{R}}^2$ with $\abs{x} \ge \abs{y}$.
Then by the reverse triange inequality,
\begin{align*}
\abs{X(t, x) - X(t, y)}
&\ge \abs{\abs{x - y} - \abs{\int_0^t u(s, X(s, x)) - u(s, X(s, y))}
\, ds} \\
&\ge \abs{x - y} - \abs{\int_0^t u(s, X(s, x)) - u(s, X(s, y))
\, ds} \\
&\ge \abs{x - y} - \int_0^t \abs{u(s, X(s, x)) - u(s, X(s, y))}
\, ds.
\end{align*}
Note that these inequalities hold even if the one or both of the final
two expressions are negative.
Noting that \cref{e:uuDiffBound} still holds, and defining $L$
as in \cref{e:LUpper}, we have
\begin{align*}
L(t)
&\ge \abs{X(t, x) - X(t, y)}
\ge \abs{x - y} - \int_0^t
C_0 h(x) \ol{\mu} \pr{\frac{\abs{X(s, x) - X(s, y)}}{h(x)}}
\, ds. \\
&\ge \abs{x - y}
- C_0 h(x) \int_0^t
\ol{\mu} \pr{\frac{L(s)}{h(x)}} \, ds.
\end{align*}
Rather than applying Osgood's lemma now, we apply \cref{L:ReverseOsgood},
which inverts the roles
$L(t)$ and $\abs{x - y}$ played in the proof of
\cref{L:FlowUpperSpatialMOC}. This leads to
\begin{align*}
\frac{\abs{x - y}}{h(x)}
\le \pr{\frac{L(t)}{h(x)}}^{e^{-C_0 t}},
\end{align*}
when $\abs{x - y} \le e^{-1} h(x)$, which yields
the lower bound for this case.
On the other hand, when $L(t) > h(x) e^{-1}$
invert the roles
of $L(t)$ and $\abs{x - y}$ in the proof of
\cref{L:FlowUpperSpatialMOC}
yields
\begin{align*}
L(t)
\ge \abs{x - y} - C_0 e^{-1} h(x) t.
\end{align*}
Now, we will have $L(t) \ge \abs{x - y}$ if $\abs{x - y} > h(x) e^{-1}$.
In that case, we will have $\abs{x - y} - C_0 e^{-1} h(x) t \ge \abs{x - y}/2$
as long as
$
h(x) e^{-1}(1 - C_0t) > 1/2.
$
This yields the lower bound for this case.
\end{proof}
\begin{lemma}[Reverse Osgood's Lemma]\label{L:ReverseOsgood}
Let $L \ge 0$ be integrable on $[0, \ensuremath{\infty})$ and $\gamma \ge 0$ be locally integrable
on $[0, \ensuremath{\infty})$. Let $\mu$ be a continuous nondecreasing function with $\mu(0) = 0$ and
$\mu > 0$ on $(0, \ensuremath{\infty})$. Let $a \ge 0$ and assume that
\begin{align*}
L(t) \ge a - \int_0^t \gamma(s) \mu(L(s)) \, ds.
\end{align*}
If $a > 0$ then
\begin{align*}
\int_{L(t)}^a \frac{ds}{\mu(s)}
\le \int_0^t \gamma(s) \, ds.
\end{align*}
\end{lemma} \begin{proof}
Since
\begin{align*}
a
\le L(t) + \int_0^t \gamma(s) \mu(L(s)) \, ds,
\end{align*}
we have,
\begin{align*}
\int_{L(t)}^a \frac{ds}{\mu(s)}
&\le \int_{L(t)}^{L(t) + \int_0^t \gamma(z) \mu(L(s)) \, dz} \frac{ds}{\mu(s)} \\
&= \int_0^t \frac{\gamma(w) \mu(L(w))}{\mu(L(w) + \int_0^w \gamma(z) \mu(L(s)) \, dz}
\, dw \\
&\le \int_0^t \gamma(w) \, dw.
\end{align*}
Note that the first inequality holds whether or not $a \ge L(t)$.
In the equality, we made the change of variables,
\begin{align*}
s = L(w) + \int_0^w \gamma(z) \mu(L(s)) \, dz.
\end{align*} \end{proof} \end{comment}
\section{Existence}\label{S:Existence}
\noindent Our proof of existence differs significantly from that in \cite{AKLN2015} only in the use of the Serfati identity to obtain a bound in $L^\ensuremath{\infty}(0, T; S_h)$ on a sequence of approximating solutions and to show that the sequence is Cauchy, which is more involved than in \cite{AKLN2015}. Although velocities in $S_h$ are not log-Lipschitz in the whole plane (unless $h$ is constant), they are log-Lipschitz on any compact subset of $\ensuremath{\BB{R}}^2$. Since the majority of the proof of existence involves obtaining convergence on compact subsets, this has little effect on those portions of the proof. Therefore, we give only the details of the bound in $L^\ensuremath{\infty}(0, T; S_h)$ obtained using the Serfati identity, as this is the main modification of the existence proof. We refer the reader to \cite{AKLN2015} for the remainder of the argument.
\begin{proof}[\textbf{Proof of existence in \cref{T:Existence}}] Let $u^0 \in S_h$ and assume that $u^0$ does not vanish identically; otherwise, there is nothing to prove.
Let $(u_n^0)_{n=1}^\ensuremath{\infty}$ and $(\omega_n^0)_{n=1}^\ensuremath{\infty}$ be compactly supported approximating sequences to the initial velocity, $u^0$, and initial vorticity, $\omega^0$, obtained by cutting off the stream function and mollifying by a smooth, compactly supported mollifier. (This is as done in Proposition B.2 of \cite{AKLN2015}, which simplifies tremendously when specializing to all of $\ensuremath{\BB{R}}^2$.)
Let $u_n$ be the classical, smooth solution to the Euler equations with initial velocity $u_n^0$, and note that its vorticity is compactly supported for all time. The existence and uniqueness of such solutions follows, for instance, from \cite{McGrath1967} and references therein. (See also Chapter 4 of \cite{MB2002} or Chapter 4 of \cite{C1998}.) Finally, let $\omega_n = \curl u_n$.
As we stated above, we give only the uniform $L^\ensuremath{\infty}([0, T]; S_h)$ bound for this sequence, the rest of the proof differing little from that in \cite{AKLN2015}.
We have, \begin{align}\label{e:omegaunLInfBound}
\norm{u_n^0}_{S_h} \le C \norm{u^0}_{S_h}. \end{align}
It follows as in Proposition 4.1 of \cite{AKLN2015} that the Serfati identity \cref{e:SerfatiId} holds for the approximate solutions. It is important to note that $x$ and $t$ are fixed in this identity, so $\ensuremath{\lambda}$ can be a function both of $t$ and $x$ (though not $s$). Or, to see this more explicitly, we can write the critical convolution in the derivation of the Serfati identity in Proposition 4.1 of \cite{AKLN2015} as, \begin{align*}
((1 - a_{\ensuremath{\lambda}(t, x)}) &K^j) * \ensuremath{\partial}_s \omega(x)
= \int_{\ensuremath{\BB{R}}^2} ((1 - a_{\ensuremath{\lambda}(t, x)}(x - y))
K^j(x - y)) \ensuremath{\partial}_s \omega(y) \, dy, \end{align*} and it becomes clear that in moving derivatives from one side of the convolution to another we are in effect integrating by parts, taking derivatives always in the variable $y$.
In any case, it follows from the Serfati identity that \begin{align*}
\abs{u_n(t, x)}
\le &\abs{u_n^0(x)}
+ \abs{(a_\lambda K) *(\omega_n(t) - \omega_n^0)(x)} \\
&
+ \int_0^t \abs{\pr{\ensuremath{\nabla} \ensuremath{\nabla}^\perp \brac{(1 - a_\lambda) K}}
\mathop{* \cdot} (u_n \otimes u_n)(s, x)} \, ds. \end{align*}
The first convolution we bound using \cref{e:aKBound}, \cref{e:omegaunLInfBound}, and the conservation of the $L^\ensuremath{\infty}$ norm of vorticity ($\smallnorm{\omega_n(t)}_{L^\ensuremath{\infty}} = \smallnorm{\omega_n^0}_{L^\ensuremath{\infty}}$, since $\omega_n$ is transported by the flow of $u_n$),
as \begin{align*}
\abs{(a_\ensuremath{\lambda} K) *(\omega_n(t) - \omega_n^0)(x)}
\le C \pr{
\lambda\smallnorm{\omega_n(t)}_{L^\ensuremath{\infty}(B_\ensuremath{\lambda}(x))}
+ \lambda\smallnorm{\omega_n^0}_{L^\ensuremath{\infty}(B_\ensuremath{\lambda}(x))}}
\le C \ensuremath{\lambda}. \end{align*}
For the second convolution, we have, using \cref{P:gradgradaKBound}, \begin{align*}
\begin{split}
&\abs{\pr{\ensuremath{\nabla} \ensuremath{\nabla}^\perp
\brac{(1 - a_\ensuremath{\lambda}) K}}
\mathop{* \cdot} (u_n \otimes u_n)(s)}
\le \int_{B_{\ensuremath{\lambda}/2}(x)^C} \frac{C}{\abs{x - y}^3}
\abs{u_n(s, y)}^2 \, dy \\
&\qquad
= C \int_{B_{\ensuremath{\lambda}/2}(x)^C} \frac{h(y)^2}{\abs{x - y}^3}
\abs{\frac{u_n(s, y)}{h(y)}}^2 \, dy \\
&\qquad
\le C \norm{\frac{u_n(s)}{h}}_{L^\ensuremath{\infty}}^2
\brac{
\int_{B_{\ensuremath{\lambda}/2}(x)^C} \frac{h (x - y)^2}
{\abs{x - y}^3} \, dy
+ h(x)^2 \int_{B_{\ensuremath{\lambda}/2}(x)^C}
\frac{1}{\abs{x - y}^3} \, dy
} \\
&\qquad
= C \norm{\frac{u_n(s)}{h}}_{L^\ensuremath{\infty}}^2
\brac{H(\ensuremath{\lambda}(x)/2) + C \frac{h(x)^2}{\ensuremath{\lambda}(x)}
},
\end{split} \end{align*} where $H = H[h^2]$ is defined in \cref{e:HDef}. The second inequality
follows from the subadditivity of $h^2$ (as in \cref{R:hSubadditive}).
Hence, \begin{align*}
\abs{u_n(t, x)}
\le &\abs{u_n^0(x)}
+ C \lambda(x)
+ C \int_0^t \norm{\frac{u_n(s)}{h}}_{L^\ensuremath{\infty}}^2
\brac{
H(\ensuremath{\lambda}(x)/2) + C \frac{h(x)^2}{\ensuremath{\lambda}(x)}
} \, ds. \end{align*} Dividing both sides by $h(x)$ gives \begin{align}\label{e:KeyExistenceBound}
\begin{split}
\abs{\frac{u_n(t, x)}{h(x)}}
\le &\abs{\frac{u_n^0(x)}{h(x)}}
+ C \frac{\lambda(x)}{h(x)}
+ C \int_0^t \norm{\frac{u_n(s)}{h}}_{L^\ensuremath{\infty}}^2
\brac{
\frac{H(\ensuremath{\lambda}(x)/2)}{h(x)} + C \frac{h(x)}{\ensuremath{\lambda}(x)}
} \, ds.
\end{split} \end{align}
Now, for any fixed $t$, we can set \begin{align}\label{e:laChoice}
\ensuremath{\lambda}
= \ensuremath{\lambda}(t, x)
= 2 h(x)
\pr{\int_0^t \norm{\frac{u_n(s)}{h}}_{L^\ensuremath{\infty}}^2
\, ds}
^{\frac{1}{2}}, \end{align} which we note nearly minimizes the right-hand side of \cref{e:KeyExistenceBound}. Defining \begin{align*}
\Lambda(s)
:= \norm{\frac{u_n(s)}{h}}_{L^\ensuremath{\infty}}^2, \end{align*} this leads to \begingroup \allowdisplaybreaks \begin{align*}
\abs{\frac{u_n(t, x)}{h(x)}}
\le &\abs{\frac{u_n^0(x)}{h(x)}}
+ C \smallnorm{\omega^0}_{L^\ensuremath{\infty}}
\pr{\int_0^t \Lambda(s) \, ds}^{\frac{1}{2}} \\
&\qquad
+ C \int_0^t \Lambda(s) H \pr{h(x) \pr{\int_0^t \Lambda(r) \, dr}^{\frac{1}{2}}
}g(x) \, ds
+ C \pr{\int_0^t \Lambda(s) \, ds}^{\frac{1}{2}} \\
&\le C
+ C g(x)H \pr{h(x) \pr{\int_0^t \Lambda(s) \, ds}^{\frac{1}{2}}}
\int_0^t \Lambda(s) \, ds
+ C \pr{\int_0^t \Lambda(s) \, ds}^{\frac{1}{2}} \\
&\le C
+ C H \pr{\pr{\int_0^t \Lambda(s) \, ds}^{\frac{1}{2}}}
\int_0^t \Lambda(s) \, ds
+ C \pr{\int_0^t \Lambda(s) \, ds}^{\frac{1}{2}} \\
&\le C + C \pr{\int_0^t \Lambda(s) \, ds}^{\frac{1}{2}}
+ C f \pr{\pr{\int_0^t \Lambda(s) \, ds}^{\frac{1}{2}}}
\pr{\int_0^t \Lambda(s) \, ds}^{\frac{1}{2}} \\
&\le C
+ C \brac{1 + f \pr{\pr{\int_0^t \Lambda(s) \, ds}^{\frac{1}{2}}}}
\pr{\int_0^t \Lambda(s) \, ds}^{\frac{1}{2}}, \end{align*} \endgroup where $f(z) := z H(z)$. In the third-to-last inequality we used that $H$ is decreasing and $h(x) \ge h(0) > 0$.
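As an aside on the choice \cref{e:laChoice} (a heuristic motivation, not needed for the proof): ignoring the term involving $H$, the right-hand side of \cref{e:KeyExistenceBound} contains the two competing terms
\begin{align*}
C \frac{\ensuremath{\lambda}(x)}{h(x)}
\qquad \text{and} \qquad
C \frac{h(x)}{\ensuremath{\lambda}(x)} \int_0^t \Lambda(s) \, ds,
\end{align*}
which balance when $\ensuremath{\lambda}(x) = h(x) \pr{\int_0^t \Lambda(s) \, ds}^{\frac{1}{2}}$, each term then being of size $C \pr{\int_0^t \Lambda(s) \, ds}^{\frac{1}{2}}$; the factor $2$ in \cref{e:laChoice} changes this only by a constant.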
Observe that although this inequality was obtained by choosing $\ensuremath{\lambda} = \ensuremath{\lambda}(t, x)$ for one fixed $t$, the inequality itself holds for all $t \in [0, T]$.
Taking the supremum over $x \in \ensuremath{\BB{R}}^2$ and squaring both sides, we have \begin{align}\label{e:LBoundM}
\Lambda(t)
\le C + E \pr{\int_0^t \Lambda(s) \, ds}
\le C + \mu \pr{\int_0^t \Lambda(s) \, ds}, \end{align} where $E$, $\mu$ are as in \cref{D:GrowthBound}. Now we can apply \cref{L:EProp,L:NotOsgood} to conclude that $u_n \in L^\ensuremath{\infty}(0, T; S_h)$ with a norm bounded uniformly over $n$. \cref{L:NotOsgood} also gives global-in-time existence ($T$ arbitrarily large) when \cref{e:omuOsgoodAtInfinity} holds. \end{proof}
\begin{lemma}\label{L:NotOsgood}
Assume that $\Lambda \colon [0, \ensuremath{\infty}) \to [0, \ensuremath{\infty})$ is continuous with
\begin{align}\label{e:LBound}
\Lambda(t)
\le \Lambda_0 + \mu \pr{\int_0^t \Lambda(s) \, ds}
\end{align}
for some $\Lambda_0 \ge 0$,
where $\mu \colon [0, \ensuremath{\infty}) \to [0, \ensuremath{\infty})$ is convex.
Then for all $t \le 1$,
\begin{align}\label{e:t1Bound}
\int_{\Lambda_0}^{\Lambda(t)} \frac{ds}{\mu(s)}
\le t
\end{align}
and for all $t \in [0, T]$ for any fixed $T \ge 1$,
\begin{align}\label{e:TLargeBound}
\int_{\Lambda_0}^{\Lambda(t)} \frac{ds}{\mu(T s)}
\le \frac{t}{T}.
\end{align}
Moreover, if
\begin{align}\label{e:NotOsgood}
\int_1^\ensuremath{\infty} \frac{ds}{\mu(s)} = \ensuremath{\infty}
\end{align}
then $\Lambda \in L^\ensuremath{\infty}_{loc}([0, \ensuremath{\infty}))$. \end{lemma} \begin{proof}
Because $\mu$ is convex,
we can apply Jensen's inequality
to conclude that
\begin{align*}
\Lambda(t)
\le \Lambda_0 + \mu \pr{\int_0^t t \Lambda(s) \, \frac{ds}{t}}
\le \Lambda_0 + \int_0^t \mu(t \Lambda(s)) \, \frac{ds}{t}.
\end{align*}
As long as $t \le 1$, \cref{R:mu} allows us to write
\begin{align*}
\Lambda(t)
\le \Lambda_0 + \int_0^t \mu(\Lambda(s)) \, ds,
\end{align*}
and Osgood's lemma gives \cref{e:t1Bound}.
Now suppose that $T > 1$. Then \cref{R:mu} gives the weaker bound,
\begin{align*}
\mu(t \Lambda(s))
= \mu \pr{\frac{t}{T} T \Lambda(s)}
\le \frac{t}{T} \mu \pr{T \Lambda(s)}
\end{align*}
so that
\begin{align*}
\Lambda(t)
\le \Lambda_0 + \frac{1}{T} \int_0^t \mu(T \Lambda(s)) \, ds,
\end{align*}
leading to \cref{e:TLargeBound}.
Finally, if \cref{e:NotOsgood} holds then applying
Osgood's lemma to \cref{e:TLargeBound} shows that
$\Lambda$ is bounded on any interval $[0, T]$,
so that $\Lambda \in L^\ensuremath{\infty}_{loc}([0, \ensuremath{\infty}))$. \end{proof}
We make a few remarks on our proof of \cref{T:Existence}.
\cref{L:NotOsgood} allows us to obtain finite-time or global-in-time
existence of solutions, but unless we have a stronger condition on $\mu$,
neither the finite time of existence
nor the bound on the growth of the $L^\ensuremath{\infty}$ norm that results will
be optimal. For both of our example growth bounds in \cref{C:MainResult}
there are stronger conditions; namely, if $\mu_1$, $\mu_2$ are the functions
in \cref{D:GrowthBound} corresponding to $h_1$, $h_2$, respectively, then
for all $a, r \ge 0$,
\begin{align*}
\mu_1(a r) \le C_0 a^{1 + \ensuremath{\alpha}} \mu_1(r), \quad
\mu_2(a r) \le C_0 a \mu_2(r).
\end{align*}
It is easy to see that the condition on $\mu_2$ in fact implies
\cref{e:NotOsgood}: taking $r = 1$ and $a = s \ge 1$ gives
$\mu_2(s) \le C_0 \mu_2(1)\, s$, so that $\int_1^\ensuremath{\infty} ds/\mu_2(s) = \ensuremath{\infty}$.
The condition on $\mu_1$, however, is too weak
to do so. Both conditions improve the bound on the $L^\ensuremath{\infty}$ norm
resulting from \cref{L:NotOsgood} and, for $h_1$, the time of existence.
Moreover, \cref{L:EProp} shows that, up to a constant factor, $\mu(r) := C r(1 + r)$ works for all well-posedness growth bounds\xspace (and gives $\mu_2(a r) \le C_0 a^2 \mu_2(r)$ for all $a, r \ge 0$). This \textit{suggests} that a slight weakening of the condition we placed on growth bounds\xspace in $(ii)$ of \cref{D:GrowthBound} could be made that would still allow finite-time existence to be obtained.
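To illustrate the finite-time nature of the conclusion of \cref{L:NotOsgood} in the absence of \cref{e:NotOsgood}, consider (as a heuristic computation only) $\mu(s) = C s(1 + s)$ with $\Lambda_0 > 0$. For $t \le 1$, \cref{e:t1Bound} becomes
\begin{align*}
\frac{1}{C} \log \pr{\frac{\Lambda(t)}{1 + \Lambda(t)}
\cdot \frac{1 + \Lambda_0}{\Lambda_0}}
\le t,
\quad \text{so} \quad
\Lambda(t)
\le \frac{q(t)}{1 - q(t)},
\qquad
q(t) := \frac{\Lambda_0}{1 + \Lambda_0} e^{C t},
\end{align*}
a bound that is useful only for $t < C^{-1} \log(1 + \Lambda_0^{-1})$, where $q(t) < 1$.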
\Ignore{
On the surface, \cref{e:omuOsgoodAtInfinity} looks like Osgood's lemma
(see Lemma 5.2.1 of \cite{C1998}), but it is not:
The function $\mu$ in \cref{e:LBound} appears outside rather
than inside the integral and the integration in \cref{e:NotOsgood} is from
$1$ to $\ensuremath{\infty}$ rather than from $0$ to $1$.
Also, in a typical application of Osgood's
lemma $\mu$ is a concave modulus of continuity rather
than a convex function.
As we see from its proof, however, it follows
as a corollary of Osgood's lemma. }
\Ignore{
\ProofStep{Step 3. Log-Lipschitz bound on modulus of continuity\xspace of $(u_n)$ uniform in $n$.} Recall the definition of the space of log-Lipschitz functions $LL$ on $U \subseteq \ensuremath{\BB{R}}^2$: \begin{align} \label{e:LL}
LL(U) = \left\{ f \in L^{\ensuremath{\infty}}(U) \;\Big|\; \sup_{x\neq y}
\frac{|f(x)-f(y)|}{(1+\log^+|x-y|)|x-y|} <\infty \right\}, \end{align} where $\log^+(z)=\max\{-\log z, 0\}$. This is a Banach space under the norm given by
\[\|f\|_{LL}:= \|f\|_{L^{\infty}} + \sup_{x\neq y} \frac{|f(x)-f(y)|}{(1+\log^+|x-y|)|x-y|}.\]
For any compact subset $L$ of $\ensuremath{\BB{R}}^2$, we have, \begin{align*}
\norm{u_n(t)}_{LL}
\le C \norm{u_0}_{S_h} \end{align*} This follows as in Lemma B.3 of \cite{AKLN2015} for some $C = C(L, \norm{u_0}_{S_h})$, using the result of Step 2.
\ProofStep{Step 4. Convergence of flow maps.} Associated to each (smooth) $u_n$ there is a unique (smooth) forward flow map, $X_n$. Much as in Lemma 8.2 of \cite{MB2002} or Chapter 5 of \cite{C1998}, we conclude that for all $x_1, x_2$ lying in the compact subset $L$ of $\ensuremath{\BB{R}}^2$, \begin{align*}
\abs{X_n(t,x_1)-X_n(t,x_2)}
&\le C \abs{x_1-x_2}^{e^{-\norm{u_n}_{LL} \abs{T}}}, \\
\abs{X_n^{-1}(t,y_1)-X_n^{-1}(t,y_2)}
&\le C \abs{y_1-y_2}^{e^{-\norm{u_n}_{LL} \abs{T}}} \end{align*} and that \begin{align*}
\abs{X_n(t_1, x) - X_n(t_2, x)}
&\le \norm{u_n}_{L^\ensuremath{\infty}([0, T] \times \ensuremath{\BB{R}}^2)} \abs{t_1 - t_2}
\le C \abs{t_1 - t_2}, \\
\abs{X_n^{-1}(t_1, y) - X_n^{-1}(t_2, y)}
&\le \norm{u_n}_{L^\ensuremath{\infty}([0, T] \times \ensuremath{\BB{R}}^2)} \abs{t_1 - t_2}^{e^{-\norm{u_n}_{LL} \abs{T}}}
\le C \abs{t_1 - t_2}^{e^{-\norm{u_n}_{LL} \abs{T}}}, \end{align*} where $C = C(V, \norm{u_0}_{S_h})$.
These estimates yield a subsequence that converges uniformly on any compact subset $L$ of $[0, T] \times \ensuremath{\BB{R}}^2$. We relabel this subsequence, $(X_n)$.
\ProofStep{Step 5. Convergence of vorticities}: Define, a.e. $t \in [0,T]$, $\omega(t, x) := \omega^0(X^{-1}(t, x))$. Then $\omega_n \to \omega$ in $L^\ensuremath{\infty}(0, T; L^p_{loc}(\ensuremath{\BB{R}}^2))$ for all $p \in [1, \ensuremath{\infty})$ follows from a simple adaptation of the proof for bounded vorticity on page 316 of \cite{MB2002}, that $\omega_n(t) \to \omega(t)$ in $L^1(\ensuremath{\BB{R}}^2)$.
\ProofStep{Step 6. Velocities are Cauchy in $C([0, T] \times L)$}: We have established convergence of the flow maps (and its inverse maps) to a limiting flow map (and its inverse) and convergence of the vorticities to a limiting vorticity, which is transported by the limiting flow map. As shown in Step 3, we also have equicontinuity of $(u_n)$ locally in space. We will now use the Serfati identity once more to show that the sequence, $(u_n)$, is Cauchy in $C([0, T] \times L)$, for any compact subset, $L$, of $\ensuremath{\BB{R}}^2$.
Let $R > 0$ be such that $L \subseteq B_R(0)$. Let $x$ belong to $L$ and let $L_\lambda = L + B_{c \lambda}(0)$, where $a$ is supported in $B_c(0)$. From \cref{e:SerfatiId}, for any \textit{fixed} $\lambda > 0$, \begin{align}\label{e:I1I2I3Bound}
&\abs{u_n(t, x) - u_m(t, x)}
\le \abs{u_n^0(x) - u_m^0(x)} + I_1 + I_2 + I_3, \end{align} where \begin{align*}
I_1
&= \abs{(a_\lambda K^j) *(\omega_n(t) - \omega_m(t))}, \;
I_2
= \abs{(a_\lambda K^j) *(\omega_n^0 - \omega_m^0)}, \\
I_3
&= \int_0^t \abs{\pr{\ensuremath{\nabla} \ensuremath{\nabla}^\perp \brac{(1 - a_\lambda) K^j}}
\mathop{* \cdot} (u_n \otimes u_n - u_m \otimes u_m)(s)} \, ds. \end{align*}
Fix $q$ in $(2, \ensuremath{\infty})$ and let $p$ in $(1, 2)$ be the \Holder exponent conjugate to $q$. Then from \cref{e:RearrangementBounds} and Young's convolution inequality, \begin{align*}
I_1
&\le C \smallnorm{a_\lambda(x - \cdot) K(x - \cdot)}_{L^p(L_\lambda)}
\smallnorm{\omega_n(t) - \omega_m(t)}_{L^q(L_\lambda)} \\
&\le \frac{C \lambda^{2 - p}}{2 - p}
\smallnorm{\omega_n(t) - \omega_m(t)}_{L^q(L_\lambda)} \end{align*} and, similarly, \begin{align*}
I_2
&\le \frac{C \lambda^{2 - p}}{2 - p}
\smallnorm{\omega^0_n - \omega^0_m}_{L^q(L_\lambda)}. \end{align*}
To bound $I_3$, we proceed much as in Step 2: \begin{align*}
I_3
&\le \int_0^t \int_{B_\ensuremath{\lambda}^C} \frac{h(y)^2}{\abs{x - y}^3}
\abs{\pr{\frac{u_n}{h} \otimes \frac{u_n}{h}
- \frac{u_m}{h} \otimes \frac{u_m}{h}}(s, y)}
\, dy \, ds \\
&\le C \brac{\sum_{k = {n, m}} \norm{\frac{u_k}{h}}_{L^\ensuremath{\infty}((0, T) \times \ensuremath{\BB{R}}^2)}}^2
\int_0^t \int_{B_\ensuremath{\lambda}(x)^C} \frac{h(y)^2}{\abs{x - y}^3}
\, dy \, ds \\
&\le C t
\brac{
\int_{B_\ensuremath{\lambda}(x)^C} \frac{h(x - y)^2}
{\abs{x - y}^3} \, dy
+ C h(x)^2 \int_{B_\ensuremath{\lambda}(x)^C}
\frac{1}{\abs{x - y}^3}
\, dy
} \\
&\le C t \brac{H(\ensuremath{\lambda}) + C \frac{h(x)^2}{\ensuremath{\lambda}}}
\le C T \brac{H(\ensuremath{\lambda}) + C \frac{h(x) h(R)}{\ensuremath{\lambda}}}, \end{align*} where $C = C(u^0, T)$. Here, we used \cref{e:D2KBound} and
the uniform bound on the sequence $(u_k)$ in $L^\ensuremath{\infty}([0, T] \times \ensuremath{\BB{R}}^2)$.
Thus, after dividing both sides of \cref{e:I1I2I3Bound} by $h(x)$, \begin{align*}
&\abs{\frac{u_n}{h}(t, x) - \frac{u_m}{h}(t, x)}
\le \abs{\frac{u_n^0}{h}(x) - \frac{u_m^0}{h}(x)}
+ C T \brac{H(\ensuremath{\lambda}) + C \frac{h(R)}{\ensuremath{\lambda}}} \\
&\qquad
+ \frac{C \lambda^{2 - p}}{2 - p}
\brac{\norm{\omega_n(t, \cdot) - \omega_m(t, \cdot)}_
{L^q(L_\lambda)}
+ \norm{\omega_n^0 - \omega_m^0}_{L^q(L_\lambda)}}. \end{align*}
Now let $\delta > 0$ be given. Because $H$ is decreasing, we can choose $\ensuremath{\lambda}$ large enough that \begin{align*}
C T \brac{H(\ensuremath{\lambda}) + C \frac{h(R)}{\ensuremath{\lambda}}}
< \frac{\delta}{3 h(R)}. \end{align*} With this now fixed $\ensuremath{\lambda}$, the result of Step 5 allows use to choose $N$ large enough that \begin{align*}
\frac{C \lambda^{2 - p}}{2 - p}
\brac{\norm{\omega_n(t, \cdot) - \omega_m(t, \cdot)}_
{L^q(L_\lambda)}
+ \norm{\omega_n^0 - \omega_m^0}_{L^q(L_\lambda)}}
< \frac{\delta}{3 h(R)} \end{align*} and $\norm{u_n^0/h - u_m^0/h}_{L^\ensuremath{\infty}(L)} < \delta / (3 h(R))$ for all $n, m > N$. Since these bounds hold for all $x \in L$, it follows that \begin{align*}
\norm{u_n - u_m}_{L^\ensuremath{\infty}([0, T] \times L)}
\le h(R) \norm{\frac{u_n}{h} - \frac{u_m}{h}}_{L^\ensuremath{\infty}([0, T] \times L)}
\le 3 \frac{h(R) \delta}{3 h(R)}
= \delta. \end{align*} This shows that the sequence, $(u_n)$, is Cauchy in $C([0, T] \times L)$ (without the need to take a further subsequence).
\begin{remark}
The only property of $H$ we used in this step is that it is decreasing. \end{remark}
\ProofStep{Step 7. Convergence to a solution}: The convergence of $(u_n)$ to a solution to $\ensuremath{\partial}_t \omega + u \cdot \ensuremath{\nabla} \omega = 0$ in $\Cal{D}'$ is standard. That the Serfati identity \cref{e:SerfatiId} holds for $u$ regardless of the choice of the cutoff function, $a$, follows from these same convergences and the observation that $(u_n)$ is bounded in $L^\ensuremath{\infty}$.
\ProofStep{Step 8. Modulus of continuity\xspace of the velocity}: The limit velocity $u(t)$ has a log-Lipschitz modulus of continuity\xspace locally to any compact subset; this follows either from \cref{P:Morrey} or directly from the convergence of $(u_n)$ with a uniform bound on the log-Lipschitz modulus of continuity\xspace on compact subsets. \end{proof} }
\section{Uniqueness}\label{S:Uniqueness}
\noindent In this section we prove \cref{T:ProtoUniqueness}, from which uniqueness immediately follows. Our argument is an adaptation of the approach of Serfati as it appears in \cite{AKLN2015}. It starts, however, by exploiting the flow map estimates in \cref{L:FlowBounds}, inspired by the proof of Lemma 2.13 of \cite{ElgindiJeong2016}, which is itself an adaptation of Marchioro's and Pulvirenti's elegant uniqueness proof for 2D Euler in \cite{MP1994}, in which a weight is introduced.
\begin{proof}[\textbf{Proof of \cref{T:ProtoUniqueness}}]
We will use the bound on $X_1$ and $X_2$ given
by \cref{L:FlowBounds} with the growth bound\xspace,
$F_T[\zeta]$, defined in \cref{e:FDef}.
This is valid since $\zeta \ge h$.
By \cref{L:FProperties}, $F_T[\zeta]$ is a growth bound\xspace
that is equivalent to $\zeta$, up to a factor of $C(T)$;
hence, we will use $\zeta$ in place of $F_T[\zeta]$,
which will simply introduce a factor $C(T)$ into our
bounds.
By the expression for the flow maps in \cref{L:FlowWellPosed},
\begin{align*}
\eta(t)
\le \norm{
\int_0^t \frac{\abs{u_1(s, X_1(s, x)) - u_2(s, X_2(s, x))}}
{\zeta(x)} \, ds
}
\le \int_0^t L(s) \, ds
= M(t).
\end{align*}
We also have,
\begin{align*}
L(s)
\le \norm{A_1(s, x)}_{L^\ensuremath{\infty}_x(\ensuremath{\BB{R}}^2)}
+ \norm{A_2(s, x)}_{L^\ensuremath{\infty}_x(\ensuremath{\BB{R}}^2)},
\end{align*}
where
\begin{align*}
A_1(s, x)
:= \frac{u_2(s, X_1(s, x)) - u_2(s, X_2(s, x))}{\zeta(x)}, \\
A_2(s, x)
:= \frac{u_1(s, X_1(s, x)) - u_2(s, X_1(s, x))}{\zeta(x)}.
\end{align*}
For $A_1$, first observe that \cref{L:FlowBounds} shows that
$\abs{X_1(s, x) - X_2(s, x)} \le Ct \zeta(x) \le Ct (1 + \abs{x})$ and
$\abs{X_1(s, x) - x} \le Ct \zeta(x) \le Ct (1 + \abs{x})$.
Hence, we can apply \cref{P:Morrey} with $\zeta$ in place of
$h$ to give
\begin{align*}
&\abs{u_2(s, X_1(s, x)) - u_2(s, X_2(s, x))}
\le C \norm{u_2}_{S_\zeta} \zeta(x)
\ol{\mu} \pr{\abs{X_1(s, x) - X_2(s, x)}/\zeta(x)} \\
&\qquad
\le C \zeta(x) \ol{\mu}(\eta(s)).
\end{align*}
It follows that
\begin{align}\label{e:A1Bound}
\abs{A_1(s, x)}
&\le C \frac{\zeta(x)}{\zeta(x)} \ol{\mu}(\eta(s))
\le C \ol{\mu}(\eta(s)).
\end{align}
To bound $A_2$, we use the Serfati identity, choosing $\ensuremath{\lambda}(x) = h(x)$, to write
\begin{align*}
\abs{A_2(s, x)}
\le \frac{\abs{u_1^0(x) - u_2^0(x)}}{\zeta(x)}
+ A_2^1(s, x) + A_2^2(s, x),
\end{align*}
where
\begin{align*}
A_2^1(s, x)
&=
\frac{1}{\zeta(x)}
\abs{(a_{h(x)} K) * (\omega_1(s) - \omega_2(s))(X_1(s, x))
- (a_{h(x)} K) * (\omega_1^0 - \omega_2^0)(X_1(s, x))}, \\
A_2^2(s, x)
&= \frac{1}{\zeta(x)}
\abs{\int_0^s
\pr{\ensuremath{\nabla} \ensuremath{\nabla}^\perp [(1 - a_{h(x)}) K]
\mathop{* \cdot} (u_1 \otimes u_1 - u_2 \otimes u_2)}(r, X_1(r, x))
\, dr}.
\end{align*}
We write,
\begin{align*}
(a_{h(x)} &K) * (\omega_1(s) - \omega_2(s))(X_1(s, x)) \\
&= \int (a_{h(x)} K(X_1(s, x) - z))
(\omega_1^0(X_1^{-1}(s, z)) - \omega_2^0(X_2^{-1}(s, z)))
\, dz \\
&= \int (a_{h(x)} K(X_1(s, x) - z))
(\omega_2^0(X_1^{-1}(s, z)) - \omega_2^0(X_2^{-1}(s, z)))
\, dz \\
&\qquad
+ \int (a_{h(x)} K(X_1(s, x) - z))
(\omega_1^0(X_1^{-1}(s, z)) - \omega_2^0(X_1^{-1}(s, z)))
\, dz.
\end{align*}
Making the two changes of variables,
$z = X_1(s, y)$ and $z = X_2(s, y)$, we can write
\begin{align*}
A_2^1(s, x)
&\le \frac{1}{\zeta(x)}
\abs{\int (a_{h(x)} K(X_1(s, x) - X_1(s, y))
- a_{h(x)} K(X_1(s, x) - X_2(s, y))) \omega_2^0(y) \, dy} \\
&\qquad
+ \frac{\abs{J(s, x)}}{\zeta(x)}.
\end{align*}
We are thus in a position to apply \cref{P:olgKBound} to bound $A_2^1$.
To do so, we set
\begin{align*}
U_j := \set{y \in \ensuremath{\BB{R}}^2 \colon \abs{X_1(s, x) - X_j(s, y)} \le h(x)}
\end{align*}
so that $V := U_1 \cup U_2$ is as in \cref{P:olgKBound}.
Then with $\delta$ as in \cref{e:deltaVDef}, we have
\begin{align}\label{e:deltasBound}
\begin{split}
\delta(s)
&\le \eta(s) \sup_{y \in V} \zeta(y)
\le \eta(s) \zeta(\abs{x} + h(x) + C t \zeta(\abs{x} + h(x))) \\
&\le C \eta(s) \zeta \pr{\abs{x} + \zeta(\abs{x})
+ C T \zeta(\abs{x} + \zeta(\abs{x}))}
\le C_1 \eta(s) \zeta(x),
\end{split}
\end{align}
where $C_1 = C(T)$.
Above, we applied \cref{L:FlowBounds} in the second inequality and
the last inequality follows from repeated applications of
\cref{L:preGB} to $\zeta$.
Hence,
\cref{P:olgKBound} gives
\begin{align*}
A_2^1(s, x)
&\le C \smallnorm{\omega^0}_{L^\ensuremath{\infty}}
\frac{h(x)}{\zeta(x)}
\ol{\mu} \pr{\frac{\delta(s)}{h(x)}}
+ \frac{\abs{J(s, x)}}{\zeta(x)}.
\end{align*}
But by \cref{L:omuSubAdditive} (noting that $h(x)/\zeta(x) \le 1$)
and \cref{e:deltasBound},
\begin{align*}
\frac{h(x)}{\zeta(x)}
\ol{\mu} \pr{\frac{\delta(s)}{h(x)}}
& \le \ol{\mu} \pr{\frac{h(x)}{\zeta(x)} \frac{\delta(s)}{h(x)}}
= \ol{\mu} \pr{\frac{\delta(s)}{\zeta(x)}}
\le \ol{\mu}(C_1 \eta(s)).
\end{align*}
Hence,
\begin{align}\label{e:A21Bound}
A_2^1(s, x)
\le C\ol{\mu} \pr{C_1 \eta(s)}
+ \frac{\abs{J(s, x)}}{\zeta(x)}.
\end{align}
\begin{comment}
\begin{remark}\label{R:ToThisPoint}
Up to this point in the proof, we could have used any $zeta$ for which
$\zeta$ grows
as rapidly as linearly
at infinity: the need to apply \cref{L:FlowBounds}
to obtain \cref{e:A1Bound}
limited us to linear growth.
For $A_2^2(s, x)$, though,
we proceed somewhat along the lines of Step 2 of the proof of existence,
and this will constrain $\zeta$ to growing
less than linearly.
\end{remark} \end{comment}
We now bound $A_2^2(s, x)$. We have,
\begingroup \allowdisplaybreaks
\begin{align*}
A_2^2(s, x)
&\le \frac{C}{\zeta(x)} \int_0^s
\max \bigset{\norm{\frac{u_1}{h}}_{L^\ensuremath{\infty}((0, T) \times \ensuremath{\BB{R}}^2)},
\norm{\frac{u_2}{h}}_{L^\ensuremath{\infty}((0, T) \times \ensuremath{\BB{R}}^2)}} \\
&\qquad\qquad
\int_{B_{\frac{h(x)}{2}}(X_1(r, x))^C}
\frac{(\zeta h)(y)}{\abs{X_1(r, x) - y}^3}
\frac{\abs{u_1(r, y) - u_2(r, y)}}{\zeta(y)} \, dy \, dr\\
&\le \frac{C}{\zeta(x)}
\int_0^s \pr{\sup_{z \in \ensuremath{\BB{R}}^2}
\frac{\abs{u_1(r, z) - u_2(r, z)}}{\zeta(z)}
\int_{B_{\frac{h(x)}{2}}(X_1(r, x))^C} \frac{(\zeta h)(y)}
{\abs{X_1(r, x) - y}^3} \, dy}
dr \\
&= \frac{C}{\zeta(x)}
\int_0^s \left(Q(r)\int_{B_{\frac{h(x)}{2}}(X_1(r, x))^C} \frac{(\zeta h)(y)}
{\abs{X_1(r, x) - y}^3} \, dy
\right)\, dr.
\end{align*} \endgroup
Because $\zeta h$ is subadditive (being a pre-growth bound\xspace),
letting $w = X_1(r, x)$, we have
\begin{align}\label{e:A22Split}
\begin{split}
\int_{B_{\frac{h(x)}{2}}(w)^C} &\frac{(\zeta h)(y)}
{\abs{w - y}^3} \, dy
\le \int_{B_{\frac{h(x)}{2}}(w)^C} \frac{(\zeta h)(w - y)}
{\abs{w - y}^3} \, dy
+ (\zeta h)(w) \int_{B_{\frac{h(x)}{2}}(w)^C} \frac{1}
{\abs{w - y}^3} \, dy \\
&= 2 \pi H[\zeta h](h(x)/2)
+ C \frac{(\zeta h)(w)}{h(x)/2}
\le C
+ C \zeta (w)
\le C(1 + \zeta(x)).
\end{split}
\end{align}
Here we used that $\zeta h$ is a growth bound\xspace and \cref{L:FlowBounds}.
It follows that
\begin{align}\label{e:A22EarlyBound}
A_2^2(s, x)
&\le C \frac{1 + \zeta(x)}{\zeta(x)} \int_0^s Q(r) \, dr
\le C \int_0^s Q(r) \, dr.
\end{align}
But,
\begin{align*}
Q(r)
&=\sup_{z \in \ensuremath{\BB{R}}^2}
\frac{\abs{u_1(r, z) - u_2(r, z)}}{\zeta(z)}
= \sup_{z \in \ensuremath{\BB{R}}^2}
\frac{\abs{u_1(r, X_1(r, z)) - u_2(r, X_1(r, z))}}
{\zeta(X_1(r, z))} \\
&\le C \sup_{z \in \ensuremath{\BB{R}}^2}
\frac{\abs{u_2(r, X_1(r, z)) - u_2(r, X_2(r, z))}}{\zeta(z)}
+ C \sup_{z \in \ensuremath{\BB{R}}^2}
\frac{\abs{u_2(r, X_2(r, z)) - u_1(r, X_1(r, z))}}{\zeta(z)} \\
&\le C \pr{\ol{\mu}(\eta(r)) + L(r)},
\end{align*}
where we used \cref{L:FlowBounds} in the first inequality
and \cref{e:A1Bound} in the last inequality.
Hence,
\begin{align*}
A_2^2(s, x)
\le C \int_0^s (\ol{\mu}(\eta(r)) + L(r)) \, dr. \end{align*}
It follows from all of these estimates that \begin{align}\label{e:LBoundForUniqueness}
\begin{split}
\eta(t)
&\le \int_0^t L(s) \, ds
= M(t), \\
L(s)
&\le \norm{\frac{u_1^0 - u_2^0}{\zeta}}_{L^\ensuremath{\infty}}
+ C \ol{\mu}(C_1 \eta(s))
+ \frac{\abs{J(s, x)}}{\zeta(x)}
+ C \int_0^s (\ol{\mu}(\eta(r)) + L(r)) \, dr \\
&= a(T)
+ C \ol{\mu}(C_1 \eta(s))
+ C \int_0^s (\ol{\mu}(C_1 \eta(r)) + L(r)) \, dr.
\end{split} \end{align}
We therefore have \begin{align}\label{e:MBoundForUniqueness}
\begin{split}
M(t)
&\le t a(T) + \int_0^t
\pr{C \ol{\mu}(C_1 \eta(s))
+ C \int_0^s (\ol{\mu}(C_1 \eta(r)) + L(r)) \, dr}
\, ds \\
&\le t a(T) + C \int_0^t
\pr{\ol{\mu}(C_1 M(s))
+ \int_0^s (\ol{\mu}(C_1 M(r)) + L(r)) \, dr}
\, ds \\
&= t a(T) + C \int_0^t
\pr{\ol{\mu}(C_1 M(s)) + M(s)
+ \int_0^s \ol{\mu}(C_1 M(r)) \, dr}
\, ds \\
&\le t a(T) + C \int_0^t
\pr{(1 + s)\ol{\mu}(C_1 M(s)) + M(s)}
\, ds \\
&\le t a(T) + C \int_0^t
\pr{\ol{\mu}(C_1 M(s)) + M(s)} \, ds,
\end{split} \end{align} where we note that the final $C = C(T)$ increases with $T$. In the second inequality we used $\ol{\mu}$ increasing and $\eta(s) \le M(s)$, while in the third inequality we used that $\ol{\mu}$ and $M$ are both increasing. The bound in \cref{e:LProtoBoundForUniqueness} follows from Osgood's lemma.
We now obtain the bounds on $M(t)$ and $Q(t)$.
The bound on $Q$ we made earlier shows that
\begin{align}\label{e:QBound}
Q(t)
\le C(T) (\ol{\mu}(\eta(t)) + L(t))
\le C(T) (\ol{\mu}(C_1 M(t)) + L(t)),
\end{align}
since $\eta(t) \le M(t)$. Then by \cref{e:LBoundForUniqueness},
\begin{align*}
L(t)
&\le a(T) + C(T) \ol{\mu}(C_1 M(t))
+ C(T) \int_0^t (\ol{\mu}(C_1 M(s)) + L(s)) \, ds \\
&\le a(T) + C(T) \ol{\mu}(C_1 M(t))
+ C(T) \int_0^t L(s) \, ds.
\end{align*}
Applying Gronwall's inequality,
\begin{align*}
L(t)
\le (a(T) + C \ol{\mu}(C_1 M(t))) e^{C(T) t}.
\end{align*}
Since we can absorb a constant, this same bound holds for $Q(t)$:
\begin{align*}
Q(t)
\le (a(T) + C \ol{\mu}(C_1 M(t))) e^{C(T) t}.
\end{align*}
Hence, we can easily translate a bound on $M$ to a bound on $Q$.
Returning, then, to \cref{e:LProtoBoundForUniqueness},
we have,
for $a(T)$ sufficiently small,
\begin{align*}
\int_{t a(T)}^{M(t)} \frac{ds}{\ol{\mu}(C_1 s)}
\le C(T) t.
\end{align*}
Integrating gives
\begin{align*}
- \log\pr{-\log s}\big\vert_{C_1 t a(T)}^{C_1 M(t)} \le C(T) t
\end{align*}
from which we conclude that
\begin{align*}
M(t)
\le (t a(T))^{e^{-C(T)t}}.
\end{align*}
This holds as long as $M(t) \le C_1^{-1} e^{-1}$, which gives a bound
on the time $t$.
On the other hand, if $s > C_1^{-1} e^{-1}$ then $\ol{\mu}(C_1 s) = e^{-1}$, so
for $ta(T) > C_1^{-1} e^{-1}$,
\begin{align*}
\int_{t a(T)}^{M(t)} \frac{ds}{\ol{\mu}(C_1 s)} = \int_{t a(T)}^{M(t)} e \,ds
\le C(T) t.
\end{align*}
Thus,
\begin{align*}
e(M(t) - t a(T))
\le C(T) t
\le C(T) t\, a(T),
\end{align*}
where the final inequality uses that $t a(T) > C_1^{-1} e^{-1}$, so that $1 \le C_1 e\, t\, a(T) \le C(T) a(T)$. This gives
\begin{align*}
M(t)
\le C(T)ta(T).
\end{align*}
(For intermediate values of $a(T)$ we still obtain a usable
bound; it is just more difficult to be explicit.)
\end{proof}
We were able to use a growth bound\xspace $\zeta$ larger than
$h$ and obtain a result for an arbitrary $T$ because, unlike the proof
of existence in \cref{S:Existence}, we are assuming that
we already know that $u_1, u_2$ lie in $S_h$. Hence, the quadratic
term in the Serfati identity can in effect be made linear.
\Ignore{
The terms $A_1$ and $A_2^2$ in the proof above of \cref{T:ProtoUniqueness}
are the controlling terms in the sense that they are the largest.
When $u_1^0 = u_2^0$, so that $J \equiv 0$, the only term involving
the vorticity
is $A_2^1$, which we note is of order $h(x)/\zeta(x) \le 1$.
We could allow the vorticity
to grow at infinity at the rate $\xi(x)$ for a growth bound\xspace $\xi$ for which
$\xi \le C h$ and still obtain a bound on
$A_2^1$ of the same order as $A_1$ and $A_2^2$. If we assume as well
that $h \xi \le C \zeta$ then the bound on $A_2^2$ can be recovered
as well.
The bound on $A_1$,
however, would increase, as an examination of the proof of \cref{P:Morrey}
shows that a factor of $\xi(x)$ would be introduced,
a factor that cannot be compensated for in $A_2^2$.
Finally, in bounding $A_2^2$ we used the subadditivity of $\zeta h$
in the form $(\zeta h)(y) \le (\zeta h)(w - y) + (\zeta h)(w)$.
When $w$ is close to $y$, this bound becomes close
to equality with the $(\zeta h)(w)$ term dominating. And because $h(w) << w$
for $w$ large in integrating over
$B_{h(x)}(w)^C$, $y$ is close to $w$ for the largest values of the integrand.
Hence, it is the second integral in the estimate in \cref{e:A22Split}
that dominates and we conclude that this bound is close to equality
and cannot effectively be improved.
}
\cref{T:ProtoUniqueness} gives a bound on the difference in velocities over time. It remains, however, to characterize $a(T)$ in a useful way in terms of $u_1^0$, $u_2^0$, and $u_1^0 - u_2^0$ and so obtain \cref{T:aT}. This, the subject of the next section, is not as simple as it may seem.
\Ignore{
\ToDo{I think it is worth keeping this lemma and referring to it tangentially at some point, though we can't use it. The point is that the Serfati identity implies uniform convergence of the renormalized Biot-Savart law, but the reverse implication I don't think need hold when $h$ is not constant.} \begin{lemma}\label{L:RenBSBound}
Let $h$ be a well-posedness growth bound\xspace and let $u$ be a solution
in $S_h$ on $[0, T]$ with $u(0) = u^0 \in S_h$.
Then
\begin{align*}
u(t) - u^0
&= \lim_{R \to \ensuremath{\infty}}
(a_R K)*(\omega(t) - \omega^0),
\end{align*}
the convergence being uniform over $[0, T] \times \ensuremath{\BB{R}}^2$. \end{lemma} \begin{proof}
As in \cref{e:VelTermBound}, we have
\begin{align*}
&\abs{u(t, x) - u^0(x) - (a_R K)*(\omega(t) - \omega^0(t))(x)} \\
&\qquad
\le \int_0^t \abs{\pr{\ensuremath{\nabla} \ensuremath{\nabla}^\perp \brac{(1 - a_\ensuremath{\lambda}) K}}
\mathop{* \cdot} (u_1 \otimes u_1 - u_2 \otimes u_2)(s, x)} \, ds \\
&\qquad
\le C \int_0^t \sum_j \norm{g u_j(s)}_{L^\ensuremath{\infty}}^2
\brac{H(\ensuremath{\lambda}(x)) + C \frac{h(x)^2}{\ensuremath{\lambda}(x)}}.
\end{align*}
Choosing $\ensuremath{\lambda}(x) = R h(x)^2$, we have (using $H$ decreasing and $h(x) \ge 1$)
\begin{align*}
&\smallnorm{u(t) - u^0 - (a_R K)*(\omega(t) - \omega^0)}_{L^\ensuremath{\infty}(\ensuremath{\BB{R}}^2)}
\le C T \brac{H(R) + R^{-1}}.
\end{align*}
This gives the result, since $H(R) \to 0$ as $R \to \ensuremath{\infty}$. \end{proof} }
\section{Continuous dependence on initial data}\label{S:ContDep}
\noindent In this section, we prove \cref{T:aT,T:aTSimple}, bounding $a(T)$ of \cref{e:aT}. The difficulty in bounding $a(T)$ lies wholly in bounding $J\slash \zeta$, with $J = J(t, x)$ as in \cref{e:J}. We can write $J = J_2 - J_1$, where \begin{align*}
J_1(t, x)
&= (a_{h(x)} K) * (\omega_1^0 - \omega_2^0)(X_1(t, x)), \\
J_2(t, x)
&= (a_{h(x)} K) * ((\omega_1^0 - \omega_2^0) \circ X_1^{-1}(t))(x). \end{align*}
Both $J_1/\zeta$ and $J_2/\zeta$ are easy to bound, as we do in \cref{T:aTSimple}, if we assume that $\omega_1^0 - \omega_2^0$ is small in $L^\ensuremath{\infty}$, an assumption that is, however, physically unreasonable, as discussed in \cref{S:Introduction}.
\begin{comment} How we handle $J$ depends upon how we suppose that the initial data is changed. Three reasonable options are the following: \begin{enumerate}
\item
To obtain continuous dependence on initial data in $S_h$,
we assume that $u_1^0 - u_2^0$ is ``small'' in $S_h$.
\item
To obtain continuous dependence on initial data in $S_h$,
we assume that $u_1^0 - u_2^0$ is ``small'' in $L^\ensuremath{\infty}$
only.
\item
To obtain control on the effect at a distance of changes to the
initial data, we adapt (1) or (2) by assuming that
$u_1^0 - u_2^0$ is ``small'' in $S_h$ or in $L^\ensuremath{\infty}$ only within a
neighborhood of the origin. \end{enumerate} All three options are treated in \cref{T:aT}. For simplicity, we treat the third option only in the bounded velocity case \ToDo{Extend this!}. \end{comment}
\begin{comment} \begin{proof}[\textbf{Proof of \cref{T:aT}}]
(1): For $j = 1, 2$, we have
\begin{align*}
\norm{J_j/\zeta}_{L^\ensuremath{\infty}}
\le \norm{(a_{h(x)} K)/h}_{L^1}
\smallnorm{\omega_1^0 - \omega_2^0}_{L^\ensuremath{\infty}}
\le C \smallnorm{\omega_1^0 - \omega_2^0}_{L^\ensuremath{\infty}},
\end{align*}
since $h \le C \zeta$.
\ToDo{This is not correct: either fix or delete this option
from the statement of the theorem.} Hence, by \cref{e:aT}, we have
\begin{align*}
a(T)
\le \smallnorm{(u_1^0 - u_2^0)/\zeta}_{L^\ensuremath{\infty}}
+ C \smallnorm{\omega_1^0 - \omega_2^0}_{L^\ensuremath{\infty}}
\le C \smallnorm{u_1^0 - u_2^0}_{S_\zeta},
\end{align*}
giving the first option for $a(T)$.
\noindent (2): Follows directly from \cref{P:JBound}. \end{proof} \end{comment}
\begin{proof}[\textbf{Proof of \cref{T:aTSimple}}]
We have
\begin{align*}
\abs{\frac{J_1(s, x)}{\zeta(x)}}
&\le \norm{(a_{h(x)} K) *
(\omega_1^0 - \omega_2^0)(X_1(s, z))}_{L^\ensuremath{\infty}_z}/\zeta(x) \\
&= \norm{(a_{h(x)} K) *
(\omega_1^0 - \omega_2^0)}_{L^\ensuremath{\infty}}/\zeta(x) \\
&\le \norm{a_{h(x)} K}_{L^1}
\norm{\omega_1^0 - \omega_2^0}_{L^\ensuremath{\infty}}/\zeta(x)
\le C (h(x)/\zeta(x)) \norm{\omega_1^0 - \omega_2^0}_{L^\ensuremath{\infty}} \\
&\le C \norm{\omega_1^0 - \omega_2^0}_{L^\ensuremath{\infty}}.
\end{align*}
Similarly,
\begin{align*}
\abs{\frac{J_2(s, x)}{\zeta(x)}}
&\le \norm{(a_{h(x)} K) *
((\omega_1^0 - \omega_2^0)
\circ X_1^{-1}(s))}_{L^\ensuremath{\infty}}/\zeta(x) \\
&\le \norm{a_{h(x)} K}_{L^1}
\norm{(\omega_1^0 - \omega_2^0)(X_1^{-1}(s, z))}
_{L^\ensuremath{\infty}_z}/\zeta(x)
\le C (h(x)/\zeta(x)) \norm{\omega_1^0 - \omega_2^0}_{L^\ensuremath{\infty}} \\
&\le C \norm{\omega_1^0 - \omega_2^0}_{L^\ensuremath{\infty}}.
\end{align*}
Combined, these two bounds easily yield the bound on $a(T)$. \end{proof}
More interesting is a measure of $a(T)$ in terms of $u_1^0 - u_2^0$ without involving $\omega_1^0 - \omega_2^0$. Now, $J_1$ is fairly easily bounded in terms of $u_1^0 - u_2^0$ (using \cref{L:alaKZBound}) since $X_1(s, x)$ has no effect on the $L^\ensuremath{\infty}$ norm. But in $J_2$, the composition of the initial vorticity with the flow map complicates matters considerably. What we seek is a bound on $a(T)$ of \cref{e:aT} in terms of $\smallnorm{(u_1^0 - u_2^0)/\zeta}_{L^\ensuremath{\infty}}$ and constants that depend upon $\smallnorm{u_1^0}_{S_1}$, $\smallnorm{u_2^0}_{S_1}$. That is the primary purpose of \cref{T:aT}, which we now prove, making forward references to a number of results that appear following its proof. These include \cref{L:ForJ1Bound,P:f0XBound,L:HolderInterpolation}, which employ Littlewood-Paley theory and \Holder spaces of negative index, and which we defer to \cref{S:LP}, where we introduce the necessary technology.
\Ignore{ \begin{definition}\label{D:Holder}
For any $r = k + \ensuremath{\alpha}$
for $k$ a nonnegative integer with $\ensuremath{\alpha} \in (0, 1)$, we define
the \Holder space, $C^r(\ensuremath{\BB{R}}^2) = C^{k, \ensuremath{\alpha}}(\ensuremath{\BB{R}}^2)$, to be the space
of all $k$-times continuously differentiable functions such that
\begin{align*}
\norm{f}_{C^r}
= \norm{f}_{C^{k, \ensuremath{\alpha}}}
:= \sum_{\abs{\gamma} \le k}
\pr{\norm{D^\gamma f}_{L^\ensuremath{\infty}}
+ \sup_{x \ne y}
\frac{\abs{D^\gamma f(x) - D^\gamma f(y)}}
{\abs{x - y}^\ensuremath{\alpha}}
},
\end{align*}
where the sum is over multi-indices, $\gamma$. (Of course,
$C^k(\ensuremath{\BB{R}}^2)$ is defined
the same way, without the $\sup_{x \ne y}$ term in the norm.) \end{definition} }
\begin{remark}\label{R:NegHolder}
In the proof of \cref{T:aT}, we make use for the first time in this paper
of \Holder spaces, with negative and fractional indices.
We are not using the classical
definition of these spaces, but rather one based upon Littlewood-Paley theory.
For non-integer indices, they are equivalent, but the constant of
equivalency (in one direction) blows up as the index approaches an
integer (see \cref{R:HolderWarning}).
Because we will be comparing norms with different indices,
it is important that we use a consistent definition of these spaces.
In this section, the only fact we use regarding \Holder spaces
(in the proof of \cref{L:SmallVelSmallVorticity})
is that $\norm{\dv v}_{C^{r - 1}} \le C \norm{ v}_{C^r}$
for any $r \in (0, 1)$ for a constant $C$ independent of $r$.
For that reason, we defer our definition
of \Holder spaces to \cref{S:LP}. \end{remark}
\begin{remark}\label{R:TechnicalIssue}
With the exception of \cref{P:f0XBound}, versions of all of the
various lemmas and propositions that we use in the proof
of \cref{T:aT} can be obtained for solutions in $S_h$
for any well-posedness growth bound\xspace $h$.
Should a way be found to also extend \cref{P:f0XBound} to
$S_h$ then a version of \cref{T:aT}
would hold for $S_h$ as well.
\end{remark}
\begin{proof}[\textbf{Proof of \cref{T:aT}}]
Let $\omega^0_1$, $\omega^0_2$
be the initial vorticities,
and let $\omega_1, \omega_2$
and $X_1, X_2$ be the vorticities and flow maps
of $u_1$, $u_2$.
To bound $a(T)$, let
$\overline{\omega}_0 = \omega_1^0 - \omega_2^0 = \curl (u_1^0 - u_2^0)$.
Then, since $h \equiv 1$, we can write,
\begin{align*}
\frac{J_1(s, x)}{\zeta(x)}
&= \frac{\zeta(X_1(s, x))}{\zeta(x)}
\frac{\brac{(a K) * \curl (u_1^0 - u_2^0)}(X_1(s, x))}
{\zeta(X_1(s, x))}, \\
\frac{J_2(s, x)}{\zeta(x)}
&= \frac{(a K) *
(\overline{\omega}_0 \circ X_1^{-1}(s))(x)}{\zeta(x)}.
\end{align*}
Applying \cref{L:preGB}, \cref{L:FlowBounds},
and \cref{L:alaKZBound}, we see that
\begin{align}\label{e:J1SecondaBound}
\frac{J_1(s, x)}{\zeta(x)}
&\le
C \norm{\frac{u_1^0 - u_2^0}{\zeta}}_{L^\ensuremath{\infty}},
\end{align}
the bound holding uniformly over $s \in [0, T]$.
Bounding $J_2$ is much more difficult, because the flow map appears
inside the convolution, which prevents us from writing it as the
curl of a divergence-free
vector field. Instead, we apply a sequence of bounds,
starting with
\begin{align*}
\abs{\frac{J_2(s, x)}{\zeta(x)}}
&=
\abs{\frac{(\zeta \circ X_1^{-1}(s))(x)}{\zeta(x)}}
\abs{\frac{(a K) *
(\overline{\omega}_0 \circ X_1^{-1}(s))(x)}
{(\zeta \circ X_1^{-1}(s))(x)}} \\
&\le C
\abs{\frac{(a K) *
(\overline{\omega}_0 \circ X_1^{-1}(s))(x)}
{(\zeta \circ X_1^{-1}(s))(x)}} \\
&\le C \Phi_\ensuremath{\alpha} \pr{s,
\norm{\frac{\overline{\omega}_0}{\zeta} \circ X_1^{-1}(s, \cdot)}
_{C^{-\ensuremath{\alpha}}}
}
\end{align*}
for all $s \in [0, T]$.
Here we used \cref{L:FlowBounds} and
applied \cref{L:ForJ1Bound} with $f = \overline{\omega}_0 \circ X_1^{-1}(s)$.
Applying \cref{P:f0XBound} with $\alpha = \delta_t$
and $\beta = 2 C(\delta)$,
followed by \cref{L:SmallVelSmallVorticity} gives
\begin{align*}
\abs{\frac{J_2(s, x)}{\zeta(x)}}
&\le C \Phi_\ensuremath{\alpha} \pr{s,
2 \norm{\frac{\overline{\omega}_0}{\zeta}}_{C^{-\delta}}
}
\le
C \Phi_\ensuremath{\alpha} \pr{s,
2
\norm{\frac{u_1^0 - u_2^0}{\zeta}}_{C^{1 - \delta}}
}
\end{align*}
for all $s \in [0, T^*]$.
Note that the condition on $\delta_{T^*}$
in \cref{P:f0XBound} is satisfied
because of our definition of $T^*$
and because $\norm{u}_{LL} \le C \norm{u}_{S_1}$, which
follows from \cref{P:Morrey}.
We also used, and use again below, that $\Phi_\ensuremath{\alpha}$ is increasing
in its second argument.
We apply \cref{L:HolderInterpolation} with $r = 1 - \delta$, obtaining
\begin{align*}
\norm{\frac{u_1^0 - u_2^0}{\zeta}}_{C^{1 - \delta}}
&\le C \norm{\frac{u_1^0 - u_2^0}{\zeta}}_{L^\ensuremath{\infty}}^\delta
\brac{\norm{\frac{u_1^0 - u_2^0}{\zeta}}_{L^\ensuremath{\infty}}^{1 - \delta}
+ \norm{\frac{u_1^0 - u_2^0}{\zeta}}_{S_1}^{1 - \delta}
}.
\end{align*}
But,
\begin{align*}
\norm{\frac{u_1^0 - u_2^0}{\zeta}}_{S_1}
&\le \norm{\frac{u_1^0 - u_2^0}{\zeta}}_{L^{\infty}}
+ \norm{\frac{1}{\zeta}}_{L^\ensuremath{\infty}}
\norm{\omega_1^0 - \omega_2^0}_{L^\ensuremath{\infty}}
+ \norm{\ensuremath{\nabla}\pr{\frac{1}{\zeta}}}_{L^\ensuremath{\infty}}
\norm{u_1^0 - u_2^0}_{L^\ensuremath{\infty}} \\
&\le C \norm{u_1^0 - u_2^0}_{S_1}
\le C,
\end{align*}
where we used the identity,
$\curl (f u) = f \curl u - \ensuremath{\nabla} f \cdot u^\perp$,
and that $1/\zeta$ is Lipschitz (though $1/\zeta \notin C^1(\ensuremath{\BB{R}}^2)$
unless $\zeta$ is constant, because
$\ensuremath{\nabla} \zeta$ is not defined at the origin).
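For the reader's convenience, the identity can be checked directly: writing $u^\perp = (-u^2, u^1)$,
\begin{align*}
\curl(f u)
= \ensuremath{\partial}_1(f u^2) - \ensuremath{\partial}_2(f u^1)
= f \pr{\ensuremath{\partial}_1 u^2 - \ensuremath{\partial}_2 u^1} + u^2 \ensuremath{\partial}_1 f - u^1 \ensuremath{\partial}_2 f
= f \curl u - \ensuremath{\nabla} f \cdot u^\perp.
\end{align*}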
Therefore,
\begin{align*}
\norm{\frac{u_1^0 - u_2^0}{\zeta}}_{C^{1 - \delta}}
&\le C \norm{\frac{u_1^0 - u_2^0}{\zeta}}_{L^\ensuremath{\infty}}^\delta.
\end{align*}
We conclude that
\begin{align*}
\abs{\frac{J_2(s, x)}{\zeta(x)}}
\le
C_1 \Phi_\ensuremath{\alpha} \pr{T,
C
\norm{\frac{u_1^0 - u_2^0}{\zeta}}_{L^\ensuremath{\infty}}^\delta
}
\end{align*}
for all $0 \le s \le T^*$.
Since the bound on $J_1/\zeta$ in \cref{e:J1SecondaBound} is
better than that on $J_2/\zeta$, this completes the proof. \end{proof}
\begin{remark}
In the application of \cref{P:f0XBound} and \cref{L:zetaFlowBound} (which was used in the proof of \cref{L:ForJ1Bound}), the value of $C_0 = \norm{u_1}_{L^\ensuremath{\infty}(0, T; S_1)}$ enters into the constants. A bound on $\norm{u_1}_{L^\ensuremath{\infty}(0, T; S_1)}$ comes from the proof of existence in \cref{S:Existence}. While we did not explicitly calculate it, for $S_1$ it yields an exponential-in-time bound, as in \cite{Serfati1995A,AKLN2015}. Hence, our bound on $a(T)$ is doubly exponential (it would be worse for unbounded velocities). It is shown in \cite{Gallay2014} (extending \cite{Zelik2013}) for bounded velocity, however, that $\norm{u_1}_{L^\ensuremath{\infty}(0, T; S_1)}$ can be bounded linearly in time, so that $C_0$ in fact increases only linearly in time and our bound on $a(T)$ becomes only singly exponential in time.
Whether an improved bound can be obtained for a more general $h$ is an open question: If it could, it would extend the time of existence of solutions, possibly expanding the class of growth bounds\xspace for which global-in-time existence holds. \end{remark}
\begin{lemma}\label{L:CurlConv}
Let $Z \in S_1$.
For any $\ensuremath{\lambda} > 0$,
\begin{align}\label{e:CurlConv}
(a_\ensuremath{\lambda} K) * \curl Z
= \curl (a_\ensuremath{\lambda} K) * Z.
\end{align} \end{lemma} \begin{proof}
Note that
$
((a_\ensuremath{\lambda} K) * \curl Z)^i
= (a_\ensuremath{\lambda} K^i) * (\ensuremath{\partial}_1 Z^2 - \ensuremath{\partial}_2 Z^1)
= (\ensuremath{\partial}_1 (a_\ensuremath{\lambda} K^i)) * Z^2 - (\ensuremath{\partial}_2 (a_\ensuremath{\lambda} K^i)) * Z^1
$.
Thus, \cref{e:CurlConv} is not just a matter of moving the curl from
one side of the convolution to the other.
Using both that $Z$ is divergence-free and that $a$ is radially
symmetric, however, \cref{e:CurlConv} is
proved in Lemma 4.4 of \cite{KBounded2015}. \end{proof}
The following is a twist on Proposition 4.6 of \cite{BK2015}. \begin{lemma}\label{L:alaKZBound}
Let $\zeta$ be a pre-growth bound\xspace
and suppose that $Z \in S_1$.
For any $\ensuremath{\lambda} > 0$,
\begin{align}\label{e:alaKZBound}
\begin{split}
\norm{(a_\ensuremath{\lambda} K) * \curl Z}_{L^\ensuremath{\infty}(\ensuremath{\BB{R}}^2)}
&\le 2 \norm{Z}_{L^\ensuremath{\infty}(\ensuremath{\BB{R}}^2)}, \\
\norm{\frac{(a_\ensuremath{\lambda} K) * \curl Z}{\zeta}}_{L^\ensuremath{\infty}(\ensuremath{\BB{R}}^2)}
&\le \pr{ 1 +6 \frac{\zeta(\ensuremath{\lambda})}{\zeta(0)}}
\norm{\frac{Z}{\zeta}}_{L^\ensuremath{\infty}(\ensuremath{\BB{R}}^2)}.
\end{split}
\end{align} \end{lemma} \begin{proof} \cref{L:CurlConv} gives $
(a_\ensuremath{\lambda} K) * \curl Z
= \curl (a_\ensuremath{\lambda} K) * Z $. But $\curl (a_\ensuremath{\lambda} K) = - \dv(a_\ensuremath{\lambda} K^\perp) = - a_\ensuremath{\lambda} \dv K^\perp - \ensuremath{\nabla} a_\ensuremath{\lambda} \cdot K^\perp = \delta - \ensuremath{\nabla} a_\ensuremath{\lambda} \cdot K^\perp$, where $\delta$ is the Dirac delta function, since $a_\ensuremath{\lambda}(0) = 1$. Hence, \begin{align}\label{e:KernelCurlCommute}
(a_\ensuremath{\lambda} K) * \curl Z
&= Z - (\ensuremath{\nabla} a_\ensuremath{\lambda} \cdot K^\perp)* Z
= Z - \varphi_\ensuremath{\lambda}* Z, \end{align} where $\varphi_\ensuremath{\lambda} := \ensuremath{\nabla} a_\ensuremath{\lambda} \cdot K^\perp \in C_c^\ensuremath{\infty}(\ensuremath{\BB{R}}^2)$. Then \cref{e:alaKZBound}$_1$ follows from \begin{align*}
\norm{(a_\ensuremath{\lambda} K) * \curl Z}_{L^\ensuremath{\infty}(\ensuremath{\BB{R}}^2)}
&\le \pr{1 + \norm{\varphi_\ensuremath{\lambda}}_{L^1}} \norm{Z}_{L^\ensuremath{\infty}(\ensuremath{\BB{R}}^2)}
= 2 \norm{Z}_{L^\ensuremath{\infty}(\ensuremath{\BB{R}}^2)}. \end{align*} Here, we have used that $\norm{\varphi_\ensuremath{\lambda}}_{L^1} = 1$, as can easily be verified by integrating by parts. (In fact, $\varphi_\ensuremath{\lambda} *$ is a mollifier, though we will not need that.)
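Alternatively, this can be seen by a direct computation in polar coordinates, carried out here under the standard normalization $K(x) = x^\perp/(2\pi \abs{x}^2)$, $x^\perp = (-x_2, x_1)$, and assuming (only for this sketch, and writing $a_\ensuremath{\lambda}$ also for its radial profile) that $a_\ensuremath{\lambda}$ is radially symmetric, nonincreasing, compactly supported, with $a_\ensuremath{\lambda}(0) = 1$: then $K^\perp(x) = -x/(2\pi\abs{x}^2)$, so that
\begin{align*}
\varphi_\ensuremath{\lambda}(x)
= \ensuremath{\nabla} a_\ensuremath{\lambda}(x) \cdot K^\perp(x)
= -\frac{a_\ensuremath{\lambda}'(\abs{x})}{2\pi\abs{x}}
\ge 0,
\qquad
\norm{\varphi_\ensuremath{\lambda}}_{L^1}
= \int_{\ensuremath{\BB{R}}^2} \varphi_\ensuremath{\lambda}(x) \, dx
= -\int_0^\ensuremath{\infty} a_\ensuremath{\lambda}'(r) \, dr
= a_\ensuremath{\lambda}(0)
= 1.
\end{align*}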
Using \cref{e:KernelCurlCommute}, we have \begin{align*}
\norm{\frac{(a_\ensuremath{\lambda} K) * \curl Z}{\zeta}}_{L^\ensuremath{\infty}}
&\le \norm{\frac{Z}{\zeta}}_{L^\ensuremath{\infty}(\ensuremath{\BB{R}}^2)}
+ \norm{\frac{\varphi_\ensuremath{\lambda} * Z}{\zeta}}_{L^\ensuremath{\infty}(\ensuremath{\BB{R}}^2)}. \end{align*} But $\varphi_\ensuremath{\lambda}(x-\cdot)$ is supported in $B_\ensuremath{\lambda}(x)$, so \begin{align*}
\abs{\frac{\varphi_\ensuremath{\lambda} * Z(x)}{\zeta(x)}}
&= \abs{\frac{\varphi_\ensuremath{\lambda} *
(\CharFunc_{B_\ensuremath{\lambda}(x)} Z)(x)}{\zeta(x)}}
\le \frac{\norm{\varphi_\ensuremath{\lambda}}_{L^1}
\norm{\CharFunc_{B_\ensuremath{\lambda}(x)} Z}_{L^\ensuremath{\infty}}}{\zeta(x)}
= \frac{\norm{\CharFunc_{B_\ensuremath{\lambda}(x)} Z}_{L^\ensuremath{\infty}}}{\zeta(x)} \\
&\le \norm{\frac{Z}{\zeta}}_{L^\ensuremath{\infty}}
\frac{\zeta(\abs{x} + \ensuremath{\lambda})}
{\zeta \pr{\max \set{\abs{x} - \ensuremath{\lambda}, 0}}}. \end{align*} Taking the supremum over $x \in \ensuremath{\BB{R}}^2$, we have \begin{align*}
\norm{\frac{(a_\ensuremath{\lambda} K) * \curl Z}{\zeta}}_{L^\ensuremath{\infty}(\ensuremath{\BB{R}}^2)}
&\le \norm{\frac{Z}{\zeta}}_{L^\ensuremath{\infty}}\pr{ 1 +
\sup_{x \in \ensuremath{\BB{R}}^2}
\frac{\zeta(\abs{x} + \ensuremath{\lambda})}
{\zeta\pr{\max \set{\abs{x} - \ensuremath{\lambda}, 0}}} }. \end{align*}
If $\abs{x} > 2 \ensuremath{\lambda}$ then using \cref{L:preGB}, \begin{align*}
\frac{\zeta(\abs{x} + \ensuremath{\lambda})}{\zeta\pr{\max \set{\abs{x} - \ensuremath{\lambda}, 0}}}
\le \frac{\zeta(3 \abs{x}/2)}{\zeta(\abs{x}/2)}
\le 6 \frac{\zeta(\abs{x}/2)}{\zeta(\abs{x}/2)}
= 6, \end{align*} while if $\abs{x} \le 2 \ensuremath{\lambda}$ then \begin{align*}
\frac{\zeta(\abs{x} + \ensuremath{\lambda})}{\zeta\pr{\max \set{\abs{x} - \ensuremath{\lambda}, 0}}}
\le \frac{\zeta(3 \ensuremath{\lambda})}{\zeta(0)}
\le 6 \frac{\zeta(\ensuremath{\lambda})}{\zeta(0)}. \end{align*} Hence, we obtain \cref{e:alaKZBound}$_2$. \end{proof}
\begin{lemma}\label{L:SmallVelSmallVorticity}
Let $\zeta$ be a pre-growth bound\xspace, $\ensuremath{\alpha} \in (0, 1)$, and
$\overline{\omega}_0 = \omega_1^0 - \omega_2^0 = \curl(u_1^0 - u_2^0)$
with $u_1^0$, $u_2^0 \in S_\zeta$. Then
\begin{align*}
\norm{\frac{\overline{\omega}_0}{\zeta}}_{C^{\ensuremath{\alpha} - 1}}
\le C \norm{\frac{u_1^0 - u_2^0}{\zeta}}_{C^\ensuremath{\alpha}}.
\end{align*} \end{lemma} \begin{proof}
Let $g := 1/\zeta$ and
$v = -(u_1^0 - u_2^0)$ so that $\overline{\omega}_0 = \dv v^\perp$.
Then
\begin{align*}
g \overline{\omega}_0
= g \dv v^\perp
= \dv (g v^\perp) - \ensuremath{\nabla} g \cdot v^\perp.
\end{align*}
Thus (see \cref{R:NegHolder}),
\begin{align*}
&\norm{g \overline{\omega}_0}_{C^{\ensuremath{\alpha} - 1}}
\le C \pr{\smallnorm{g v^\perp}_{C^\ensuremath{\alpha}}
+ \smallnorm{\ensuremath{\nabla} g \cdot v^\perp}_{C^{\ensuremath{\alpha}-1}}}
= C \pr{\norm{g v}_{C^\ensuremath{\alpha}}
+ \smallnorm{\ensuremath{\nabla} g \cdot v^\perp}_{C^{\ensuremath{\alpha}-1}}}.
\end{align*}
But by virtue of \cref{L:fhzetaacts}, we also have
\begin{align*}
\smallnorm{\ensuremath{\nabla} g \cdot v^\perp}_{C^{\ensuremath{\alpha}-1}}
\le \smallnorm{\ensuremath{\nabla} g \cdot v^\perp}_{L^{\infty}}
\le c_0 \smallnorm{g v}_{L^{\infty}}
\le c_0\smallnorm{g v}_{C^{\ensuremath{\alpha}}}.
\end{align*}
Combining the two displayed bounds gives the result. \end{proof}
\Ignore{ \begin{lemma}\label{L:PhiSubmultiplicative}
Let $\ensuremath{\alpha}, \beta > 0$ and define $\Phi_\ensuremath{\alpha}$
as in \cref{e:Phialpha}.
For all $\ensuremath{\alpha} > 0$ and all $a, x \in [0, \ensuremath{\infty})$,
\begin{align*}
\Phi_\ensuremath{\alpha}(t, ax) \le \Phi_\ensuremath{\alpha}(t, a) \Phi_\ensuremath{\alpha}(t, x).
\end{align*} \end{lemma} \begin{proof}
Let $\beta_t := \frac{e^{-C_0t}}{\ensuremath{\alpha} + e^{-C_0t}}$ so that
$\Phi_\ensuremath{\alpha}(t, x) := x + x^{\beta_t}$. Then
\begin{align*}
\Phi_\ensuremath{\alpha}(t, ax)
= ax + (ax)^{\beta_t}
\le (a + a^{\beta_t}) (x + x^{\beta_t})
= \Phi_\ensuremath{\alpha}(t, a) \Phi_\ensuremath{\alpha}(t, x).
\end{align*} \end{proof} }
\Ignore { \begin{lemma}\label{L:HolderInterpolation}
For any $f \in LL(\ensuremath{\BB{R}}^2)$ (see \cref{R:LL})
and any $0 < r < a < 1$,
\begin{align*}
\norm{f}_{C^r}
\le
\norm{f}_{L^\ensuremath{\infty}}
+
\frac{2 a}{e(a - r)} \norm{f}_{L^\ensuremath{\infty}}^{1 - a}
\norm{f}_{\dot{LL}}^{a}.
\end{align*} \end{lemma} \begin{proof}
We can write $\norm{f}_{C^r} = \norm{f}_{L^\ensuremath{\infty}}
+ \norm{f}_{\dot{C}^r}$, where
\begin{align*}
\norm{f}_{\dot{C}^r}
:= \sup_{\substack{x, y \in \ensuremath{\BB{R}}^2 \\ x \ne y}}
\frac{\abs{f(x) - f(y)}}{\abs{x - y}^r}.
\end{align*}
Let $\delta \in (0, 1)$. Then
\begin{align*}
\norm{f}_{\dot{C}^\delta}
&\le \sup_{\substack{x, y \in \ensuremath{\BB{R}}^2 \\ x \ne y}}
\frac{\ol{\mu}(\abs{x - y})}{\abs{x - y}^\delta}
\frac{\abs{f(x) - f(y)}}{\ol{\mu}(\abs{x - y})}
\le \norm{\rho_\delta}_{L^\ensuremath{\infty}}
\sup_{\substack{x, y \in \ensuremath{\BB{R}}^2 \\ x \ne y}}
\frac{\abs{f(x) - f(y)}}{\ol{\mu}(\abs{x - y})},
\end{align*}
where $\rho_\delta(x) = \ol{\mu}(x) x^{-\delta}$.
To determine the $L^\ensuremath{\infty}$ norm of $\rho_\delta$, first note that
for $x \ge 1/e$, we have
$\rho_\delta(x) \le e^{-1} e^{-\delta} \le e^{-1}$. On $(0, e^{-1})$, we have
$\rho_\delta(x) = - x^{1 - \delta} \log x$, so
\begin{align*}
\rho_\delta'(x)
= - (1 - \delta)x^{- \delta} \log x - x^{- \delta}
= - x^{- \delta} \pr{(1 - \delta) \log x + 1}.
\end{align*}
Hence, the derivative changes sign from positive on $(0, x_0)$
to negative on $(x_0, e^{-1})$, where $x_0$ is such that
$\log x_0 = -1/(1 - \delta)$. We conclude that $\rho_\delta$ reaches a maximum
at $x_0 = \exp(-1/(1 - \delta))$, which we note lies in $(0, e^{-1})$,
with a maximum value of
\begin{align*}
\rho_\delta(x_0)
&= -\pr{e^{-\frac{1}{1 - \delta}}}^{1 - \delta} \log x_0
= \frac{1}{e(1 - \delta)}.
\end{align*}
We conclude that
\begin{align*}
\norm{f}_{\dot{C}^\delta}
\le \frac{1}{e(1 - \delta)} \norm{f}_{\dot{LL}}.
\end{align*}
Now fix $a \in (r, 1)$ and let $\delta = \frac{r}{a}$. Then
\begin{align*}
\frac{\abs{f(x) - f(y)}}{\abs{x - y}^r}
&= \abs{f(x) - f(y)}^{1 - a}
\pr{\frac{\abs{f(x) - f(y)}}{\abs{x - y}^{\frac{r}{a}}}}^a
= \abs{f(x) - f(y)}^{1-a} \norm{f}_{\dot{C}^{\frac{r}{a}}}^a
\\
&\le \frac{2 a}{e(a - r)}
\norm{f}^{1 - a}_{L^{\ensuremath{\infty}}} \norm{f}_{\dot{LL}}^a,
\end{align*}
from which the result follows. \end{proof} }
\Ignore{ \ToDo{Maybe turn what follows, shortened, into a comment somewhere, as I have at least twice now fooled myself into thinking it should work and tremendously simplify things. Point out why it does not.}
We decompose $A_2^1(s, x)$ as
\begin{align*}
A_2^1(s, x)
&= \abs{\frac{J_2(s, x)}{\zeta(x)} - \frac{J_1(s, x)}{\zeta(x)}},
\end{align*}
where
\begin{align*}
J_2(s, x)
&=
\frac{\brac{(a_{h(x)} K) * (\omega_1(s) - \omega_2(s))}(X_1(s, x))}
{\zeta(x)}, \\
J_1(s, x)
&= \frac{\brac{(a_{h(x)} K) * (\omega_1^0 - \omega_2^0)}(X_1(s, x))}
{\zeta(x)}.
\end{align*}
But, $\omega_1(s) - \omega_2(s) = \curl (u_1(s) - u_2(s)) = \dv v(s)$,
where $v = (u_1 - u_2)^\perp$, so, also setting $v^0 = v(0)$,
\begin{align*}
J_2(s, x)
&=
\frac{\zeta(X_1(s, x))}{\zeta(x)}
\frac{\brac{(a_{h(x)} K) * \dv v(s)}(X_1(s, x))}
{\zeta(X_1(s, x))}, \\
J_1(s, x)
&= \frac{\zeta(X_1(s, x))}{\zeta(x)}
\frac{\brac{(a_{h(x)} K) * \dv v^0}(X_1(s, x))}
{\zeta(X_1(s, x))}.
\end{align*}
Applying \cref{L:preGB}, \cref{L:FlowBounds},
and \cref{L:alaKZBound}, we see that
\begin{align*}
\abs{J_2(s, x)}
&\le
C \norm{\frac{u_1(s) - u_2(s)}{\zeta}}_{L^\ensuremath{\infty}}, \\
\abs{J_1(s, x)}
&\le
C \norm{\frac{u_1^0 - u_2^0}{\zeta}}_{L^\ensuremath{\infty}}
\end{align*}
for all $s \in [0, T]$. }
\begin{comment}
\section{Some \Holder space bounds}
\noindent
The following lemma is an adaptation of Proposition A.3 of \cite{BK2015}. \begin{lemma}\label{L:f0etaBound}
Assume that $f_0 \in C^{-\delta}(\ensuremath{\BB{R}}^2) \cap L^\ensuremath{\infty}(\ensuremath{\BB{R}}^2)$
for some $\delta > 0$.
Let $u \in L^\ensuremath{\infty}(0, T; S_1)$
and let $X$ be the corresponding flow map.
Then letting $\ensuremath{\alpha} = \gamma \delta + 2 (\gamma - 1) > \delta$,
with $\gamma = \gamma(t)$ as in the statement of \cref{T:aT},
we have, for all $t < C_0^{-1}$,
\begin{align*}
\norm{f_0 \circ X^{-1}}_{C^{-\ensuremath{\alpha}}}
\le C \gamma(t) \norm{f_0}_{C^{-\delta}}.
\end{align*} \end{lemma} \begin{proof} Let $f = f_0 \circ X^{-1}$. For any Schwartz class function, $\phi$, we have \begin{align*}
\begin{split}
\innp{f, \phi}
&= \innp{f_0 \circ X^{-1}, \phi}
= \int_{\ensuremath{\BB{R}}^2} f_0(X^{-1}(x)) \phi(x) \, dx \\
&= \int_{\ensuremath{\BB{R}}^2} f_0(y) \phi(X(y)) \abs{\det \ensuremath{\nabla} X(y)} \, dy
= \innp{f_0, \phi \circ X \abs{\det \ensuremath{\nabla} X}} \\
&= \innp{f_0, \phi \circ X},
\end{split} \end{align*} where we used that $\det \ensuremath{\nabla} X(y) = 1$, because $X$ is measure-preserving.
By \cref{L:BesovXBound}, for any Schwartz class function, $\phi$, \begin{align}\label{e:phietaBound}
\norm{\phi \circ X}_{B^\delta_{1, 1}}
&\le C \gamma(t)
\norm{\phi}_{B^\ensuremath{\alpha}_{1, 1}}. \end{align} Letting $Q^\delta_{1, 1}$ be the set of all Schwartz class functions lying in $B^\delta_{1, 1}$ having norm $ \le 1$, Lemma A.4 of \cite{BK2015} (which relies upon Proposition 2.76 of \cite{BahouriCheminDanchin2011}) then gives
\begin{align*}
\begin{split}
\norm{f}_{C^{-\ensuremath{\alpha}}}
&= \norm{f}_{B^{-\ensuremath{\alpha}}_{\ensuremath{\infty}, \ensuremath{\infty}}}
= C \sup_{\phi \in Q^\ensuremath{\alpha}_{1, 1}}
\innp{f, \phi}
= C \sup_{\phi \in Q^\ensuremath{\alpha}_{1, 1}}
\innp{f_0, \phi \circ X} \\
&\le C \norm{f_0}_{C^{-\delta}}
\sup_{\phi \in Q^\ensuremath{\alpha}_{1, 1}}
\norm{\phi \circ X}_{B^\delta_{1, 1}} \\
&\le C \gamma(t) \norm{f_0}_{C^{-\delta}}
\sup_{\phi \in Q^\ensuremath{\alpha}_{1, 1}}
\norm{\phi}_{B^\ensuremath{\alpha}_{1, 1}}
= C \gamma(t)\norm{f_0}_{C^{-\delta}}.
\end{split} \end{align*}
\end{proof}
\begin{lemma}\label{L:BesovXBound}
Fix $\delta \in (0, 1)$. Then there exists a constant
$C > 0$ such that for all $t \in [0, T]$,
\begin{align*}
\norm{\phi \circ X(t)}_{B^\delta_{1, 1}}
\le C \gamma(t) \norm{\phi}_{B^{\ensuremath{\alpha}(t)}_{1, 1}}.
\end{align*} \end{lemma} \begin{proof} We have, \begin{align*}
I(r)
:= \int_{\ensuremath{\BB{R}}^2} &\int_{\abs{h} \le r}
\abs{\Delta_h^1 (\phi \circ X)(x)} \, dh \, dx
= \int_{\ensuremath{\BB{R}}^2} \int_{\abs{h} \le r}
\abs{\phi (X(x + h)) - \phi(X(x))} \, dh \, dx.
\end{align*} Making the measure-preserving change of variables, $h \mapsto z = X(x + h) - X(x)$, we have \begin{align*}
I(r)
&= \int_{\ensuremath{\BB{R}}^2} \int_{\abs{X^{-1}(X(x) + z) - x} \le r}
\abs{\phi (X(x) + z) - \phi(X(x))} \, dz \, dx. \end{align*} Making another measure-preserving change of variables, $w = X(x)$, \begin{align*}
I(r)
&= \int_{\ensuremath{\BB{R}}^2} \int_{\abs{X^{-1}(w + z) - X^{-1}(w)} \le r}
\abs{\phi (w + z) - \phi(w)} \, dz \, dw. \end{align*} By \cref{L:FlowUpperSpatialMOC}, \begin{align*}
\abs{X^{-1}(w + z) - X^{-1}(w)}
\le C \abs{z}^{\gamma(t)}, \end{align*} which we note holds for all $\abs{z} \le R$ for any fixed $R \ge 0$ (though $C = C(R)$ \ToDo{details needed}). Hence, $\abs{X^{-1}(w + z) - X^{-1}(w)} \le r$ will hold if $C \abs{z}^{\gamma(t)} \le r$; that is, if \begin{align*}
\abs{z}
\le C^{\frac{1}{\gamma(t)}} r^{\frac{1}{\gamma(t)}}
\le C r^{\frac{1}{\gamma(t)}}. \end{align*} Hence, \begin{align*}
I(r)
&\le \int_{\ensuremath{\BB{R}}^2}
\int_{\abs{z} \le C r^{\frac{1}{\gamma(t)}}}
\abs{\phi (w + z) - \phi(w)} \, dz \, dw. \end{align*} \ToDo{The presence of the factor of $C$ in $\abs{z} \le C r^{\frac{1}{\gamma(t)}}$ is a little annoying and has to be dealt with.} Hence, \begin{align*}
\int_0^1 r^{-\delta - 3} I(r) dr
&\le \int_0^1 r^{-\delta - 3}
\int_{\ensuremath{\BB{R}}^2}
\int_{\abs{z} \le C r^{\frac{1}{\gamma(t)}}}
\abs{\phi (w + z) - \phi(w)} \, dz \, dw
\, dr \\
&= \int_0^1 r^{-\frac{\delta + 3}{\gamma(t)}}
\int_{\ensuremath{\BB{R}}^2}
\int_{\abs{z} \le C r^{\frac{1}{\gamma(t)}}}
\abs{\phi (w + z) - \phi(w)} \, dz \, dw
\, dr. \end{align*} Now make the change of variables, $s = C r^{1/\gamma(t)}$. Thus, to within a constant multiple, $r = s^{\gamma(t)}$ so $dr = \gamma(t) s^{\gamma(t) - 1} \, ds$ and $r^{-\delta - 3} = s^{-\delta \gamma(t) - 3 \gamma(t)}$. This gives \begin{align*}
r^{-\delta - 3} \, dr
&= s^{-\delta \gamma(t) - 3 \gamma(t)} s^{\gamma(t) - 1} \, ds
= s^{-\delta \gamma(t) - 2 \gamma(t) - 1} \, ds
= s^{-\delta \gamma(t) - 2 \gamma(t) + 2 - 3} \, ds
= s^{-\ensuremath{\alpha} - 3}. \end{align*} Hence, \begin{align*}
\int_0^1r^{-\delta - 3} I(r) \, dr
&\le \gamma(t) \int_0^1 s^{-\ensuremath{\alpha} - 3}
\int_{\ensuremath{\BB{R}}^2}
\int_{\abs{z} \le s}
\abs{\phi (w + z) - \phi(w)} \, dz \, dw
\, ds \\
&= \gamma(t) \int_0^1 s^{-\ensuremath{\alpha} - 3}
\int_{\ensuremath{\BB{R}}^2}
\int_{\abs{z} \le s}
\abs{\Delta^1_z \phi(w)} \, dz \, dw
\, ds. \end{align*} By \cref{L:BesovByFiniteDifferences}, then, as long as $\ensuremath{\alpha}(t) < 1$, \begin{align*}
\norm{\phi \circ X(t)}_{B^\delta_{1, 1}}
&\le \norm{\phi \circ X}_{L^1}
+ \int_0^1r^{-\delta - 3} I(r) \, dr \\
&= \norm{\phi}_{L^1}
+ \gamma(t) \int_0^1 s^{-\ensuremath{\alpha} - 3}
\int_{\ensuremath{\BB{R}}^2}
\int_{\abs{z} \le s}
\abs{\Delta^1_z \phi (w)} \, dz \, dw
\, ds \\
&\le C \gamma(t) \norm{\phi}_{B^{\ensuremath{\alpha}(t)}_{1, 1}}, \end{align*} since $\norm{\phi \circ X}_{L^1} = \norm{\phi}_{L^1}$. This gives the result for $\ensuremath{\alpha}(t) < 1$. \ToDo{Deal with the higher differences. Probably impossible!}
Now suppose that $\ensuremath{\alpha}(t) \in [1, 2)$. Then \cref{L:BesovByFiniteDifferences} gives \begin{align*}
\norm{\phi \circ X}_{B^\delta_{1, 1}}
:= \norm{f}_{L^1}
+ \int_0^1 r^{-\delta - 3} \int_{\ensuremath{\BB{R}}^2} \int_{\abs{h} \le r}
\abs{\Delta_h^2 (\phi \circ X)(x)} \, dh \, dx \, dr. \end{align*} Now, \begin{align*}
\Delta_h^2 (\phi \circ X)(x)
= \phi(X(x + 2h)) - 2 \phi(X(x + h)) + \phi(X(x)). \end{align*} We make the same two measure-preserving changes of variables, $h \mapsto z = X(x + h) - X(x)$ and $w = X(x)$, as before, so that $X(x + h) = w + z$, we can write, \begin{align*}
\Delta_h^2 (\phi \circ X)(x)
&= \phi(w + 2z) - 2 \phi(w + z) + \phi(w)
+ (\phi(A(z, w)) - \phi(w + 2z)) \\
&= \Delta^2_z \phi(w) + \phi(X(x + 2h)) - \phi(w + 2z)) \\
&= \Delta^2_z \phi(w) + \phi(X(x + 2h)) - \phi \pr{2 X(x + h) - X(x))}. \end{align*} Integrating as we did above for $\ensuremath{\alpha} < 1$, though only making the change of variables for the $\Delta^2_z \phi(w)$ term, we now obtain \begin{align*}
\norm{\phi \circ X(t)}_{B^\delta_{1, 1}}
&\le \norm{\phi}_{L^1}
+ \gamma(t) \int_0^1 s^{-\ensuremath{\alpha} - 3}
\int_{\ensuremath{\BB{R}}^2}
\int_{\abs{z} \le s}
\abs{\Delta^2_z \phi (w)} \, dz \, dw
\, ds
+ R, \end{align*} where \begin{align*}
R
= \int_0^1 r^{-\delta - 3} \int_{\ensuremath{\BB{R}}^2} \int_{\abs{h} \le r}
\abs{\phi(X(x + 2h)) - \phi\pr{2 X(x + h) - X(x))}} \, dh \, dx \, dr. \end{align*} Now, \begin{align*}
\abs{\phi(X(x + 2h)) - \phi\pr{2 X(x + h) - X(x))}}
&\le \end{align*}
::::::
\begin{align*}
\abs{\phi (w + z) - \phi(w)}
= \abs{z} \abs{\int_0^1 \ensuremath{\nabla} \phi(w + uz) \cdot \frac{z}{\abs{z}} \, du}
\le \abs{z} \int_0^1 \abs{\ensuremath{\nabla} \phi(w + uz)} \, du \end{align*} so \begin{align*}
\int_0^1 s^{-\ensuremath{\alpha} - 3}
&\int_{\ensuremath{\BB{R}}^2}
\int_{\abs{z} \le s}
\abs{\phi (w + z) - \phi(w)} \, dz \, dw \, ds \\
&\le \int_0^1 s^{-\ensuremath{\alpha} - 3}
\int_{\ensuremath{\BB{R}}^2}
\int_{\abs{z} \le s}
\int_0^1
\abs{z}
\abs{\ensuremath{\nabla} \phi(w + uz)} \, du \, dz \, dw \, ds \\
&= \int_0^1 s^{-\ensuremath{\alpha} - 3}
\int_0^1
\int_{\abs{z} \le s}
\abs{z}
\int_{\ensuremath{\BB{R}}^2}
\abs{\ensuremath{\nabla} \phi(w + uz)} \, dw \, dz \, du \, ds \\
&=
\norm{\ensuremath{\nabla} \phi}_{L^1}
\int_0^1 s^{-\ensuremath{\alpha} - 3}
\frac{s^2}{2} \, ds \\
&=
\frac{\norm{\ensuremath{\nabla} \phi}_{L^1}}{2}
\int_0^1 s^{-\ensuremath{\alpha} - 1} \, ds
= \ensuremath{\infty}. \end{align*} \ToDo{This, of course, gets us nowhere!} \end{proof}
\Ignore{ \ToDo{If the Triebel-approach works, don't need this lemma.} \begin{lemma}\label{L:HomoVersusNonHomoBesov}
Let $\phi \in B^s_{1, 1}$ for some $s > 0$. Then
\begin{align*}
\norm{\phi}_{B^s_{1, 1}}
&\le C \norm{\phi}_{L^1} + \norm{\phi}_{\dot{B}^s_{1, 1}}, \\
\norm{\phi}_{\dot{B}^s_{1, 1}}
&\le C \norm{\phi}_{L^1} + \norm{\phi}_{B^s_{1, 1}}, \\
\end{align*} \end{lemma} \begin{proof}
Using the notation of \cite{BahouriCheminDanchin2011} (p. 61), we have
\begin{align*}
\medabs{\norm{\phi}_{B^s_{1, 1}} - \smallnorm{\dot{\phi}}_{B^s_{1, 1}}}
&= \abs{\sum_{j = -\ensuremath{\infty}}^{-1} 2^{js} \smallnorm{\dot{\Delta}_j \phi}_{L^1}
- \norm{\Delta_{-1} \phi}_{L^1}} \\
&\le C \norm{\phi}_{L^1} \sum_{j = -\ensuremath{\infty}}^{-1} 2^{js}
+ C \norm{\phi}_{L^1}
\le C \norm{\phi}_{L^1}.
\end{align*} \end{proof} }
\begin{lemma}\label{L:BesovByFiniteDifferences}
Let $0 < \ensuremath{\alpha} < N$, where $N \in \ensuremath{\BB{N}}$. There exists a constant $C > 0$
such that
\begin{align*}
C^{-1} \norm{f}_{B^\ensuremath{\alpha}_{1, 1}}
\le \norm{f}_{L^1}
+ \int_0^1 r^{-\ensuremath{\alpha} - 3} \int_{\ensuremath{\BB{R}}^2} \int_{\abs{h} \le r}
\abs{\Delta_h^N f(x)} \, dh \, dx \, dr
\le C \norm{f}_{B^\ensuremath{\alpha}_{1, 1}}
\end{align*}
for all $f \in B^\ensuremath{\alpha}_{1, 1}$. Here, $\Delta_h^N f$ is the
$N$-th finite difference of $f$, defined inductively for $h \in \ensuremath{\BB{R}}^d$,
$h \ne 0$, by
\begin{align*}
\Delta_h^1 f(x)
:= f(x + h) - f(x), \quad
\Delta_h^{n + 1} f
:= \Delta_h^1 \Delta_h^n f.
\end{align*} \end{lemma} \begin{proof}
This norm-equivalence is a special case of Theorem 1.116 (ii) of
\cite{TriebelFunctionSpacesIII}. \end{proof}
As long as $\ensuremath{\alpha} < 1$, \cref{L:BesovByFiniteDifferences} gives a $B^\ensuremath{\alpha}_{1, 1}$-equivalent norm that involves only the first difference of $f$. This works very well when $f$ is composed with a measure-preserving function like $X^{-1}$, for which we have some control on its modulus of continuity\xspace. We cannot control higher differences, however, which reduces the utility of \cref{L:BesovByFiniteDifferences}.
The first inequality in \cref{L:BesovByFiniteDifferences}, however, continues to hold when $N = 1$ for all $\ensuremath{\alpha} > 0$ (with an increased constant), as does the second inequality, somewhat weakened, as we show in \cref{L:BesovByFirstDifference}. This is the form that we apply in \cref{L:BesovXBound}.
\begin{lemma}\label{L:BesovByFirstDifference}
Let $\ensuremath{\alpha} > 0$. Then
\begin{align}\label{e:BesovAbove}
\norm{f}_{B^\ensuremath{\alpha}_{1, 1}}
\le \norm{f}_{L^1}
+ C 2^{\ensuremath{\alpha} + 1}
\int_0^1 r^{-\ensuremath{\alpha} - 3} \int_{\ensuremath{\BB{R}}^2} \int_{\abs{h} \le r}
\abs{\Delta_h^N f(x)} \, dh \, dx \, dr
\end{align}
and
\begin{align}\label{e:BesovBelow}
\norm{f}_{L^1}
+ \int_0^1 r^{-\ensuremath{\alpha} - 3} \int_{\ensuremath{\BB{R}}^2} \int_{\abs{h} \le r}
\abs{\Delta_h^N f(x)} \, dh \, dx \, dr
\le ???
\end{align}
for all $f \in B^\ensuremath{\alpha}_{1, 1}$. \end{lemma} \begin{proof}
Consider first $\ensuremath{\alpha} \in [1, 2)$. Then since
$\Delta^2_h f(x) = \Delta^1_h f(x + h) - \Delta^1_h f(x)$,
we have
\begin{align*}
\smallabs{\Delta^2_h f(x)}
\le \smallabs{\Delta^1_h f(x + h)} + \smallabs{\Delta^1_h f(x)}.
\end{align*}
It follows from \cref{L:BesovByFiniteDifferences} that
\begin{align*}
C^{-1} \norm{f}_{B^\ensuremath{\alpha}_{1, 1}}
&\le \norm{f}_{L^1}
+ \int_0^1 r^{-\ensuremath{\alpha} - 3} \int_{\ensuremath{\BB{R}}^2} \int_{\smallabs{h} \le r}
\smallabs{\Delta_h^2 f(x)} \, dh \, dx \, dr \\
&\le \norm{f}_{L^1}
+ 2 \int_0^1 r^{-\ensuremath{\alpha} - 3} \int_{\ensuremath{\BB{R}}^2} \int_{\smallabs{h} \le r}
\smallabs{\Delta_h^1 f(x)} \, dh \, dx \, dr.
\end{align*}
The result for all $\ensuremath{\alpha}$ follows inductively
from $\Delta_h^{n + 1} f = \Delta_h^1 \Delta_h^n f$.
For the reverse inequality, we need to examine the proof of
\cref{L:BesovByFiniteDifferences}.
\ToDo{Here, we have a problem: \cite{TriebelFunctionSpacesIII} does not
give a proof, and the references don't do much good.
Instead, I am going to prove the equivalent analog
of Theorem 2.36 of \cite{BahouriCheminDanchin2011},
which does contain a nice proof. The references that follow
are to the inner workings of that proof, so you will need
to read \cite{BahouriCheminDanchin2011} at the same time
as this proof.}
What we want to show is that
\begin{align*}
\norm{\frac{\norm{\tau_{-y} u - u}_{L^1}}{\abs{y}^s}}
_{L^1(\ensuremath{\BB{R}}^2, \abs{y}^{-2} dy)}
\le ???
\end{align*}
where $s \ge 1$.
From the estimates on the proof of
Theorem 2.36 of \cite{BahouriCheminDanchin2011}, we have
\begin{align*}
\norm{\frac{\norm{\tau_{-y} u - u}_{L^1}}{\abs{y}^s}}
_{L^1(\ensuremath{\BB{R}}^2, \abs{y}^{-2} dy)}
\le C \norm{u}_{\dot{B}^s_{1, 1}}(I_1 + I_2),
\end{align*}
where
\begin{align*}
I_1
&:= \int_{\ensuremath{\BB{R}}^2} \sum_{j \le j_y} c_j 2^{j(1 - s)}
\abs{y}^{-s - 1} \, dy, \\
I_2
&:= \int_{\ensuremath{\BB{R}}^2} \sum_{j > j_y} c_j 2^{-js}
\abs{y}^{-s - 2} \, dy.
\end{align*}
Here, the constants, $(c_j)$, lie in the unit sphere in $\ell^1(\ensuremath{\BB{Z}})$; that is,
\begin{align*}
\sum_{j = -\ensuremath{\infty}}^\ensuremath{\infty} \abs{c_j} = 1.
\end{align*}
Also, $j_y \in \ensuremath{\BB{Z}}$ is chosen so that
\begin{align*}
\frac{1}{\abs{y}}
\le 2^{j_y}
< \frac{2}{\abs{y}}.
\end{align*}
Up to this point in the proof any $s \in \ensuremath{\BB{R}}$ would suffice.
The authors go on to show that \text{if} $s \in (0, 1)$ then
$I_1 \le C$, and then indicate that a strictly analogous argument gives
$I_2 \le C$. To piggyback on their work, therefore, we
let $\overline{s} := s - [s] \in (0, 1)$. \ToDo{Unless $s$ is an integer, in which
case we increase it somewhat---but worry about this detail later.}
Then we rewrite $I_1$ as
\begin{align*}
I_1
&:= \int_{\ensuremath{\BB{R}}^2} \sum_{j \le j_y} c_j 2^{j(1 - \overline{s})}
\abs{y}^{-\overline{s} - 1}
2^{-j(s - \overline{s})}
\abs{y}^{s - \overline{s}}
\, dy.
\end{align*}
Following the bounds in \cite{BahouriCheminDanchin2011} specialized to
$r = 1$, which simplifies the proof considerably, we have
\begin{align*}
I_1
&\le
\end{align*}
\end{proof}
\Ignore{ \begin{lemma}
Let $f \colon \ensuremath{\BB{R}}^2 \to \ensuremath{\BB{R}}$. For all $x, h \in \ensuremath{\BB{R}}^2$,
\begin{align*}
\Delta^2_h f(x)
= \Delta^1_h f(x + h) - \Delta^1_h f(x).
\end{align*} \end{lemma} \begin{proof}
We have,
\begin{align*}
\Delta^1_h f(x + &h) - \Delta^1_h f(x)
= f(x + 2h) - f(x + h) - (f(x + h) - f(x)) \\
&= f(x + 2h) - 2 f(x + h) + f(x)
= \Delta^2_h f(x).
\end{align*} \end{proof} }
\end{comment}
\section{\Holder space estimates}\label{S:LP}
In this section, we make use of the Littlewood-Paley operators $\Delta_j$, $j\geq -1$. A detailed definition of these operators and their properties can be found in Chapter 2 of \cite{C1998}. We note here only that $\Delta_j f = \varphi_j * f$, where $\varphi_j(\cdot) = 2^{2j} \varphi(2^j \cdot)$ for $j \ge 0$, $\varphi$ is a Schwartz function, and the Fourier transform of $\varphi$ is
supported in an annulus. We can write $\Delta_{-1} f$ as a convolution with a Schwartz function $\chi$ whose Fourier transform is supported in a ball.
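In particular, recall the standard fact (see \cite{C1998}) that every tempered distribution $f$ on $\ensuremath{\BB{R}}^2$ admits the decomposition
\begin{align*}
f = \sum_{j \ge -1} \Delta_j f,
\end{align*}
with convergence in the sense of tempered distributions; we use this freely below, for instance in \cref{e:initial}.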
We will also make use of the following Littlewood-Paley definition of \Holder spaces. \begin{definition}\label{D:LPHolder} Let $r\in\ensuremath{\BB{R}}$. The space $C_*^r(\ensuremath{\BB{R}}^2)$ is defined to be the set of all tempered distributions on $\ensuremath{\BB{R}}^2$ for which \begin{equation*}
\| f \|_{C_*^r} := \sup_{j\geq -1} 2^{r j}
\|\Delta_j f \|_{L^{\infty}} <\infty. \end{equation*} \end{definition}
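We record one simple consequence of \cref{D:LPHolder} that is used implicitly below: every $f \in L^\ensuremath{\infty}(\ensuremath{\BB{R}}^2)$ lies in $C_*^r(\ensuremath{\BB{R}}^2)$ for each $r < 0$. Indeed, each $\Delta_j$ is given by convolution with a kernel whose $L^1$ norm is bounded independently of $j$, so
\begin{align*}
\norm{f}_{C_*^r}
= \sup_{j \ge -1} 2^{rj} \norm{\Delta_j f}_{L^\ensuremath{\infty}}
\le C \sup_{j \ge -1} 2^{rj} \norm{f}_{L^\ensuremath{\infty}}
\le C 2^{-r} \norm{f}_{L^\ensuremath{\infty}}.
\end{align*}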
\begin{remark}\label{R:HolderWarning} It follows from Propositions 2.3.1, 2.3.2 of \cite{C1998} that the $C_*^r$ norm is equivalent to the classical \Holder space $C^r$ norm when $r$ is a positive non-integer: $\norm{f}_{C^r} \le a_r \norm{f}_{C_*^r}$, $\norm{f}_{C_*^r} \le b_r \norm{f}_{C^r}$ for constants, $a_r$, $b_r > 0$, though $a_r \to \ensuremath{\infty}$ as $r$ approaches an integer. See \cref{R:NegHolder}. \end{remark}
\Ignore{ \begin{lemma}\label{L:HolderEquivalence}
There exists a constant $C > 0$ such that for any positive non-integer
$r$, which we write as $r = k + \ensuremath{\alpha}$ for $k$ a integer, $\ensuremath{\alpha} \in (0, 1)$,
we have
\begin{align*}
C \ensuremath{\alpha} \norm{f}_{C^r}
\le \norm{f}_{C_*^r}
\le \frac{C^{r + 1}}{k!} \norm{f}_{C^r}.
\end{align*} \end{lemma} \begin{proof}
The follows from
Proposition2 2.3.1, 2.3.2 of \cite{C1998}. \end{proof}
\cref{L:HolderEquivalence} shows that the $C_*^r$ norm is equivalent to the classical Holder space $C^r$ norm when $r$ is a positive non-integer. As $r$ decreases to an integer, however, the $C^r$ norm may blow up while the $C_*^r$ norm remains finite, and controlling the $C^r_*$ norm gives less and less control over the $C^r$ norm. }
\begin{prop}\label{L:ForJ1Bound} Let $\zeta$ be a pre-growth bound\xspace and let $u \in L^{\infty}(0, T; S_1)$ with $X$ its associated flow map. Let $t \in [0, T]$ and set $\eta=\zeta\circ X^{-1}(t)$. For any $\alpha > 0$, $\ensuremath{\lambda}>0$, and $f \in L^\ensuremath{\infty}(\ensuremath{\BB{R}}^2)$, \begin{align}\label{e:TheJ1Bound}
\norm{\frac{(a_{\ensuremath{\lambda}}K) * f}{\eta}}_{L^\ensuremath{\infty}}
\le C(1 + \norm{f}_{L^\ensuremath{\infty}})
\Phi_\ensuremath{\alpha} \pr{t, \norm{\frac{f}{\eta}}_{C^{-\ensuremath{\alpha}}}}, \end{align}
where
$C = C(T, \zeta, \ensuremath{\lambda})$,
and $\Phi_\ensuremath{\alpha}$ is defined in
\cref{e:Phialpha} (using $C_0 = \norm{u}_{L^\ensuremath{\infty}(0, T; S_1)}$). \end{prop} \begin{proof} Define $g = 1/\eta$.
For fixed $N \ge -1$ (to be chosen later), write \begin{equation}\label{e:initial} \begin{split} &\abs{\frac{(a_{\ensuremath{\lambda}}K) * f(x)}{\eta(x)}}
= \left| g(x) \int_{\ensuremath{\BB{R}}^2} a_{\ensuremath{\lambda}}(y) K(y) \eta(x - y)(f/\eta)(x - y)\, dy \right|\\
&\qquad = \left | g(x) \int_{\ensuremath{\BB{R}}^2} a_{\ensuremath{\lambda}}(y) K(y)\eta(x - y)\sum_{j\geq -1}(\Delta_j(f/\eta))(x - y)\, dy \right|\\
&\qquad \leq \sum_{-1\leq j\leq N} \left | g(x) \int_{\ensuremath{\BB{R}}^2} a_{\ensuremath{\lambda}}(y) K(y)\eta(x - y)(\Delta_j(f/\eta))(x - y)\, dy \right| \\
&\qquad + \sum_{j > N} \left | g(x) \int_{\ensuremath{\BB{R}}^2} a_{\ensuremath{\lambda}}(y) K(y)\eta(x - y)(\Delta_j(f/\eta))(x - y)\, dy \right| =: I + II. \end{split} \end{equation} We first estimate $I$. Exploiting \cref{D:LPHolder}, \begin{equation}\label{e:I} \begin{split}
&I = \sum_{-1\leq j\leq N} 2^{-j \alpha} 2^{j \alpha} \left | g(x) \int_{\ensuremath{\BB{R}}^2} a_{\ensuremath{\lambda}}(y) K(y)\eta(x - y)(\Delta_j(f/\eta))(x - y)\, dy \right|\\
&\qquad \leq \sup_{j} 2^{-j\alpha} \| \Delta_j(f/\eta) \|_{L^{\infty}}\sum_{-1\leq j\leq N} 2^{j\alpha} g(x) \int_{\ensuremath{\BB{R}}^2} |a_{\ensuremath{\lambda}}(y) K(y)\eta(x - y)| \, dy\\
&\qquad \leq C2^{\ensuremath{\alpha} N} \| f/\eta \|_{C^{-\alpha}}g(x) \int_{\ensuremath{\BB{R}}^2} |a_{\ensuremath{\lambda}}(y) K(y)\eta(x - y)| \, dy. \end{split} \end{equation}
Since $\eta=\zeta\circ X^{-1}(t)$, where $\zeta$ is a pre-growth bound\xspace , \cref{L:zetaFlowBound} implies that \begin{equation*} \begin{split}
&g(x) \int_{\ensuremath{\BB{R}}^2} |a_{\ensuremath{\lambda}}(y) K(y)\eta(x - y)| \, dy
\le Cg(x) \int_{\ensuremath{\BB{R}}^2} |a_{\ensuremath{\lambda}}(y) K(y)\zeta(x - y)| \, dy\\
&\qquad \le C\int_{\ensuremath{\BB{R}}^2} | a_{\ensuremath{\lambda}}(y) K(y)g(x)(\zeta(x) + \zeta(y))| \, dy \\
&\qquad \leq C\int_{\ensuremath{\BB{R}}^2} | a_{\ensuremath{\lambda}}(y) K(y)|(1+ g(x)\zeta(y)) \, dy
\le C \int_0^{\ensuremath{\lambda}} (1 + g(x)\zeta(r)) \, dr \\
&\qquad
\le C\ensuremath{\lambda} \pr{1 + g(x)\zeta(\ensuremath{\lambda})}
\le C\ensuremath{\lambda}(1 + \zeta(\ensuremath{\lambda}))
= C(\ensuremath{\lambda}), \end{split} \end{equation*} where we used boundedness of $g$. Substituting this estimate into \cref{e:I}, we conclude that \begin{equation}\label{Ie:finalest}
I \leq C 2^{\ensuremath{\alpha} N} \| f/\eta \|_{C^{-\alpha}}. \end{equation}
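The passage from the two-dimensional integral to $\int_0^\ensuremath{\lambda} (1 + g(x)\zeta(r)) \, dr$ in the unnumbered chain above is just integration in polar coordinates. For the reader's convenience, and assuming (only for this sketch) that $\abs{K(y)} = (2\pi\abs{y})^{-1}$, that $\abs{a_\ensuremath{\lambda}} \le 1$, and that $a_\ensuremath{\lambda}$ is supported in $B_\ensuremath{\lambda}(0)$, we have
\begin{align*}
\int_{\ensuremath{\BB{R}}^2} \abs{a_\ensuremath{\lambda}(y) K(y)} \pr{1 + g(x)\zeta(y)} \, dy
\le \int_0^{2\pi} \int_0^\ensuremath{\lambda}
\frac{1 + g(x)\zeta(r)}{2\pi r} \, r \, dr \, d\theta
= \int_0^\ensuremath{\lambda} \pr{1 + g(x)\zeta(r)} \, dr.
\end{align*}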
We now estimate $II$ by introducing a commutator and utilizing the \Holder continuity of $\eta$, writing \begin{equation}\label{e:II} \begin{split} II
&\leq \sum_{j > N} \left | g(x) \int_{\ensuremath{\BB{R}}^2} a_{\ensuremath{\lambda}}(y) K(y)\Delta_j f(x - y)\, dy \right| \\
&\qquad + \sum_{j > N} \left | g(x) \int_{\ensuremath{\BB{R}}^2} a_{\ensuremath{\lambda}}(y) K(y)[\Delta_j f(x - y) - \eta(x - y)(\Delta_j(f/\eta))(x - y)]\, dy \right| \\ &\qquad =: III + IV. \end{split} \end{equation}
We rewrite $III$ as a convolution, noting that the Littlewood-Paley operators commute with convolutions, and apply Bernstein's Lemma
and \cref{L:ALPBound} to give \begin{equation*} \begin{split}
III
&= \sum_{j > N} \abs{ g(x) ((a_{\ensuremath{\lambda}} K) * \Delta_j f(x))}
= g(x) \sum_{j > N} \abs{\Delta_j(a_{\ensuremath{\lambda}} K * f)(x)} \\
&\leq g(x) \sum_{j > N} 2^{-j} \norm{\nabla \Delta_j(a_{\ensuremath{\lambda}} K * f)}
_{L^{\infty}}
\le C g(x) \norm{f}_{L^\ensuremath{\infty}} \sum_{j > N} 2^{-j}
\le C g(x) 2^{-N} \norm{f}_{L^\ensuremath{\infty}}. \end{split} \end{equation*}
To estimate $IV$, we apply \cref{L:CommutatorEstimate} and \cref{e:aKBound} to obtain \begin{align*} \begin{split} IV
&\leq \sum_{j > N} g(x) \int_{\ensuremath{\BB{R}}^2} \left | a_{\ensuremath{\lambda}}(y) K(y) \right | \left | \Delta_j f(x - y) - \eta(x - y)(\Delta_j(f/\eta))(x - y) \right | \, dy \\ &\le \sum_{j > N} C \norm{f}_{L^\ensuremath{\infty}}
2^{-je^{-C_0t}} g(x) \int_{\ensuremath{\BB{R}}^2} \left | a_{\ensuremath{\lambda}}(y) K(y) \right | \, dy \leq C \norm{f}_{L^\ensuremath{\infty}} 2^{-Ne^{-C_0t}}
g(x) \ensuremath{\lambda}. \end{split} \end{align*}
Substituting the estimates for $III$ and $IV$ into \cref{e:II} gives \begin{equation*}
II
\le C (1 + \ensuremath{\lambda}) \norm{f}_{L^\ensuremath{\infty}} 2^{-Ne^{-C_0t}} g(x). \end{equation*} Finally, substituting the estimates for $I$ and $II$ into \cref{e:initial} yields \begin{align}\label{e:finalest}
\begin{split}
\abs{\frac{(a_{\ensuremath{\lambda}}K) * f(x)}{\eta(x)}}
&\le C \pr{1 + \norm{f}_{L^\ensuremath{\infty}}}
\pr{2^{\ensuremath{\alpha} N} \norm{f/\eta}_{C^{-\ensuremath{\alpha}}}
+ 2^{-Ne^{-C_0t}} g(x)},
\end{split} \end{align} where $C$ depends on $\ensuremath{\lambda}$.
Now, we are free to choose the integer $N \ge -1$ any way we wish, but if we choose $N$ as close to \begin{align*}
N^*
&:= \frac{1}{\ensuremath{\alpha} + e^{-C_0t}}
\log_2 \pr{\frac{1}{\norm{f/\eta}_{C^{-\ensuremath{\alpha}}}}}
= - \frac{1}{\ensuremath{\alpha} + e^{-C_0t}}
\log_2 \norm{f/\eta}_{C^{-\ensuremath{\alpha}}} \end{align*} as possible, we will be near the minimizer of the bound in \cref{e:finalest}, as long as $N^* \ge -1$ (because such an $N^*$ balances the two terms). Calculating with $N = N^*$ gives \begin{align*}
2^{\ensuremath{\alpha} N} \norm{f/\eta}_{C^{-\ensuremath{\alpha}}}
&= 2^{\frac{\ensuremath{\alpha}}{\ensuremath{\alpha} + e^{-C_0t}}
\log_2 \pr{\frac{1}{\norm{f/\eta}_{C^{-\ensuremath{\alpha}}}}}}
\norm{f/\eta}_{C^{-\ensuremath{\alpha}}}
= \norm{f/\eta}_{C^{-\ensuremath{\alpha}}}^{\frac{e^{-C_0t}}{\ensuremath{\alpha} + e^{-C_0t}}},
\\
2^{-Ne^{-C_0t}} g(x)
&= 2^{\frac{e^{-C_0t}}{\ensuremath{\alpha} + e^{-C_0t}}
\log_2 \norm{f/\eta}_{C^{-\ensuremath{\alpha}}}} g(x)
\le C\norm{f/\eta}_{C^{-\ensuremath{\alpha}}}^{\frac{e^{-C_0t}}{\ensuremath{\alpha} + e^{-C_0t}}}, \end{align*} since $g$ is bounded. Rounding $N^*$ up or down to the nearest integer will only introduce a multiplicative constant no larger than $C 2^\ensuremath{\alpha}$, so this yields \begin{align}\label{e:alambdaKfzetaBound1}
\abs{\frac{(a_{\ensuremath{\lambda}}K) * f(x)}{\eta(x)}}
\le C \pr{1 + \norm{f}_{L^\ensuremath{\infty}}}
\norm{f/\eta}_{C^{-\ensuremath{\alpha}}}^{\frac{e^{-C_0t}}{\ensuremath{\alpha} + e^{-C_0t}}} \end{align} as long an $N^* \ge -1$.
If $N^* < -1$, then it must be that \begin{align*}
\norm{f/\eta}_{C^{-\ensuremath{\alpha}}}^{\frac{1}{\ensuremath{\alpha} + e^{-C_0t}}}
\ge 2 \end{align*} so \begin{align*}
\norm{f/\eta}_{C^{-\ensuremath{\alpha}}}^{\frac{e^{-C_0t}}{\ensuremath{\alpha} + e^{-C_0t}}}
\ge 1
\ge Cg(x) \end{align*} for some constant $C > 0$. Then, from \cref{e:finalest}, \begin{align*}
\abs{\frac{(a_{\ensuremath{\lambda}}K) * f(x)}{\eta(x)}}
&\le C \pr{1 + \norm{f}_{L^\ensuremath{\infty}}}
\pr{2^{\ensuremath{\alpha} N} \norm{f/\eta}_{C^{-\ensuremath{\alpha}}}
+ 2^{-Ne^{-C_0t}} \norm{f/\eta}_{C^{-\ensuremath{\alpha}}}
^{\frac{e^{-C_0t}}{\ensuremath{\alpha} + e^{-C_0t}}}}. \end{align*}
Choosing $N = -1$ yields \begin{align}\label{e:alambdaKfzetaBound2}
\begin{split}
\abs{\frac{(a_{\ensuremath{\lambda}}K) * f(x)}{\eta(x)}}
&\le C \pr{1 + \norm{f}_{L^\ensuremath{\infty}}}
\pr{\norm{f/\eta}_{C^{-\ensuremath{\alpha}}}
+ \norm{f/\eta}_{C^{-\ensuremath{\alpha}}}^\frac{e^{-C_0t}}{\ensuremath{\alpha} + e^{-C_0t}}} \\
&= C \pr{1 + \norm{f}_{L^\ensuremath{\infty}}}
\Phi_\ensuremath{\alpha} \pr{t, \norm{\frac{f}{\eta}}_{C^{-\ensuremath{\alpha}}}}.
\end{split} \end{align} Adding the bounds in \cref{e:alambdaKfzetaBound1,e:alambdaKfzetaBound2}, we conclude \cref{e:TheJ1Bound} holds for all values of $\norm{f/\eta}_{C^{-\ensuremath{\alpha}}}$. \end{proof}
\begin{comment} First, we reformulate the Paley-Littlewood term in the form, \begin{align}\label{e:PLReformulated}
\begin{split}
&\norm{\ensuremath{\nabla} \Delta_j ((a_{h(x)} K) * f)}_{L^\ensuremath{\infty}}
= \norm{\ensuremath{\nabla} \phi_j * ((a_{h(x)} K) * f)}_{L^\ensuremath{\infty}} \\
&\qquad
\le \norm{\ensuremath{\nabla} \phi_j}_{L^1} \norm{(a_{h(x)} K) * f}_{L^\ensuremath{\infty}}
\le \norm{\ensuremath{\nabla} \phi_j}_{L^1} \norm{a_{h(x)} K}_{L^1} \norm{f}_{L^\ensuremath{\infty}} \\
&\le C h(x) \norm{\ensuremath{\nabla} \phi_j}_{L^1} \norm{f}_{L^\ensuremath{\infty}}.
\end{split} \end{align} Here, $\phi_j$ for $j \ge 0$ is as in our \cite{CK2016} and $\phi_{-1} = \chi$ of our \cite{CK2016}. But by scaling, \begin{align*}
\norm{\ensuremath{\nabla} \phi_j}_{L^1}
&= 2^{2j} \int_{\ensuremath{\BB{R}}^2} 2^j \ensuremath{\nabla} \phi(2^j y) \, dy
= 2^j \int_{\ensuremath{\BB{R}}^2} \ensuremath{\nabla} \phi(z) \, dz
= 2^j \norm{\ensuremath{\nabla} \phi}_{L^1}
= C 2^j \end{align*} for $j \ge 0$, and of course also $\norm{\ensuremath{\nabla} \phi_{-1}}_{L^1} = C 2^{-1}$. We conclude that \begin{align*}
\norm{\ensuremath{\nabla} \Delta_j ((a_{h(x)} K) * f)}_{L^\ensuremath{\infty}}
\le C 2^j h(x) \norm{f}_{L^\ensuremath{\infty}}. \end{align*} The factor of $2^j$ is just the right size to kill us. It is hard to see how to avoid it, though. \end{comment}
\begin{lemma}\label{L:zetaFlowBound}
Let $\zeta$ be a pre-growth bound\xspace
and let $u \in L^\ensuremath{\infty}(0, T; S_1)$ with $X$ its associated
flow map. Then for all $t \in [0, T]$, $x \in \ensuremath{\BB{R}}^2$,
\begin{align}\label{e:zetaFlowBoundEq}
C(T) \zeta(X^{-1}(t, x))
\le \zeta(x)
\le C'(T) \zeta(X^{-1}(t, x))
\end{align}
for constants, $0 < C(T) < C'(T)$. \end{lemma} \begin{proof}
We rewrite \cref{e:zetaFlowBoundEq} as
\begin{align*}
C(T) \zeta(x)
\le \zeta(X(t, x))
\le C'(T) \zeta(x)
\end{align*}
for all $x \in \ensuremath{\BB{R}}^2$, a bound that follows easily from
\cref{L:preGB,L:FlowBounds}. \end{proof}
\begin{lemma}\label{L:ALPBound}
For all $j \ge -1$ and all $\ensuremath{\lambda} > 0$,
\begin{align*}
\norm{\ensuremath{\nabla} \Delta_j ((a_\ensuremath{\lambda} K) * f)}_{L^\ensuremath{\infty}}
\le C \norm{f}_{L^\ensuremath{\infty}}.
\end{align*}
\begin{comment}
Also for all $j \ge -1$ and all $n \in \ensuremath{\BB{N}}$ there exists
and absolute constant $C_n > 0$ such that
\begin{align*}
\norm{\ensuremath{\nabla} \Delta_j ((a_\ensuremath{\lambda} K) * f)}_{L^\ensuremath{\infty}}
\le C \norm{\Delta_j f}_{L^\ensuremath{\infty}} + C_n 2^{-jn} \norm{f}_{L^\ensuremath{\infty}}.
\end{align*}
\end{comment} \end{lemma} \begin{proof}
Write
\begin{align*}
(a_\ensuremath{\lambda} K) * f(z)
= \pr{K * (a_\ensuremath{\lambda}(\cdot - z) f)}(z).
\end{align*}
For fixed $z$, define the divergence-free vector field,
\begin{align*}
b_z(w)
:= \pr{K * (a_\ensuremath{\lambda}(\cdot - z) f)}(w).
\end{align*}
We know (for instance, from Lemma 4.2 of \cite{CK2006}) that
$\norm{\ensuremath{\nabla} \Delta_j b_z}_{L^\ensuremath{\infty}}
\le C \norm{\Delta_j \curl b_z}_{L^\ensuremath{\infty}}$. Thus,
\begin{align*}
\norm{\ensuremath{\nabla} \Delta_j ((a_\ensuremath{\lambda} K) * f)}_{L^\ensuremath{\infty}}
&= \norm{\ensuremath{\nabla} \Delta_j b_z}_{L^\ensuremath{\infty}}
\le C \norm{\Delta_j \curl b_z}_{L^\ensuremath{\infty}}
= C \norm{\Delta_j(a_\ensuremath{\lambda}(\cdot - z) f)}_{L^\ensuremath{\infty}} \\
&\le C \norm{a_\ensuremath{\lambda}(\cdot - z) f}_{L^\ensuremath{\infty}}
\le C \norm{f}_{L^\ensuremath{\infty}}.
\end{align*} \end{proof}
\begin{lemma}\label{L:CommutatorEstimate}
With the assumptions as in \cref{L:ForJ1Bound}, for all
$x, y \in \ensuremath{\BB{R}}^2$, we have
\begin{align*}
&\abs{\Delta_j f(x - y)
- \eta(x - y) (\Delta_j (f/\eta))(x - y)}
\le C \norm{f}_{L^\ensuremath{\infty}}
\norm{\ensuremath{\nabla} \zeta}_{L^\ensuremath{\infty}} 2^{-je^{-C_0t}}.
\end{align*} \end{lemma} \begin{proof} First observe that for any $x, y \in \ensuremath{\BB{R}}^2$, \begin{equation*} \begin{split} &\abs{\eta(x) - \eta(y)} = \frac{\abs{\zeta(X^{-1}(t,x)) - \zeta(X^{-1}(t,y))}}{\abs{X^{-1}(t,x) - X^{-1}(t,y)}} \abs{X^{-1}(t,x) - X^{-1}(t,y)}
\leq \|\nabla \zeta \|_{L^{\infty}} \chi_t(\abs{x-y}) \end{split} \end{equation*} by \cref{L:FlowUpperSpatialMOC}.
We then have,
\begin{align*}
&\abs{\Delta_j f(x - y)
- \eta(x - y) (\Delta_j (f/\eta))(x - y)} \\
&\qquad
= \abs{\int_{\ensuremath{\BB{R}}^2}
\pr{\varphi_j(z) (\eta f/\eta)(x - y - z)
- \eta(x - y) \varphi_j(z) (f/\eta)(x - y - z)} \, dz} \\
&\qquad
\le \int_{\ensuremath{\BB{R}}^2}
\abs{\varphi_j(z) (f/\eta)(x - y - z)}
\abs{\eta(x - y - z) - \eta(x - y)} \, dz \\
&\qquad
\le \norm{\ensuremath{\nabla} \zeta}_{L^\ensuremath{\infty}}
\int_{\ensuremath{\BB{R}}^2}
\chi_t(\abs{z}) \abs{\varphi_j(z)} \abs{(f/\eta)(x - y - z)} \, dz
\le C \norm{\ensuremath{\nabla} \zeta}_{L^\ensuremath{\infty}} \norm{f}_{L^\ensuremath{\infty}}2^{-je^{-C_0t}}.
\end{align*}
Here we used \cref{e:chitBound} and that $\varphi$ is Schwartz class to obtain
\begin{align*}
\int_{\ensuremath{\BB{R}}^2} &\abs{\varphi_j(z)} \chi_t(\abs{z})\, dz
= 2^{2j} \int_{\ensuremath{\BB{R}}^2} \abs{\varphi(2^j z)} \chi_t(\abs{z}) \, dz
= \int_{\ensuremath{\BB{R}}^2} \abs{\varphi(w)} \chi_t(2^{-j} \abs{w}) \, dw \\
&\le 2^{-je^{-C_0t}}
\int_{\ensuremath{\BB{R}}^2} \abs{\varphi(w)} \chi_t(\abs{w}) \, dw
\le C 2^{-je^{-C_0t}}.
\end{align*} \end{proof}
\Ignore{
\cref{P:f0XBound} extends to log-Lipschitz vector fields similar bounds for Lipschitz vector fields, which have been obtained in great generality for Besov space norms of various types---see Section 3.2 of \cite{BahouriCheminDanchin2011}, for instance. (The bounds for Lipschitz vector fields apply for all time and have no loss of regularity, just an increase in norms.) The version we present here is the log-Lipschitz analog of the special case used by Chemin \cite{C1991,Chemin1993Persistance} in proving the propagation of the regularity of a vortex patch boundary: in our notation, $\norm{f \circ X^{-1}(t)}_{C^{-\ensuremath{\alpha}}} \le C e^{C' t} \norm{f}_{C^{-\ensuremath{\alpha}}}$ for $\ensuremath{\alpha} \in (0, 1)$, where $C'$ is proportional to the Lipschitz constant of the underlying vector field. (A relatively simple proof of this bound appears in Proposition A.3 of \cite{BK2015}.) Observe that, in light of \cref{R:aT}, it is important for us to obtain the dependence of the constant on $\delta$ and $\ensuremath{\alpha}$. }
The following proposition is a simplified version of Theorem 3.28 of \cite{BahouriCheminDanchin2011}. \begin{prop}\label{P:f0XBound} Let $u\in L^{\infty}(0,T; S_1)$. Assume $f\in C([0,T];L^{\infty})$ solves the transport equation \begin{equation*} \begin{split} &\partial_t f + u\cdot\nabla f = 0\\
&f|_{t=0} = f^0. \end{split} \end{equation*} For fixed $\delta\in(-1,0)$, there exists a constant $C=C(\delta)$ such that for any $\beta>C$, if \begin{equation}\label{2}
\delta_t = \delta - \beta\int_0^t \| u(s) \|_{LL} \, ds, \end{equation} and if $T^*$ satisfies $\delta_{T^*} \geq -1$, then \begin{equation*}
\sup_{t\in[0,T^*]} \| f(t) \|_{C^{\delta_t}} \leq \frac{\beta}{\beta-C} \| f_0 \|_{C^{\delta}}. \end{equation*} \end{prop}
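For orientation, consider the purely illustrative situation in which $\norm{u(s)}_{LL} \le M$ for all $s \in [0, T]$, where $M$ is a constant introduced only for this example. Then
\begin{align*}
\delta_t \ge \delta - \beta M t,
\end{align*}
so the requirement $\delta_{T^*} \ge -1$ is satisfied whenever $T^* \le (1 + \delta)/(\beta M)$ (recall that $\delta \in (-1, 0)$, so $1 + \delta > 0$).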
\begin{remark}
Proposition \ref{P:f0XBound} is proved in greater generality in \cite{BahouriCheminDanchin2011}. The authors assume, for example, that $f$ belongs to the appropriate negative \Holder space. Here, we apply \cref{P:f0XBound} with $f=\frac{\overline{\omega}_0}{\zeta}\circ X_1^{-1}$, which clearly belongs to $C([0,T];L^{\infty})$ and therefore to all negative \Holder spaces. In addition, the authors integrate a quantity in (\ref{2}) which differs from $\| u \|_{LL}$ but is bounded above and below by $C\| u \|_{LL}$ for a constant $C>0$. \end{remark}
\Ignore{ \begin{prop}\label{P:f0XBound}
Fix $\delta \in (0, 1)$.
Let $X$ be the flow map associated to the velocity field
$u \in L^\ensuremath{\infty}(0, T; S_1)$.
Then
\begin{align*}
\norm{f \circ X^{-1}(t)}_{C^{-\ensuremath{\alpha}}}
\le \frac{C(T)}{\delta (1 - \ensuremath{\alpha})} \norm{f}_{C^{-\delta}}
\end{align*}
for all $t \in [0, T^*]$, where $\ensuremath{\alpha}$ satisfies
$1>\ensuremath{\alpha} > \delta e^{C_0t} + 2 (e^{C_0t} - 1)$
and $T^*$ is defined in \cref{e:TStar}---using
$C_0 = \norm{u}_{L^\ensuremath{\infty}(0, T; S_1)}$. \end{prop} \begin{proof} Writing $Y = X^{-1}(t)$, we have \begin{align*}
&\| f \circ X^{-1}(t) \|_{C^{-\ensuremath{\alpha}}}
= \sup_{j\geq -1} 2^{-j\ensuremath{\alpha}} \| \Delta_j (f \circ Y) \|_{L^{\infty}}
= \sup_{j\geq -1} 2^{-j\ensuremath{\alpha}} \mednorm{ \Delta_j
\smallpr{\smallpr{\sum_{l\geq -1} \Delta_l f}
\circ Y}}_{L^{\infty}} \\
&\qquad
\leq \sup_{j\geq -1} 2^{-j\ensuremath{\alpha}}\sum_{l\geq -1}
\| \Delta_j ((\Delta_l f) \circ Y) \|_{L^{\infty}} \\
&\qquad
= \sup_{j\geq -1} 2^{-j\ensuremath{\alpha}} \left( \sum_{l\leq j} + \sum_{l> j}\right)
\| \Delta_j ((\Delta_l f) \circ Y) \|_{L^{\infty}}
=: I + II. \end{align*} We will first show that $I \leq C \norm{f}_{C^{-\delta}}$. Note that if $j=-1$, then \begin{align}\label{Ilowfreq}
2^{-j \ensuremath{\alpha}}
\sum_{l\leq j}\| \Delta_j ((\Delta_l f) \circ Y) \|_{L^{\infty}}
&= 2^\ensuremath{\alpha} \| \Delta_{-1} ((\Delta_{-1} f) \circ Y) \|_{L^{\infty}}
\le C \norm{(\Delta_{-1} f) \circ Y}_{L^\ensuremath{\infty}} \\
&= \|\Delta_{-1} f \|_{L^{\infty}}
\leq C \norm{f}_{C^{-\delta}}. \end{align}
Now assume that $j \geq 0$.
Then $
\int_{\ensuremath{\BB{R}}^2} \varphi (2^j(x - y)) \, dy = 0 $, because $\widehat{\varphi}(0) = 0$. It follows that \begin{align*}
&\abs{\Delta_j ((\Delta_l f) \circ Y)(x)}
= \abs{2^{2j} \int_{\ensuremath{\BB{R}}^2} \varphi (2^j(x - y))
(\Delta_l f)(Y(y)) \, dy}\\
&\qquad
= \abs{2^{2j} \int_{\ensuremath{\BB{R}}^2} \varphi (2^j(x - y))
((\Delta_l f)(Y(y)) - (\Delta_l f)(Y(x))) \, dy} \\
&\qquad
\leq 2^{2j} \int_{\ensuremath{\BB{R}}^2} \abs{\varphi (2^j(x - y))}
\| \Delta_l\nabla f\|_{L^{\infty}}
\abs{Y(y) - Y(x)} \, dy \\
&\qquad
\leq 2^{2j} 2^l \int_{\ensuremath{\BB{R}}^2} \abs{\varphi (2^j(x - y))}
\norm{\Delta_l f}_{L^{\ensuremath{\infty}}} \chi_t(\abs{x - y})
\, dy. \end{align*} Here, we applied Bernstein's Lemma and \cref{L:FlowUpperSpatialMOC} to get the second inequality.
We now make the change of variables, $z = 2^j y$, which leads to \begin{align*}
&\norm{\Delta_j ((\Delta_l f) \circ Y)}_{L^\ensuremath{\infty}}
\le \norm{\Delta_l f}_{L^{\ensuremath{\infty}}} 2^l
\sup_{x \in \ensuremath{\BB{R}}^2} \int_{\ensuremath{\BB{R}}^2} \abs{\varphi (2^j x - z)}
\chi_t(\abs{x - 2^{-j} z}) \, dz
\\
&\qquad
= \norm{\Delta_l f}_{L^{\ensuremath{\infty}}} 2^l
\sup_{x' \in \ensuremath{\BB{R}}^2} \int_{\ensuremath{\BB{R}}^2} \abs{\varphi (x' - z)}
\chi_t(\abs{2^{-j} x' - 2^{-j} z}) \, dz
\\ &\qquad
\leq 2^l 2^{-je^{-C_0t}}\norm{\Delta_l f}_{L^{\ensuremath{\infty}}}
\sup_{x' \in \ensuremath{\BB{R}}^2} \int_{\ensuremath{\BB{R}}^2} \abs{\varphi (x' - z)}
\chi_t(\abs{x' - z}) \, dz \\
&\qquad
= 2^l 2^{-je^{-C_0t}} \norm{\Delta_l f}_{L^{\ensuremath{\infty}}}
\int_{\ensuremath{\BB{R}}^2} \abs{\varphi(y)}
\chi_t(\abs{y}) \, dy
\leq C 2^l 2^{-je^{-C_0t}}\norm{\Delta_l f}_{L^{\ensuremath{\infty}}}. \end{align*} We used \cref{e:chitBound} and also that $\varphi$ is a Schwartz function. We conclude that \begin{align}\label{e:KeyBoundForI}
\begin{split}
2^{-j\ensuremath{\alpha}} \abs{\Delta_j ((\Delta_l f) \circ Y)(x)}
&\le C 2^{-j (e^{-C_0t} + \ensuremath{\alpha})} 2^l
\norm{\Delta_l f}_{L^{\ensuremath{\infty}}} \\
&\le C 2^{-j (e^{-C_0t} + \ensuremath{\alpha})} 2^{l(1 + \delta)}
\norm{f}_{C_{-\delta}}.
\end{split} \end{align}
It follows from \cref{e:KeyBoundForI} and (\ref{Ilowfreq}) that \begin{align*}
&I
\le
C \norm{f}_{C^{-\delta}}\left( 1+
\sup_{j \ge 0} \sum_{l \le j}
2^{-j(e^{-C_0 t} + \ensuremath{\alpha})} 2^{l (1 + \delta)} \right)
\le
C \norm{f}_{C^{-\delta}}\left( 1+
\sup_{j \ge 0}
2^{-j(e^{-C_0 t} + \ensuremath{\alpha}) + j (1 + \delta)} \right)\\
&\qquad =C \norm{f}_{C^{-\delta}}\left( 1+
\sup_{j \ge 0}
2^{-j(e^{-C_0 t} + \ensuremath{\alpha} - 1 - \delta)} \right)\leq C \norm{f}_{C^{-\delta}}. \end{align*} To get the last inequality above, we used that, since $\ensuremath{\alpha} > \delta e^{C_0t} + 2 (e^{C_0t} - 1)$, \begin{equation*} \begin{split} \ensuremath{\alpha} > \delta e^{C_0t} + (e^{C_0t} - 1) = e^{C_0t} ( \delta + 1 - e^{-C_0t}) \geq \delta + 1 - e^{-C_0t} \end{split} \end{equation*} for all $t\geq 0$. We conclude that \begin{align*}
I
\le C
\norm{f}_{C^{-\delta}}. \end{align*}
The inequality in \cref{e:KeyBoundForI} was useful for bounding the low frequencies but we need to establish a different bound for the high frequencies. We start by fixing $\sigma\in(0,1)$ and defining \begin{equation*} \Psi_x(y,z) = \frac{\abs{ \varphi ( x - y ) - \varphi ( x - z ) }}{\abs{y - z}^{2 + \sigma}}. \end{equation*} We will again consider the cases $j=-1$ and $j\geq 0$ separately. We first assume $j\geq 0$. We then have, \begin{align}\label{e:wBoundForII}
\begin{split}
&\Delta_j ((\Delta_l f) \circ Y) (x)
= 2^{2j} \int_{\ensuremath{\BB{R}}^2} \varphi ( 2^j (x - y) )
\Delta_l f (Y(y)) \, dy \\
&\qquad= \sum_{\abs{k - l} \le 3}
2^{2j} \int_{\ensuremath{\BB{R}}^2} \varphi ( 2^j (x - y) )
\Delta_l \Delta_k f (Y(y)) \, dy.
\end{split} \end{align} Letting $g_k = \Delta_k f$, we bound the expression in the sum above. We have, for fixed $j\geq 0$ and $l>j$, \begin{align*} \begin{split} &\abs{2^{2j} \int_{\ensuremath{\BB{R}}^2} \varphi ( 2^j (x - y) ) \Delta_l g_k (Y(y)) \, dy}
= 2^{2j} \abs{\int_{\ensuremath{\BB{R}}^2} \varphi ( 2^j (x-X_t(y)) ) \Delta_l g_k (y) \, dy} \\ & = 2^{2j} \abs{\int_{\ensuremath{\BB{R}}^2}\int_{\ensuremath{\BB{R}}^2} \varphi ( 2^j (x-X_t(y)) ) 2^{2l}\varphi (2^l(y-z))g_k (z) \, dz \, dy} \\ & = 2^{2j} \abs{\int_{\ensuremath{\BB{R}}^2}\int_{\ensuremath{\BB{R}}^2} (\varphi (2^j( x - X_t(y) )) - \varphi ( 2^j (x-X_t(z)) ) ) 2^{2l}\varphi (2^l(y-z))g_k (z) \, dz \, dy} \\
& \leq 2^{2j} 2^{j(2 + \sigma)}\int_{\ensuremath{\BB{R}}^2} \int_{\ensuremath{\BB{R}}^2} \Psi_{2^jx}(2^jX_t(y),2^jX_t(z)) \abs{X_t(y)-X_t(z)}^{2 + \sigma} \abs{2^{2l}\varphi (2^l(y-z))g_k (z)}\, dz \, dy\\ &= C 2^{2(j-l)}2^{j(2 + \sigma)} \int_{\ensuremath{\BB{R}}^2} \int_{\ensuremath{\BB{R}}^2} \Psi_{2^jx}(2^jX_t(2^{-l}y),2^jX_t(2^{-l}z)) \abs{X_t(2^{-l}y)-X_t(2^{-l}z)}^{2 + \sigma} \\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad \abs{\varphi (y-z)g_k (2^{-l}z)} \, dz \, dy. \\ \end{split} \end{align*} We used here that $X_t := X(t, \cdot)$ is the measure-preserving inverse of $Y$, and also used that \begin{equation}\label{e:lgeq0} \begin{split} &\int_{\ensuremath{\BB{R}}^2} \varphi ( 2^j (x-X_t(z)) )2^{2l}\varphi (2^l(y-z))g_k (z) \, dy\\ &\qquad = \varphi ( 2^j (x-X_t(z)) )g_k (z) \int_{\ensuremath{\BB{R}}^2} 2^{2l}\varphi (2^l(y-z)) \, dy = 0, \end{split} \end{equation} keeping in mind that $l>0$.
Now, by \cref{L:FlowUpperSpatialMOC,e:chitBound}, \begin{align*}
\abs{X_t(2^{-l} y)-X_t(2^{-l} z)}^{2 + \sigma}
\le C \chi_t(2^{-l} \abs{y - z})^{2 + \sigma}
\le C 2^{-l(2 + \sigma) e^{-C_0 t}} \chi_t(\abs{y - z})^{2 + \sigma}. \end{align*} Hence, \begin{align*}
&\int_{\ensuremath{\BB{R}}^2} \int_{\ensuremath{\BB{R}}^2}
\Psi_{2^j x}(2^j X_t(2^{-l} y), 2^j X_t(2^{-l} z))
\abs{X_t(2^{-l} y)-X_t(2^{-l} z)}^{2 + \sigma}
\medabs{\varphi (y - z) g_k (2^{-l} z)}
\, dz \, dy \\
&\qquad
\le C
2^{-l({2 + \sigma})e^{-C_0t}} \norm{g_k}_{L^\ensuremath{\infty}}
\int_{\ensuremath{\BB{R}}^2} \int_{\ensuremath{\BB{R}}^2}
\Psi_{2^jx}(2^jX_t(2^{-l}y),2^jX_t(2^{-l}z))
\abs{\varphi(y-z)}
\chi_t(\abs{y - z})^{2 + \sigma}
\, dz \, dy \\
&\qquad
\le C 2^{-l({2 + \sigma})e^{-C_0t}}
\norm{g_k}_{L^\ensuremath{\infty}}
\int_{\ensuremath{\BB{R}}^2} \int_{\ensuremath{\BB{R}}^2}
\Psi_{2^jx}(2^jX_t(2^{-l}y),2^jX_t(2^{-l}z))
\, dz \, dy \\
&\qquad
\le C 2^{4l -l({2 + \sigma})e^{-C_0t}}
\norm{g_k}_{L^\ensuremath{\infty}}
\int_{\ensuremath{\BB{R}}^2} \int_{\ensuremath{\BB{R}}^2}
\Psi_{2^jx}(2^jX_t(y),2^jX_t(z))
\, dz \, dy. \end{align*} We used that $\varphi$ is a Schwartz function so $\abs{\varphi(y-z)} \chi_t(\abs{y - z})^{2 + \sigma} \le C$. Hence, \begin{align*}
&\abs{2^{2j}
\int_{\ensuremath{\BB{R}}^2}
\varphi (2^j (x - y) ) \Delta_l g_k (Y(y)) \, dy} \\
&\qquad
\leq C 2^{2(j + l)}2^{j(2 + \sigma)} 2^{-l({2 + \sigma}) e^{-C_0t}}
\norm{g_k}_{L^\ensuremath{\infty}}
\int_{\ensuremath{\BB{R}}^2} \int_{\ensuremath{\BB{R}}^2}
\Psi_{2^jx}(2^jX_t(y),2^jX_t(z))
\, dz \, dy. \end{align*} To estimate this last integral, we change variables again (using $X_t$ measure-preserving) to get \begin{equation*} \begin{split} &\int_{\ensuremath{\BB{R}}^2} \int_{\ensuremath{\BB{R}}^2} \Psi_{2^jx}(2^jX_t(y),2^jX_t(z)) \, dz \, dy = \int_{\ensuremath{\BB{R}}^2} \int_{\ensuremath{\BB{R}}^2} \Psi_{2^jx}(2^jy,2^jz) \, dz \, dy\\ &\qquad = 2^{-4j} \int_{\ensuremath{\BB{R}}^2} \int_{\ensuremath{\BB{R}}^2} \frac{\abs{ \varphi (y) - \varphi ( z ) }}{\abs{y - z}^{2 + \sigma}} \, dz \, dy \le C_\sigma2^{-4j}, \end{split} \end{equation*} by \cref{L:DiffvarphiBound}, where $C_\sigma := C(\sigma(1 - \sigma))^{-1}$. Hence, recalling that $g_k = \Delta_k f$, \begin{align*}
&\abs{2^{2j}
\int_{\ensuremath{\BB{R}}^2}
\varphi (2^j (x - y) ) \Delta_l g_k (Y(y)) \, dy}
\le C_\sigma 2^{2(l - j)}2^{j(2 + \sigma)} 2^{-l({2 + \sigma}) e^{-C_0t}}
\norm{\Delta_k f}_{L^\ensuremath{\infty}} \\
&\qquad
= C_\sigma 2^{\sigma j + (2 - (2 + \sigma) e^{-C_0 t}) l}
\norm{\Delta_k f}_{L^\ensuremath{\infty}}, \end{align*} since $2(l - j) + j(2 + \sigma) - l({2 + \sigma}) e^{-C_0t} = 2l + \sigma j - 2l e^{-C_0t} - l \sigma e^{-C_0t} = \sigma j + (2 - (2 + \sigma) e^{-C_0 t}) l$.
It follows from \cref{e:wBoundForII} that for $j\geq 0$, \begin{equation*} \begin{split}
&\abs{ \Delta_j ((\Delta_l f) \circ Y) (x)}
\le C_\sigma 2^{\sigma j + (2 - (2 + \sigma) e^{-C_0 t}) l}
\sum_{\abs{k - l} \le 3} \norm{\Delta_k f}_{L^\ensuremath{\infty}}\\
&\qquad \le C_\sigma 2^{\sigma j + (2 - (2 + \sigma) e^{-C_0 t}) l}2^{\delta l}\| f \|_{C^{-\delta}},
\end{split} \end{equation*} where we used the series of estimates \begin{align*}
\sum_{\abs{k - l} \le 3} \norm{\Delta_k f}_{L^\ensuremath{\infty}}
\le \sum_{\abs{k - l} \le 3} 2^{-\delta k}2^{\delta k}\norm{\Delta_k f}_{L^\ensuremath{\infty}} \leq C2^{\delta l}\| f \|_{C^{-\delta}}. \end{align*} For $j=-1$, we apply an argument identical to that above, except that the operator $\Delta_{j}$ now represents convolution with $2^{-2}\chi(2^{-1}\cdot)$, rather than $2^{2j}\varphi(2^{j}\cdot)$ when $j\geq 0$. Note that, when $j=-1$, $l>j = -1$, so that \cref{e:lgeq0} still holds.
Taking the sum over $l>j$ and the supremum over all $j\geq -1$ gives \begin{align*}
II
&\le C_\sigma \sup_{j\ge -1} \sum_{l > j}
2^{-\ensuremath{\alpha} j}
2^{\sigma j + (2 - (2 + \sigma) e^{-C_0 t}) l}
2^{\delta l} \norm{f}_{C^{-\delta}} \\
&=
C_\sigma \norm{f}_{C^{-\delta}}
\sup_{j \ge -1} \sum_{l > j}
2^{-(\ensuremath{\alpha} - \sigma) j
+ (2 + \delta - (2 + \sigma) e^{-C_0 t}) l}. \end{align*} Choosing $\sigma = \ensuremath{\alpha}$ and noting that $2 + \delta - (2 + \ensuremath{\alpha}) e^{-C_0 t} < 0$ by assumption, we conclude that \begin{equation*} II \leq C_\sigma \norm{f}_{C^{-\delta}}, \end{equation*} which we note holds up to $t = T^*$.
Combining the bounds on $I$ and $II$, and using that $C_\sigma \le C (\delta(1 - \ensuremath{\alpha}))^{-1}$ since $\ensuremath{\alpha} \ge \delta$, gives the result. \end{proof}
}
\Ignore{ \cref{L:DiffvarphiBound} gives the bound on the function $\Psi_x$ that we used in the proof of \cref{P:f0XBound}. It is classical that on the right-hand side of this bound we can use $C(\sigma) \norm{\ensuremath{\nabla} \varphi}_{L^1}$. We need, however, explicit control on how the constant may blow up with $\sigma$. Fortunately, it is easy to do this explicitly for Schwartz-class functions.
\begin{lemma}\label{L:DiffvarphiBound}
For any Schwartz-class function, $\phi$, there exists
a constant $C > 0$ such that
for all $\sigma \in (0, 1)$,
\begin{equation*}
\int_{\ensuremath{\BB{R}}^2} \int_{\ensuremath{\BB{R}}^2}
\frac{\abs{ \phi(y) - \phi(z)}}
{\abs{y - z}^{2 + \sigma}} \, dz \, dy
\le \frac{C}{\sigma(1 - \sigma)}.
\end{equation*} \end{lemma} \begin{proof}
If $\abs{z - y} \le 1$ then
$\abs{\phi(y) - \phi(z)} \le \norm{\ensuremath{\nabla} \phi}_{L^\ensuremath{\infty}(B_1(y))}\abs{y-z}$.
Hence,
\begin{align*}
\int_{\ensuremath{\BB{R}}^2} &\int_{\ensuremath{\BB{R}}^2}
\frac{\abs{ \phi(y) - \phi(z)}}
{\abs{z - y}^{2 + \sigma}} \, dz \, dy \\
&\le
\int_{\ensuremath{\BB{R}}^2} \int_{\abs{z - y} \le 1}
\frac{\norm{\ensuremath{\nabla} \phi}_{L^\ensuremath{\infty}(B_1(y))}}
{\abs{z - y}^{1 + \sigma}} \, dz \, dy
+ \int_{\ensuremath{\BB{R}}^2} \int_{\abs{z - y} > 1}
\frac{\abs{\phi(y)} + \abs{\phi(z)}}
{\abs{z - y}^{2 + \sigma}} \, dz \, dy
=: I + II.
\end{align*}
But,
\begin{align*}
I
&= \frac{2 \pi}{1 - \sigma}
\int_{\ensuremath{\BB{R}}^2} \norm{\ensuremath{\nabla} \phi}_{L^\ensuremath{\infty}(B_1(y))} \, dy
= \frac{C}{1 - \sigma},
\end{align*}
since $\phi$ is Schwartz-class, so $\norm{\ensuremath{\nabla} \phi}_{L^\ensuremath{\infty}(B_1(y))}$
decays in $y$ faster than any rational function, and
\begin{align*}
II
\le \int_{\ensuremath{\BB{R}}^2} \abs{\phi(y)} \int_{\abs{z - y} > 1}
\frac{1}{\abs{z - y}^{2 + \sigma}} \, dz \, dy
+ \int_{\ensuremath{\BB{R}}^2} \abs{\phi(z)} \int_{\abs{y - z} > 1}
\frac{1}{\abs{z - y}^{2 + \sigma}} \, dy \, dz
\le \frac{4 \pi}{\sigma} \norm{\phi}_{L^1}.
\end{align*} \end{proof} }
\begin{lemma}\label{L:HolderInterpolation}
For any $u \in S_1(\ensuremath{\BB{R}}^2)$ and $r \in (0, 1)$,
\begin{align*}
\norm{u}_{C^r}
\le
C \pr{\norm{u}_{L^\ensuremath{\infty}}
+
\norm{u}_{L^\ensuremath{\infty}}^{1 - r}
\norm{u}_{S_1}^{r}}.
\end{align*} \end{lemma} \begin{proof} We apply \cref{D:LPHolder} and write \begin{align*}
\norm{u}_{C^r}
&= \sup_{j\geq -1} 2^{jr} \norm{\Delta_j u}_{L^\ensuremath{\infty}} \\
&\le C\norm{u}_{L^\ensuremath{\infty}} + \sup_{j\geq 0}
2^{jr} \norm{\Delta_j u}^{1-r}_{L^\ensuremath{\infty}}
\norm{\Delta_j u}^{r}_{L^\ensuremath{\infty}}\\
&\le C\norm{u}_{L^\ensuremath{\infty}}
+ C \norm{ u}^{1-r}_{L^\ensuremath{\infty}} \sup_{j\geq 0}
2^{jr} 2^{-jr}\norm{\Delta_j \ensuremath{\nabla} u}^{r}_{L^\ensuremath{\infty}}, \end{align*} where we used Bernstein's Lemma to get the second inequality. But, using Lemma 4.2 of \cite{CK2006}, \begin{align*}
\norm{\Delta_j \ensuremath{\nabla} u}_{L^\ensuremath{\infty}}
\le C \norm{\Delta_j \curl u}_{L^\ensuremath{\infty}}
\le C \norm{\curl u}_{L^\ensuremath{\infty}}
\le C \norm{u}_{S_1}, \end{align*} which yields the result. \end{proof}
\Ignore{ \begin{lemma}\label{L:DiffSobolevBound}
Let $\sigma \in (0, 1)$. There exists a constant, $C(\sigma) > 0$,
such that for all $\varphi \in W^{\sigma, 1}$,
\begin{equation}\label{e:DiffSobolevBound}
\int_{\ensuremath{\BB{R}}^2} \int_{\ensuremath{\BB{R}}^2}
\frac{\abs{ \varphi(y) - \varphi(z)}}
{\abs{y - z}^{2 + \sigma}} \, dz \, dy
\le C(\sigma)
\norm{\varphi}_{W^{\sigma,1}}.
\end{equation}
Moreover, $\sigma \mapsto C(\sigma)$ is continuous,
though $C(\sigma) \to \ensuremath{\infty}$ as $\sigma \to 1$. \end{lemma} \begin{proof}
That \cref{e:DiffSobolevBound} holds for a continuous
$C(\sigma)$ is classical, the integral giving a characterization of the
$W^{\sigma, 1}$ semi-norm.
\ToDo{Elaine: I think we should add a comment about how
the blow up as $\sigma \to 1$ is easy to see by assuming
that $\varphi$ is smooth. But here is my argument:}
To show that $C(\sigma)$ must $\to \ensuremath{\infty}$ as $\sigma \to 1$,
assume that $\varphi$ is smooth.
Then there exists an open ball $B_R(z)$
of $\ensuremath{\BB{R}}^2$ such that
\begin{align*}
\sup_{y,z \in B_R(z)}
\frac{\abs{\varphi (y) - \varphi(z)}}{\abs{y - z}}
> a
\end{align*}
for some $a > 0$. Then
\begin{align*}
\int_{\ensuremath{\BB{R}}^2} \int_{\ensuremath{\BB{R}}^2}
\frac{\abs{\varphi (y) - \varphi(z)}}
{\abs{y - z}^{2 + \sigma}} \, dz \, dy
&\ge a \int_{B_R(z)}
\frac{1}
{\abs{y - z}^{1 + \sigma}} \, dz \, dy
= \frac{a R^{1 - \sigma}}{1 - \sigma}.
\end{align*}
Hence, $C(\sigma)$ cannot remain finite as $\sigma \to 1$.
\ToDo{This doesn't show that $C(\sigma) = C/(1 - \sigma)$, though.} \end{proof} }
\appendix
\section{Examples of growth bounds}\label{S:Corollary}
\noindent \begin{proof}[Proof of \cref{C:MainResult}] We consider $h(r) = h_1(r) = (1 + r)^\ensuremath{\alpha}$ for some $\ensuremath{\alpha} \in [0, 1/2)$. Clearly, $h$ and so $h^2$ are increasing and both $h$ and $h^2$ are concave and infinitely differentiable on $(0, \ensuremath{\infty})$, and $h'(0) = \ensuremath{\alpha} < \ensuremath{\infty}$. This gives $(i)$ of \cref{D:GrowthBound} for $h$ and $h^2$. Then for $n = 1, 2$, \begin{align*}
\int_1^\ensuremath{\infty} \frac{h^n(s)}{s^2} \, ds
&= \int_1^\ensuremath{\infty} \frac{(1 + s)^{n \ensuremath{\alpha}}}{s^2} \,ds
\le 2^{n \ensuremath{\alpha}} \int_1^\ensuremath{\infty} s^{n \ensuremath{\alpha} - 2} \,ds
= \frac{2^{n \ensuremath{\alpha}}}{1 - n \ensuremath{\alpha}}
< \ensuremath{\infty}, \end{align*} giving $(ii)$ of \cref{D:GrowthBound} for $h$ and $h^2$. It follows that $h$ is a well-posedness growth bound\xspace. (An additional calculation, which we suppress since we do not strictly need it, shows that $E(r) \le \mu(r) := C(\ensuremath{\alpha}) (1 + r^\ensuremath{\alpha}) r$, improving the coarse bound of \cref{L:EProp}.) \Ignore{ We now treat $(iv)$ of \cref{D:GrowthBound}. For $r \ge 1$, arguing as above, we have \begin{align*}
H(r)
&= H[h](r)
\le 2^\ensuremath{\alpha} \int_r^\ensuremath{\infty} s^{\ensuremath{\alpha} - 2} \,ds
= \frac{2^\ensuremath{\alpha}}{1 - \ensuremath{\alpha}} r^{\ensuremath{\alpha} - 1}
= C_\ensuremath{\alpha} r^{\ensuremath{\alpha} - 1}, \end{align*} where $C_\ensuremath{\alpha} := 2^\ensuremath{\alpha} (1 - \ensuremath{\alpha})^{-1}$. Thus, for $r \ge 1$, \begin{align*}
E(r)
&\le 2 (1 + r H(r^{\frac{1}{2}})^2) r
\le 2 (1 + C_\ensuremath{\alpha}^2 r^\ensuremath{\alpha}) r
\le C (1 + r^\ensuremath{\alpha}) r, \end{align*} where in the first inequality we used $(A + B)^2 \le 2(A^2 + B^2)$.
For $r < 1$, on the other hand, we have \begin{align*}
H(r)
&= H(1) + \int_r^1 \frac{(1 + s)^\ensuremath{\alpha}}{s^2} \,ds
\le C_\ensuremath{\alpha} + 2^\ensuremath{\alpha} \int_r^1 \frac{ds}{s^2}
= \frac{2^\ensuremath{\alpha}}{1 - \ensuremath{\alpha}} + \frac{2^\ensuremath{\alpha}}{r} - 2^\ensuremath{\alpha} \\
&\le \frac{\ensuremath{\alpha} 2^\ensuremath{\alpha}}{1 - \ensuremath{\alpha}} + \frac{2^\ensuremath{\alpha}}{r}
\le C \pr{1 + \frac{1}{r}}. \end{align*} So for $r < 1$, \begin{align*}
E(r)
&\le 2(1 + r H(r^{\frac{1}{2}})^2) r
\le 2 \pr{1 + C r \pr{1 + \frac{1}{r^{\frac{1}{2}}} }^2} r
\le 2 \pr{1 + C r \pr{1 + \frac{1}{r}}} r \\
&\le C (1 + r) r
\le C(1 + r^\ensuremath{\alpha}) r, \end{align*} since $r^\ensuremath{\alpha} > r$ for $r \in (0, 1)$. Therefore, we can set, for all $r \ge 0$, \begin{align}\label{e:omu1Bound}
\mu(r)
:= C_0(1 + r^\ensuremath{\alpha}) r \end{align} for a constant $C_0 = C(\ensuremath{\alpha})$ and have $E \le \mu$. Such a $\mu$ clearly satisfies $(iv)$ of \cref{D:GrowthBound}. }
Now assume that $h(r) = h_2(r) = \log^{\frac{1}{4}}(e + r)$, so that $h^2(r) = \log^{\frac{1}{2}}(e + r)$. Then $h^2$ is infinitely differentiable on $(0, \ensuremath{\infty})$, increasing, and concave, and hence so also is $h = \sqrt{h^2}$, being a composition of increasing concave functions infinitely differentiable on $(0, \ensuremath{\infty})$. Also, \begin{align*}
(h^2)'(0)
&= \frac{1}{2} (x + e)^{-1} \log^{-\frac{1}{2}}(x + e)|_{x = 0}
= \frac{1}{2e} < \ensuremath{\infty}. \end{align*}
This gives $(i)$ of \cref{D:GrowthBound}.
Noting that $h_2(r) \le h_1(r) = (1 + r)^\ensuremath{\alpha}$ for any $\ensuremath{\alpha} \in [1/4, 1/2)$ (since $\log(e + r) \le 1 + r$), $(ii)$ and $(iii)$ of \cref{D:GrowthBound} follow for $h$ and $h^2$ from our result for $h_1$. Hence, $h = h_2$ is a well-posedness growth bound\xspace.
To obtain \cref{e:omuOsgoodAtInfinity}, we make the change of variables, $w = 1/s$, giving \begin{align*}
H(r)
= \int_0^{\frac{1}{r}} h^2(1/w) \, dw
= \int_0^{\frac{1}{r}} \log^{\frac{1}{2}} \pr{e + \frac{1}{w}} \, dw. \end{align*}
Now, if $w \le 1/e$ then \begin{align*}
\log^{\frac{1}{2}} \pr{e + \frac{1}{w}}
\le \log^{\frac{1}{2}} \pr{\frac{2}{w}} \end{align*} so for $r \ge e$, \begin{align*}
H(r)
&\le \int_0^{\frac{1}{r}} \log^{\frac{1}{2}} \pr{\frac{2}{w}} \, dw
= \lim_{a \to 0^+}
\brac{w \sqrt{\log \pr{\frac{2}{w}}}
- \sqrt{\pi} \erf{} \sqrt{\log \pr{\frac{2}{w}}}}_a^{\frac{1}{r}} \\
&= \frac{1}{r} \sqrt{\log \pr{2 r}}
- \sqrt{\pi} \erf \sqrt{\log \pr{2 r}} + \sqrt{\pi}
= \frac{1}{r} \sqrt{\log \pr{2 r}}
+ \sqrt{\pi} \erfc \sqrt{\log \pr{2 r}}, \end{align*} where \begin{align*}
\erf(r)
:= \frac{2}{\sqrt{\pi}} \int_0^r e^{-s^2} \, ds, \qquad
\erfc(r) = 1 - \erf(r). \end{align*} From the well-known inequality, $
\erfc(r) \le e^{-r^2}, $ it follows that for $r \ge e$, \begin{align*}
H(r)
\le \frac{1}{r} \sqrt{\log \pr{2 r}}
+ \frac{\sqrt{\pi}}{2r}. \end{align*}
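The antiderivative used above is readily checked by differentiation: for $0 < w < 2$, so that $\log(2/w) > 0$, \begin{align*}
\frac{d}{dw} \brac{w \sqrt{\log \pr{\frac{2}{w}}}}
&= \sqrt{\log \pr{\frac{2}{w}}}
- \frac{1}{2 \sqrt{\log \pr{\frac{2}{w}}}}, \\
\frac{d}{dw} \brac{- \sqrt{\pi} \erf \sqrt{\log \pr{\frac{2}{w}}}}
&= \sqrt{\pi} \cdot \frac{2}{\sqrt{\pi}} \,
e^{-\log \pr{\frac{2}{w}}}
\cdot \frac{1}{2 w \sqrt{\log \pr{\frac{2}{w}}}}
= \frac{w/2}{w \sqrt{\log \pr{\frac{2}{w}}}}
= \frac{1}{2 \sqrt{\log \pr{\frac{2}{w}}}}, \end{align*} so the two terms together have derivative $\log^{\frac{1}{2}} \pr{2/w}$.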
If $w > 1/e$ then \begin{align*}
\log^{\frac{1}{2}} \pr{e + \frac{1}{w}}
\le \log^{\frac{1}{2}} \pr{2 e} \end{align*} so that for $r < e$, \begin{align*}
H(r)
&= \int_0^{\frac{1}{e}} h^2(1/w) \, dw
+ \int_{\frac{1}{e}}^{\frac{1}{r}} h^2(1/w) \, dw
\le H(e)
+ \int_{\frac{1}{e}}^{\frac{1}{r}} \log^{\frac{1}{2}}
\pr{2 e} \, dw \\
&\le \frac{1}{e} \sqrt{\log \pr{2 e}}
+ \frac{\sqrt{\pi}}{2 e}
+ \log^{\frac{1}{2}} \pr{2 e} \frac{e - r}{e r}
= \frac{\sqrt{\pi}}{2 e}
+ \frac{\log^{\frac{1}{2}} \pr{2 e}}{r}
\le 2 \frac{\log^{\frac{1}{2}} \pr{2 e}}{r}.
\end{align*} In the final inequality, we used that \begin{align*}
\frac{\sqrt{\pi}}{2 e}
< \frac{\log^{\frac{1}{2}} \pr{2 e}}{e}
< \frac{\log^{\frac{1}{2}} \pr{2 e}}{r}, \end{align*} which holds because $\sqrt{\pi}/2 < 1 < \log^{\frac{1}{2}} \pr{2 e}$ and $r < e$.
We see, then, that an inequality that works for all $r > 0$ is \begin{align*}
H(r)
\le 2 \frac{\sqrt{\log \pr{2 e + 2 r}}}{r}. \end{align*}
This bound on $H$ gives \begin{align*}
E(r)
&\le 2(1 + r H(r^{\frac{1}{2}})^2) r
\le 2(1 + 4 \log(2 e + 2 r^{\frac{1}{2}})) r \\
&\le C \pr{1 + \log (e + r)} r
=: \mu(r). \end{align*}
Then $\mu(0) = 0$, $\mu$ is continuous, and $\mu$ is convex, since $\mu''(r) = C (r + 2e)/(e + r)^2 > 0$.
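Indeed, a direct computation gives \begin{align*}
\mu'(r)
= C \pr{1 + \log(e + r) + \frac{r}{e + r}}, \qquad
\mu''(r)
= C \pr{\frac{1}{e + r} + \frac{e}{(e + r)^2}}
= C \, \frac{r + 2e}{(e + r)^2}. \end{align*}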
Finally, \begin{align*}
\int_1^\ensuremath{\infty} &\frac{dr}{\pr{1 + \log (e + r)} r}
\ge \frac{1}{2} \int_1^\ensuremath{\infty} \frac{dr}{\pr{\log (e + r)} r}
\ge \frac{1}{2} \int_{e + 1}^\ensuremath{\infty} \frac{dr}{r \log r} \\
&= \frac{1}{2} \int_{\log(e + 1)}^\ensuremath{\infty} \frac{d x}{x}
= \ensuremath{\infty}, \end{align*} where we made the change of variables, $x = \log r$. This shows that \cref{e:omuOsgoodAtInfinity} holds. \end{proof}
\Ignore{ \begin{remark} If we define $\nu(r) = r^2 \mu(1/r)$ then condition (3) of \cref{T:Existence} becomes \begin{align*}
\int_0^1 \frac{dr}{\nu(r)} = \ensuremath{\infty}. \end{align*} That is, $\mu$ satisfies condition (3) of \cref{T:Existence} if and only if $\nu$ is Osgood. In the context of \cref{C:MainResult}, where $\mu(r) = 4 \pr{1 + \log (e + r)} r$, we have $
\nu(r)
= 4 r (1 + \log(e + r^{-1})), $ which is a log-Lipschitz modulus of continuity. \end{remark} }
\Ignore{ \begin{lemma}\label{L:ElainemuIsSubMult}
The function $\mu(r) = \pr{1 + \log (e + r)} r$ is sub-multiplicative. \end{lemma} \begin{proof} (This proof is elementary if annoying; probably don't include in the submitted version.) Let $a, b > 0$. Then
\begin{align*}
\frac{\mu(a) \mu(b)}{\mu(ab)}
&= \frac{\pr{1 + \log (e + a)} \pr{1 + \log (e + b)}}
{1 + \log (e + ab)}
\ge \frac{1 + \log (e + a) \log (e + b)}
{1 + \log (e + ab)}.
\end{align*}
Hence,
\begin{align*}
\frac{\mu(a) \mu(b)}{\mu(ab)} \ge 1
&\impliedby \log (e + a) \log (e + b) > \log (e + ab) \\
&\iff F(a, b)
:= \log \log (e + a) + \log \log (e + b)
- \log \log (e + ab) > 0.
\end{align*}
Fix $b$ and define $F_b(a) = F(a, b)$.
Now, $F_b(0) = \log \log (e + b) > 0$.
Noting that
\begin{align*}
F(a, b)
= \log \frac{\log (e + a) \log (e + b)}
{\log(e + ab)},
\end{align*}
applying L'Hospital's rule yields
\begin{align*}
\lim_{a \to \ensuremath{\infty}}
&F_b(a)
= \lim_{a \to \ensuremath{\infty}}
\log \pr{\log (e + b)\lim_{a \to \ensuremath{\infty}}
\frac{\log (e + a)}
{\log (e + ab)}} \\
&= \lim_{a \to \ensuremath{\infty}}
\log \pr{\log (e + b)\lim_{a \to \ensuremath{\infty}}
\frac{e + ab} {(e + a) b}}
= \log \log (e + b)
> 0.
\end{align*}
The extrema of $F_b$ occur at $a = a_0$ (which we do not claim to be
unique), where
\begin{align*}
0
&= F_b'(a_0)
= \frac{1}{(e + a_0) \log(e + a_0)}
- \frac{b}{(e + a_0 b) \log(e + a_0 b)}.
\end{align*}
At $a = a_0$, we have
\begin{align*}
\frac{\log(e + a_0)}{\log(e + a_0 b)}
= \frac{e + a_0 b}{eb + a_0 b}
\end{align*}
so that
\begin{align*}
F_b(a_0)
&= \log \pr{\log (e + b) \frac{e + a_0 b}{eb + a_0 b}}
= \log \log (e + b) - \log \frac{e + a_0 b}{eb + a_0 b}.
\end{align*}
Now, when $b \ge 1$,
\begin{align*}
\log \frac{e + a_0 b}{eb + a_0 b}
\le \log 1
= 0.
\end{align*}
Hence, for $b \ge 1$ we always have $F_b(a) > 0$. It follows that
$F(a, b) > 0$ for all $a > 0$, $b \ge 1$. Because $F(b, a) = F(a, b)$
it follows that $F(a, b) > 0$ for all $b > 0$, $a \ge 1$ as well.
It remains only to show that $F(a, b) > 0$ for all $0 < a, b < 1$.
But in this case, $ab < a$ so $F(a, b) > \log \log(e + b) > 0$
follows immediately from the definition of $F$.
We conclude that $\mu$ is sub-multiplicative. \end{proof} }
\end{document} | arXiv |
The Sierpiński triangle (sometimes spelled Sierpinski), also called the Sierpiński gasket or Sierpiński sieve, is a fractal attractive fixed set with the overall shape of an equilateral triangle, subdivided recursively into smaller equilateral triangles. Originally constructed as a curve, it is one of the basic examples of self-similar sets, that is, a mathematically generated pattern reproducible at any magnification. It is named after Wacław Franciszek Sierpiński (1882–1969), a Polish mathematician.

Construction by removal. Take any equilateral triangle. Divide it into 4 smaller congruent triangles and remove the central triangle. Repeat this step for the smaller remaining triangles, again and again, for ever. The removal method is based on the finite subdivision rule and conceptually is the easiest to understand and reproduce. Note that the infinite process is not dependent upon the starting shape being a triangle; the first few steps starting, for example, from a square also tend towards a Sierpinski triangle. Michael Barnsley used an image of a fish to illustrate this in his paper "V-variable fractals and superfractals." An equivalent construction shrinks and copies: reduce the triangle to 1/2 height and 1/2 width, make three copies, and place the three shrunken triangles so that each triangle touches the other two in one corner. The same pattern also emerges cell by cell as Pascal's triangle modulo 2 (the rule 90 cellular automaton); in a spreadsheet it can be generated with a formula such as =MOD(C1+B1,2) dragged across each row.

Counting the triangles. Since the number of shaded triangles is multiplied by 3 at each subdivision, there are 3^n shaded triangles after n iterations; counting the starting triangle as stage 1, the nth stage therefore contains 3^(n-1) shaded triangles, so the third stage has 9. Counting shaded and removed triangles together, stage n contains (3^n − 1)/2 triangles in total, for example 13 at the third stage. The total number of shaded triangles in the first 10 Sierpinski triangles is 29,524.

Area, perimeter and dimension. Each iteration removes one quarter of the remaining area, so the area is multiplied by 3/4 at each step and, after an infinite number of iterations, the remaining area is 0: the area of a Sierpinski triangle is zero in Lebesgue measure. The perimeter, by contrast, is multiplied by 3/2 at each iteration, P_n = P_0 × (3/2)^n, so the total perimeter length is infinite and the Sierpiński triangle has an infinite number of edges. The triangle is a union of three copies of itself, each scaled by a factor of 1/2; solving 2^d = 3 for d gives the Hausdorff dimension log(3)/log(2) ≈ 1.585, between a one-dimensional line and a two-dimensional plane. (Compare ordinary shapes at magnification factor 2: a 1-D segment gives 2 copies, a 2-D square 4 copies, and a 3-D cube 8 copies, like a 2×2 Rubik's cube.)

Other constructions and relatives. In the chaos game, label p1, p2 and p3 as the corners of the Sierpinski triangle and pick a random point v1; set vn+1 = 1/2 (vn + prn), where rn is a random number 1, 2 or 3, and draw the points v1 to vn. If the first point v1 was a point on the Sierpiński triangle, then all the points vn lie on the Sierpinski triangle. The Sierpinski arrowhead curve gives another route: start with one line segment, then replace it by three segments which meet at 120 degree angles, the first and last segments either parallel to the original segment or meeting it at 60 degree angles; iterating this replacement also tends to the Sierpinski triangle. The construction extends to three dimensions as the Sierpinski tetrahedron (sometimes called the tetrix), built by repeatedly subdividing a tetrahedron and keeping only the four half-scale corner tetrahedra, just as the middle triangles are removed in the plane. The Hanoi graphs arising from the Tower of Hanoi puzzle are a graph-theoretic analog of the Sierpinski triangle.

A Sierpinski number is a different, unrelated notion from number theory: 78557 is the smallest known Sierpinski number (OEIS sequence A076336), although the claim that it is the smallest is a conjecture that has not yet been proved; every number of the form 78557·2^n + 1 is divisible by one of the small primes in its covering set.
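The counting formulas and the chaos game are easy to check with a short program. The following Python sketch (the corner coordinates, seed and point count are arbitrary choices for illustration) reproduces the numbers above and generates chaos-game points:

    import random

    def shaded_at_stage(n):
        # Shaded triangles at stage n (stage 1 is the single starting triangle): 3^(n-1)
        return 3 ** (n - 1)

    def total_at_stage(n):
        # Shaded plus removed triangles at stage n: (3^n - 1) / 2
        return (3 ** n - 1) // 2

    def chaos_game(num_points=10000, seed=1):
        # Approximate the Sierpinski triangle by repeatedly moving halfway
        # toward a randomly chosen corner of an equilateral triangle.
        random.seed(seed)
        corners = [(0.0, 0.0), (1.0, 0.0), (0.5, 3 ** 0.5 / 2)]
        x, y = random.random(), random.random()
        points = []
        for _ in range(num_points):
            cx, cy = random.choice(corners)
            x, y = (x + cx) / 2, (y + cy) / 2
            points.append((x, y))
        return points

    print([shaded_at_stage(n) for n in range(1, 6)])       # [1, 3, 9, 27, 81]
    print(total_at_stage(3))                               # 13
    print(sum(shaded_at_stage(n) for n in range(1, 11)))   # 29524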
Back-and-forth method
In mathematical logic, especially set theory and model theory, the back-and-forth method is a method for showing isomorphism between countably infinite structures satisfying specified conditions. In particular it can be used to prove that
• any two countably infinite densely ordered sets (i.e., linearly ordered in such a way that between any two members there is another) without endpoints are isomorphic. An isomorphism between linear orders is simply a strictly increasing bijection. This result implies, for example, that there exists a strictly increasing bijection between the set of all rational numbers and the set of all real algebraic numbers.
• any two countably infinite atomless Boolean algebras are isomorphic to each other.
• any two equivalent countable atomic models of a theory are isomorphic.
• the Erdős–Rényi model of random graphs, when applied to countably infinite graphs, almost surely produces a unique graph, the Rado graph.
• any two many-one complete recursively enumerable sets are recursively isomorphic.
Application to densely ordered sets
As an example, the back-and-forth method can be used to prove Cantor's isomorphism theorem, although this was not Georg Cantor's original proof. This theorem states that two unbounded countable dense linear orders are isomorphic.[1]
Suppose that
• (A, ≤A) and (B, ≤B) are linearly ordered sets;
• They are both unbounded, in other words neither A nor B has either a maximum or a minimum;
• They are densely ordered, i.e. between any two members there is another;
• They are countably infinite.
Fix enumerations (without repetition) of the underlying sets:
A = { a1, a2, a3, ... },
B = { b1, b2, b3, ... }.
Now we construct a one-to-one correspondence between A and B that is strictly increasing. Initially no member of A is paired with any member of B.
(1) Let i be the smallest index such that ai is not yet paired with any member of B. Let j be some index such that bj is not yet paired with any member of A and ai can be paired with bj consistently with the requirement that the pairing be strictly increasing. Pair ai with bj.
(2) Let j be the smallest index such that bj is not yet paired with any member of A. Let i be some index such that ai is not yet paired with any member of B and bj can be paired with ai consistently with the requirement that the pairing be strictly increasing. Pair bj with ai.
(3) Go back to step (1).
It still has to be checked that the choice required in step (1) and (2) can actually be made in accordance to the requirements. Using step (1) as an example:
If there are already ap and aq in A corresponding to bp and bq in B respectively such that ap < ai < aq and bp < bq, we choose bj in between bp and bq using density. Otherwise, we choose a suitable large or small element of B using the fact that B has neither a maximum nor a minimum. Choices made in step (2) are dually possible. Finally, the construction ends after countably many steps because A and B are countably infinite. Note that we had to use all the prerequisites.
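The construction can also be illustrated programmatically. The following Python sketch (an illustration only; the finite enumerations and helper functions are ad hoc choices standing in for the infinite enumerations above) runs finitely many back-and-forth steps between the rational numbers and the rational numbers strictly between 0 and 1, both of which are countable, densely ordered and without endpoints:

    from fractions import Fraction

    def rationals(limit):
        # An enumeration (without repetition) of rationals p/q, small |p| first.
        seen, out = set(), []
        for q in range(1, limit + 1):
            for p in sorted(range(-limit, limit + 1), key=abs):
                x = Fraction(p, q)
                if x not in seen:
                    seen.add(x)
                    out.append(x)
        return out

    def rationals_in_unit_interval(limit):
        # An enumeration (without repetition) of rationals strictly between 0 and 1.
        seen, out = set(), []
        for q in range(2, limit + 1):
            for p in range(1, q):
                x = Fraction(p, q)
                if x not in seen:
                    seen.add(x)
                    out.append(x)
        return out

    def extend(pairs, a, codomain, used):
        # First unused element of the codomain enumeration that keeps the
        # partial pairing strictly increasing when a is added on the other side.
        lower = max((b for x, b in pairs if x < a), default=None)
        upper = min((b for x, b in pairs if x > a), default=None)
        for b in codomain:
            if b in used:
                continue
            if (lower is None or b > lower) and (upper is None or b < upper):
                return b
        raise RuntimeError("finite enumeration exhausted; enlarge the limit")

    def back_and_forth(A, B, steps):
        pairs, used_a, used_b = [], set(), set()
        for k in range(steps):
            if k % 2 == 0:  # "forth": next unmatched element of A
                a = next(x for x in A if x not in used_a)
                b = extend(pairs, a, B, used_b)
            else:           # "back": next unmatched element of B
                b = next(y for y in B if y not in used_b)
                a = extend([(y, x) for x, y in pairs], b, A, used_a)
            pairs.append((a, b))
            used_a.add(a)
            used_b.add(b)
        return sorted(pairs)

    for a, b in back_and_forth(rationals(30), rationals_in_unit_interval(60), 12):
        print(a, "->", b)

Running more steps extends the strictly increasing pairing further; in the infinite setting, alternating the two kinds of step guarantees that every element of both orders is eventually paired.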
History
According to Hodges (1993):
Back-and-forth methods are often ascribed to Cantor, Bertrand Russell and C. H. Langford [...], but there is no evidence to support any of these attributions.
While the theorem on countable densely ordered sets is due to Cantor (1895), the back-and-forth method with which it is now proved was developed by Edward Vermilye Huntington (1904) and Felix Hausdorff (1914). Later it was applied in other situations, most notably by Roland Fraïssé in model theory.
See also
• Ehrenfeucht–Fraïssé game
References
1. Silver, Charles L. (1994), "Who invented Cantor's back-and-forth argument?", Modern Logic, 4 (1): 74–78, MR 1253680
• Hausdorff, F. (1914), Grundzüge der Mengenlehre
• Hodges, Wilfrid (1993), Model theory, Cambridge University Press, ISBN 978-0-521-30442-9
• Huntington, E. V. (1904), The continuum and other types of serial order, with an introduction to Cantor's transfinite numbers, Harvard University Press
• Marker, David (2002), Model Theory: An Introduction, Graduate Texts in Mathematics, Berlin, New York: Springer-Verlag, ISBN 978-0-387-98760-6
| Wikipedia |
Parasites & Vectors
Discovery of mosquitocides from fungal extracts through a high-throughput cytotoxicity-screening approach
Liang Jin1,2,
Guodong Niu1,3,
Limei Guan2,
Julian Ramelow4,
Zhigao Zhan2,
Xi Zhou2,5 &
Jun Li ORCID: orcid.org/0000-0002-2121-3393 1,3
Parasites & Vectors volume 14, Article number: 595 (2021)
Mosquitoes transmit a variety of diseases. Due to widespread insecticide resistance, new effective pesticides are urgently needed. Entomopathogenic fungi are widely utilized to control pest insects in agriculture. We hypothesized that certain fungal metabolites may be effective insecticides against mosquitoes.
A high-throughput cytotoxicity-based screening approach was developed to search for insecticidal compounds in our newly established global fungal extract library. We first determined cell survival rates after adding various fungal extracts. Candidate insecticides were further analyzed using traditional larval and adult survival bioassays.
Twelve ethyl acetate extracts from a total of 192 fungal extracts displayed > 85% inhibition of cabbage looper ovary cell proliferation. Ten of these 12 candidates were confirmed to be toxic to the Anopheles gambiae Sua5B cell line, and six showed > 85% inhibition of Anopheles mosquito cell growth. Further bioassays determined an LC50, the lethal concentration that kills 50% of larval or adult mosquitoes, of 122 µg/mL and 1.7 µg/mosquito, respectively, after 24 h for extract 76F6 from Penicillium toxicarium.
We established a high-throughput MTT-based cytotoxicity screening approach for the discovery of new mosquitocides from fungal extracts. We discovered a candidate extract from P. toxicarium that exhibited high toxicity to mosquito larvae and adults, and thus were able to demonstrate the value of our recently developed approach. The active fungal extracts discovered here are ideal candidates for further development as mosquitocides.
Mosquitoes transmit a variety of diseases such as malaria, dengue fever, and Zika virus. Malaria alone was responsible for approximately 409,000 deaths in 2019, according to a recent report from the World Health Organization. For decades, vector control and elimination strategies have been at the forefront of approaches used to decrease mosquito populations, interrupt the transmission cycles of vector-borne diseases, and reduce their occurrence [1,2,3]. Formerly, the highly controversial compound dichlorodiphenyltrichloroethane (DDT) was one of the prime pest control agents used in the fight against malaria [4]. Nowadays, DDT can only be used under very specific and critical conditions due to the harm it causes to the environment. Because of the limited number of mosquito molecules targeted by currently available insecticides, and the small number of available types of insecticide [5], insecticide resistance in mosquito populations has begun to accelerate worldwide, and is a major problem for malaria control [6]. In addition, only a few novel insecticides have been introduced into the market during the past 30 years [7, 8]. Therefore, researchers and public health authorities are eagerly awaiting the discovery of novel insecticides to enable the control of malaria vector populations with a high rate of success.
Natural resources are commonly exploited for novel drug research and development, and some insecticides have been discovered from plants and fungi [9, 10]. Pyrethrin, for example, which was identified from the plant Chrysanthemum cinerariifolium and subsequently used for the development of pyrethrin analogs (synthetic pyrethroids), has proven to be highly useful for the successful treatment of many insect pests [11]. Fungi offer many advantages over plants as sources of metabolites because (i) an enormous number of fungal species have been identified, and many more await discovery; (ii) they produce diverse secondary metabolites; (iii) their metabolites can be generated using large-scale fermentation approaches, which makes them very attractive for further development [12,13,14].
Several studies have reported that species of some fungal genera, e.g. Lagenidium, Coelomomyces, Conidiobolus, Entomophthora, Culicinomyces, Erynia, Beauveria, and Metarhizium, display a potent ability to kill many species of mosquitoes, including those of the genera Anopheles, Culex, and Aedes [15, 16]. Entomopathogenic fungi have also been widely utilized for the treatment of insect pests, especially in agriculture. Their life cycles are associated with the synthesis and secretion of different active metabolites, such as destruxins, efrapeptins, oosporein, beauvericin, and beauveriolides. The effects of these types of compounds on insects have been summarized by Strasser et al. [17]. Mycelial extracts of various fungi showed high cytotoxicity and toxicity to larval and adult stages of mosquitoes [18, 19]. Based on the available literature, we hypothesize that new, effective insecticides can be produced from fungal metabolites.
To date, the discovery of natural fungal metabolites that are insecticidal has been challenging, as very time-consuming bioassays are the most effective means of testing fungal samples on live insects. An efficient high-throughput screening approach is not yet available for this [20]. Additionally, the lack of a publicly accessible library of diverse fungal metabolite extracts contributes to this lack of discovery of new insecticides.
Our lab has recently established a large and diverse global fungal extract library (GFEL), which contains more than 10,000 fungal isolates and many more structurally diverse metabolites [21]. This library includes metabolites from some fungal species, e.g. Penicillium spp., Aspergillus spp., Fusarium spp., Podospora spp., Mucor spp., Cladosporium spp., and Stoloniferum spp. which have been reported to possess larvicidal and adulticidal activities [15, 16]. The GFEL provides researchers with an important resource for the discovery of novel insecticides and ultimately for the control of mosquito populations.
To overcome the time-consuming nature of bioassays, we developed a new high-throughput screening approach that is based on cytotoxic activity assays against major insect cell lines. This new approach allows the determination of candidate fungal extracts that have exhibited toxic properties in preliminary tests. Following the discovery of these candidate extracts, we then validated their insecticidal effects by using traditional bioassays against larval and adult stages of a mosquito. Our results demonstrate that this newly developed assay can be utilized for the initial screening of fungal extracts for metabolites that have potential as novel insecticides.
Rearing Anopheles gambiae
Anopheles gambiae G3 strain was obtained from BEI Resources (BeiResources.org). The strain was originally collected on McCarthy Island, Gambia, West Africa. It is a wild-type mosquito strain that has been used in research for more than 30 years. Anopheles gambiae G3 strain mosquitoes were reared in a closed Darwin growth chamber at constant 27 °C and 80% humidity under a 12-h light/12-h dark cycle and standard laboratory conditions. Mosquito larvae were fed with 0.05 mg ground fish food for nishikigoi (Hikari, Japan) per larva per day. The adult mosquitoes were kept on 10% sucrose and were occasionally blood-fed with human blood and serum obtained from a blood bank (Oklahoma Blood Institute, Oklahoma City, OK) for egg production.
Cell culturing
We used the BTI-Tn-5B1-4 cell line [High Five Cells (Hi-5)], which is derived from ovarian cells of the cabbage looper (Trichoplusia ni) and is the standard insect cell line commonly used in many laboratories. We also chose to use the An. gambiae Sua5B cell line as these cells are immunocompetent and hemocyte-like. The Hi-5 and Sua5B cells were cultured in Express Five SFM medium (Invitrogen) and Grace's Supplemented Insect Medium (Gibco, Waltham, MA), respectively, at 27 °C, as described previously [22]. The Sua5B cell medium also contained 10% heat-inactivated fetal bovine serum (Gibco); the cell lines were generally passaged twice per week upon reaching 90% confluency.
Fungal culture and metabolite extraction process
A global fungal library was recently generated and reported in the literature [21]. In summary, 2395 soil samples and 2324 plant samples were collected from 36 regions of Africa, Asia, and North America. About 10,000 fungal strains were isolated from these samples. The fungi were cultured on a large scale, as described previously in detail [23]. In short, the fungi were cultured with 500 g of Cheerios breakfast cereal (General Mills, Minneapolis, MN). The cereal was sterilized, dried, and mixed with 1 L of sterile 0.3% sucrose solution containing 0.005% chloramphenicol. The fungal cultures were incubated at 27 °C for 4 weeks in a mushroom bag, and fungal metabolites in the solid culture were extracted with 2 L of ethyl acetate. Ethyl acetate enables the extraction of only small molecules, and large molecules such as DNA and proteins are eliminated.
MTT assays for cell survival measurement
MTT is a compound that acts as a hydrogen ion acceptor in the respiratory chain in the mitochondria of living cells. After entering a living cell, MTT is reduced into formazans, which are water-insoluble blue-purple crystalline structures that are deposited in the cell. It is important to note that this reaction does not occur in dead cells, and that dimethyl sulfoxide (DMSO) can dissolve formazans in cells. The light absorption at 570-nm wavelength is used to quantify formazan deposits. Thus, we deduced that the absorbance level at 570 nm corresponded to the number of living cells [24, 25].
The initial screening of the extracts was carried out with Hi-5 cells. About 2 × 104 Hi-5 cells were seeded in each well of a TC-treated 96-well plate (Corning, NY). After cell attachment, the initial seeding medium was removed and replaced with 99 µL of fresh Express Five SFM medium (Invitrogen). About 1 µL of fungal extract at 100 µg/mL final concentration dissolved in DMSO was added to each well. A 1-µL volume of DMSO was used as the negative control, to show that it did not kill the cells. We chose a lethal dosage of blasticidin, a peptidyl nucleoside antibiotic isolated from Streptomyces griseochromogenes that inhibits protein synthesis, at a final concentration of 50 µg/mL as the positive control. At this concentration, blasticidin is known to kill 100% of Hi-5 and Sua5B cells. The plates were then incubated at 27 °C for 24 h. Next, 10 µL of MTT dissolved in PBS at a concentration of 5 mg/mL was added to each well and incubated for 4 h at 37 °C with a 5% CO2 supply. The medium was then removed from each well and 100 µL acidic isopropanol (0.04 N HCl in isopropanol) solution was added and mixed thoroughly. The plate was incubated again for 10 min at 37 °C to dissolve formazan crystals. Absorbance measurement at 570 nm was carried out with an Epoch Microplate Spectrophotometer (BioTek, Winooski, VT). The data were analyzed with Prism 9.2 software using the ANOVA test (GraphPad, San Diego, CA). The following equations were used:
To calculate the inhibition activity of a single fungal extract on cell proliferation,
$$\text{Extract inhibition activity} = \frac{\text{A}_{570}\,\text{DMSO} - \text{A}_{570}\,\text{Experimental}}{\text{A}_{570}\,\text{DMSO} - \text{A}_{570}\,\text{Blasticidin}} \times 100$$
To calculate the cell survival rate,
$$\text{Cell survival rate} = \frac{\text{A}_{570}\,\text{Experimental} - \text{A}_{570}\,\text{Blasticidin}}{\text{A}_{570}\,\text{DMSO} - \text{A}_{570}\,\text{Blasticidin}} \times 100$$
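For illustration, these two quantities can be computed directly from the plate-reader absorbance values; the following minimal Python sketch uses hypothetical A570 readings (the values and variable names are ours, not measured data):

import numpy as np

# Hypothetical A570 readings from replicate wells (not measured data):
# DMSO wells approximate 100% survival, blasticidin wells approximate 0% survival.
a570_dmso = np.mean([0.92, 0.95, 0.90])
a570_blasticidin = np.mean([0.11, 0.10, 0.12])
a570_experimental = np.mean([0.25, 0.28, 0.24])

inhibition = (a570_dmso - a570_experimental) / (a570_dmso - a570_blasticidin) * 100
survival = (a570_experimental - a570_blasticidin) / (a570_dmso - a570_blasticidin) * 100

print(f"Extract inhibition activity: {inhibition:.1f}%")
print(f"Cell survival rate: {survival:.1f}%")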
As a subsequent validation step and to further confirm the cytotoxicity of candidates as potential mosquitocides, Sua5B mosquito cells were seeded at 2 × 104 per well and used in another MTT assay at 100 μg/mL final concentration. The overall procedure and analysis for the Sua5B assay were the same as those described above for the Hi-5 cells.
Larvicidal bioassays
Larval mortality bioassays were carried out according to standard protocols provided by the World Health Organization [26] with only slight modifications [27]. First, approximately 10 mL of distilled water was added to a 50-mL beaker. Then, each fungal extract dissolved in DMSO was added to 60-mm × 15-mm Petri dishes at final concentrations of 0, 50, 100, 200, 300, and 400 µg/mL. About 20 fourth-instar larvae (L4) were transferred to the Petri dishes and incubated for 24 h in a closed Darwin growth chamber at constant 27 °C and 80% humidity under a 12-h light/12-h dark cycle and standard laboratory conditions. Ground fish food was not supplied during this stage of the experimental process. Finally, the live and dead mosquito larvae were counted for each Petri dish and the following equation was used to determine extract toxicity:
$$\text{Toxicity}\,(\%) = \frac{\text{Mortality rate of experimental} - \text{control mortality}}{100 - \text{control mortality}} \times 100$$
Three replicates per dose, including the negative control (1% DMSO without any fungal extract), were used in the same experimental setting, and three independent repeats were carried out for the dose–response assays. The lethal concentration of the extracts that killed 50% of the larvae (LC50) was calculated using Prism 9.2 (GraphPad Software, CA).
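To illustrate how a dose-response summary such as the LC50 can be reproduced outside Prism, the following Python sketch fits a two-parameter log-logistic curve to placeholder toxicity data; the log-logistic model is used here only as a stand-in for Prism's dose-response routines, and the dose and toxicity values shown are illustrative, not measured results:

import numpy as np
from scipy.optimize import curve_fit

# Placeholder corrected toxicity (%) at each extract concentration (µg/mL)
dose = np.array([50.0, 100.0, 200.0, 300.0, 400.0])
toxicity = np.array([12.0, 40.0, 75.0, 91.0, 98.0])

def log_logistic(x, lc50, slope):
    # Two-parameter log-logistic dose-response model bounded at 0-100% mortality
    return 100.0 / (1.0 + (lc50 / x) ** slope)

(lc50, slope), _ = curve_fit(log_logistic, dose, toxicity, p0=[150.0, 2.0])
lc90 = lc50 * 9.0 ** (1.0 / slope)  # solves 100 / (1 + (LC50/x)^slope) = 90

print(f"LC50 ~ {lc50:.1f} µg/mL, LC90 ~ {lc90:.1f} µg/mL, slope ~ {slope:.2f}")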
Synergetic effects of piperonyl butoxide on fungal candidate toxicity
We tested for a synergetic effect between the fungal extract candidates and piperonyl butoxide (PBO), a known cytochrome P450 inhibitor of the detoxification pathway in insect cells [36]. First, the sub-lethal concentration of PBO against the larvae was examined by testing final concentrations of 1, 3, 4, 5, 10, 50, and 100 µg/mL. Anopheles gambiae L4 larvae were sorted into glass Petri dishes filled with distilled water to which PBO was added at the above concentrations. Larval mortality was determined 24 h post-treatment. Next, 76F6 alone or 76F6 supplemented with PBO at the maximum sublethal dose (76F6 at a final concentration of 125 μg/mL in both cases) was added to the distilled water. Water containing only PBO was used as the negative control. Then, 20 L4 larvae from each group were transferred into the Petri dishes and maintained in a closed Darwin growth chamber at constant 27 °C and 80% humidity under a 12-h light/12-h dark cycle and standard laboratory conditions. Mortality was recorded after 24 h and the experiments were repeated three times.
Pesticide bioassays for adult mosquitoes
Based on a recent approach to test pesticides via bioassays for adult mosquitoes [28], we determined the toxicity of candidate fungal extracts against adult female An. gambiae. Anopheles gambiae were maintained ad libitum on a sterile 10% sucrose solution. Naïve 3- to 5-day-old female mosquitoes were cold-anesthetized on ice and sorted into groups of 24 in a glass Petri dish to give experimental and control groups. About 0.5 µL of the fungal extract in acetone at a concentration of 5 µg/µL was deposited on the notum of a mosquito. Approximately 0.5 µL acetone was used as a control. There were about 12 female An. gambiae in each group. After successful application of the fungal extracts, the mosquitoes were transferred to a 5-ounce Solo waxed paper water cup (Dart Container, MI) and maintained on the sterile 10% sucrose diet.
To assess a possible dose-dependent response of the fungal extracts, new mosquitoes were cold-anesthetized on ice and sorted into groups of 12 females in a glass Petri dish, as described earlier. We dissolved the extracts in acetone to obtain final concentrations of 0.25, 0.5, 1, 2.5, 5, and 10 µg/mosquito, and applied the extracts to the nota, as described earlier. Post-treatment, the mosquitoes were transferred to a 5-ounce waxed paper cup and maintained on the sterile 10% sucrose diet. Mortality was recorded after 24 h. The experiments were conducted with triplicates per sample and performed as three independent repeats.
Identification of fungal species
The conserved sequences of the internal transcribed spacer (ITS) region of 5.8S and 28S ribosomal DNA were used to identify individual fungal species [21]. A small amount of mycelium (0.1–1 mg) was taken from fungus 76F6, rinsed in 400 μL sterilized water, and then collected by centrifugation at 15,000 g for 2 min. The mycelium was re-suspended in 100 μL sterilized water, 1 μL of which was used for PCR. The DNA fragments were amplified using ITS1 (TCCGTAGGTGAACCTGCGG) and ITS4 (TCCTCCGCTTATTGATATGC) primers [21]. The reaction was carried out under the following conditions: 94 °C for 2 min to denature DNA; 35 cycles of 94 °C for 30 s, 55 °C for 30 s, and 72 °C for 1 min; and 72 °C for 5 min to complete the reaction. The amplified PCR product was gel-extracted, purified, and sequenced (Eurofins Genomics, Louisville, KY). The raw sequence data were analyzed, and low-quality sequence ends removed. Subsequently, the edited sequences were compared with the National Center for Biotechnology Information database using BLAST to identify individual fungal species. Based on the ITS sequences, a phylogenetic tree was constructed via the neighbor-joining method using MEGA 6.0 software, as described previously [29].
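As an illustration of the tree-building step, a neighbor-joining tree can also be generated with Biopython instead of MEGA; the sketch below assumes an aligned FASTA file of ITS sequences (the filename is hypothetical):

from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Assumes "its_aligned.fasta" holds the aligned ITS sequences of 76F6 and the
# reference Penicillium strains (hypothetical filename).
alignment = AlignIO.read("its_aligned.fasta", "fasta")

calculator = DistanceCalculator("identity")        # pairwise identity distances
distance_matrix = calculator.get_distance(alignment)

constructor = DistanceTreeConstructor()
nj_tree = constructor.nj(distance_matrix)          # neighbor-joining tree

Phylo.draw_ascii(nj_tree)                          # quick text rendering of the tree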
P-values, LC50, LC90, confidence intervals, and slopes were calculated using previously reported approaches [30,31,32] implemented in Prism 9.2 software (GraphPad Software).
High-throughput MTT-based cytotoxicity assay screening for fungal extracts that inhibited cell survival
Our MTT-based cytotoxicity assay for high-throughput screening purposes was used to find insecticide candidates by quantifying living cells. Thus, we initially tested our global fungal extract library (GFEL) for insecticide candidates on Hi-5 cells. Experimental optimization via MTT assays determined that the toxic effects of the extracts occurred at a concentration of 100 μg/mL. Hence, 100 μg/mL was used as the standard concentration for our screening assays.
Our results demonstrated that, of 192 fungal extracts, 12 candidates had a > 85% inhibition rate for Hi-5 cells (Fig. 1). These fungal extracts were 73D12, 73E1, 73E6, 73E11, 76B8, 76C9, 76D4, 76D6, 76D7, 76E8, 76F6, and 76F11. These extracts were chosen for additional screening using mosquito Sua5B cells. Ten of these extracts significantly decreased the survival of Sua5B cells when compared to the negative control, DMSO (Fig. 2a; P < 0.05). Six of the extract candidates (76B8, 76C9, 76D7, 76E8, 76F6, and 76F11) exhibited very high toxicity, as only < 20% of Sua5B cells survived. The P-value was calculated for the effect of each fungal extract versus the DMSO control. Effect size vs P-value showed that these six candidate extracts indeed had a large effect on cell survival rate, with a ratio of > 6 (Fig. 2b; P < 0.001). Due to their high cytotoxicity and deleterious effect on cell survival, these six candidate insecticides were analyzed further.
Toxicity of the ethyl acetate extracts of the different fungi (100 μg/mL) against High Five Cells (Hi-5). The numbers indicate percentage inhibition of cell proliferation. Lighter purple colors indicate higher inhibition, darker purple colors indicate lower inhibition, and red font colors highlight >85% inhibition rate
Anopheles gambiae Sua5B cell line (Sua5B) cell survival analysis for the 12 candidate fungal extracts. a Ten extracts significantly decreased the survival of Sua5B cells when compared to the negative control, dimethyl sulfoxide (DMSO), by t-test. There were three replicates for each treatment. b Effect size (fold change) of the individual extracts against Sua5B cells (log10 y-axis) with respective P-values. P = 0.05 served as the cutoff for statistical significance; *** P < 0.001, ** P < 0.01, * P < 0.05
We also measured the effects of different fungal extract concentrations on Sua5B with our MTT-based cytotoxicity assay. The toxic effects were highly dose-dependent, as elevated concentrations displayed increased toxicity and lower survival rates of cells when compared to the control, DMSO (Fig. 3a). Extracts 76F6 and 76F11 showed the most potent cytotoxicity, even at a low concentration of 1 μg/mL (Fig. 3b). At a final concentration of 100 μg/mL, all the extracts showed nearly complete inhibition of Sua5B cell growth. The half-maximal inhibitory concentration (IC50) of 76B8, 76C9, 76D7, 76E8, 76F6 and 76F11 was 12.3 μg/mL, 8.2 μg/mL, 5.7 μg/mL, 42.2 μg/mL, 0.47 μg/mL, and 1.0 μg/mL, respectively. We concluded that, among the candidate fungal extracts, 76F6, which had an IC50 of 0.47 μg/mL, was the most effective in inhibiting the proliferation of Sua5B cells.
The six selected candidate fungal extracts exhibited dose-dependent effects against Sua5B cell survival. For abbreviations, see Fig. 2
Candidate fungal extracts exhibited lethal effects on mosquito larvae
We further analyzed the six crude fungal extract candidates and investigated their larvicidal activities to determine their potential as insecticides. Fourth-instar larvae of An. gambiae were treated in distilled water containing one of the six fungal extracts at 400 μg/mL final concentration. These fungal extracts displayed various larvicidal activities (Fig. 4a). The toxicity of the 76F6 fungal extract at 400 μg/mL to mosquito larvae was close to 100%, while the toxicity of the other five fungal extracts against mosquito larvae was below 40%. The experiments were repeated in triplicate and similar trends were obtained. Next, we carried out a serial dilution of 76F6 from 50 μg/mL to 400 μg/mL final concentration. We determined that the effect of 76F6 was dose-dependent, and its LC50 and LC90 were 122 μg/mL and 295.8 μg/mL, respectively (Table 1; Fig. 4b). Thus, we conclude that extract 76F6 is a prime fungal candidate for the control of An. gambiae larvae.
Larvicidal activity of candidate fungal extracts against fourth-instar Anopheles gambiae 24 h post-exposure. a The larvicidal activity of different fungal extracts (400 μg/mL) based on three replicates. b The larvicidal activity of fungal extract 76F6 was measured in triplicate and was dose-dependent
Table 1 Lethal concentration of fungal extract 76F6 that killed 50% of larval or adult stages of Anopheles gambiae (LC50) and LC90 values
Piperonyl butoxide shows synergetic toxic effects with 76F6 against An. gambiae larvae
We also evaluated the synergetic effect between extract 76F6 and PBO on the mortality of An. gambiae larvae. Cytochrome P450 enzymes detoxify small-molecule toxins and PBO is an inhibitor of these enzymes. First, we performed a serial dilution of PBO in distilled water to determine the maximum sublethal dose for An. gambiae larvae, i.e. the highest dose at which no larvae are killed. The maximum sublethal dosage of PBO alone was about 3 μg/mL, as PBO alone exhibited toxic effects at concentrations higher than 3 μg/mL (Fig. 5a). Next, the toxic effects of PBO and 76F6 in combination on An. gambiae larvae were examined. After a 24-h incubation, the survival rate of the 76F6-treated mosquito larvae in the presence of PBO was significantly decreased, from 54.2 to 29.9% (Fig. 5b; P < 0.03); thus the potency of 76F6 was higher in the presence of PBO. This indicates that synergetic compounds, such as PBO, can be used in combination with 76F6 to improve its toxicity against larvae. These results support our hypothesis that cytochrome P450s in An. gambiae participate in the detoxification of active small molecules in extract 76F6.
Synergetic effects of extract 76F6 and the cytochrome P450 inhibitor piperonyl butoxide (PBO) on mosquito larvae. a Larvae survival rates at various concentrations of PBO (n = 3). b In the presence of 3 μg/mL PBO, extract 76F6 (125 μg/mL) killed significantly more larvae than when applied alone (P < 0.03; n = 3)
Candidate fungal extracts also exhibited lethal effects on adult mosquitoes
Following our discovery that fungal extract 76F6 could effectively kill mosquito larvae, we examined the effects of our six candidate fungal extracts on adult An. gambiae. Four of the six candidates caused below 50% survival after the application of 2.5 µg fungal extract/mosquito to the notum (Fig. 6a). Consistent with our earlier results demonstrating the high toxicity of fungal extract 76F6 to larvae, this extract greatly diminished adult mosquito survival to ~ 25%. Consequently, the toxicity of 76F6 at different concentrations was measured against adult An. gambiae. The results showed that the toxicity of 76F6 increased as its concentration increased (Fig. 6b). The LC50 of 76F6 on adult An. gambiae was 1.7 µg/mosquito, while the LC90 was 2.8 μg/mosquito (Table 1).
Survival rates of Anopheles gambiae adults after exposure to candidate fungal extracts. a Survival rates of 3- to 5-day-old adult An. gambiae treated with 2.5 μg/mosquito of the six tested extracts. b Dose-mortality curve of adult An. gambiae treated with extract 76F6. The experiments were conducted in triplicate and performed as three independent repeats. Data are means ± SEs
Examination of fungal extract 76F6 reveals a relationship with Penicillium toxicarium
We examined the morphology of several 76F6 fungal colonies on potato dextrose agar plates. After 7 and 30 days of incubation at 25 °C, the colonies attained a size of 50 mm and 65 mm in diameter, respectively. The colonies were slightly pink in pigmentation (Fig. 7a). Viewed from the reverse side of the plate, the colonies appeared to be light yellow in color (Fig. 7b). Conidia were present as long dry chains, and the conidiophores growing from the aerial mycelium were about 5 μm long (Fig. 7c). The mature conidia were round and approximately 2–4 μm in diameter (Fig. 7d). The morphology of this fungus was extremely similar to that of Penicillium spp. [33].
Morphology of candidate fungus 76F6. a Front view of a fungal colony. b Back view of a fungal colony. c, d Fungal conidia
Next, the ITS region of the 76F6 fungus was amplified and sequenced to identify the fungal strain and further verify its morphological similarity. The obtained ITS sequence has been submitted to GenBank (accession number MT072229). The BLAST tool indicated that the strain from which extract 76F6 was obtained is evolutionarily related to Penicillium toxicarium, with > 95% coverage and > 99% sequence identity (Fig. 8).
Phylogenetic analysis of the 18S ribosomal DNA sequences of the fungal strain from which extract 76F6 was obtained
The increasing resistance of mosquitoes to insecticides calls for the discovery of novel mosquitocides. However, it is currently extremely difficult to screen a large number of samples within a short time using traditional bioassays. We established a high-throughput cytotoxicity assay against Hi-5 and Sua5B insect cells that was based on the original MTT assay to screen our fungal extract library for insecticidal potential [24]. The results showed that our MTT-based high-throughput cytotoxicity assay is a useful alternative to standard bioassays for the initial screening and identification of cytotoxic extracts used for downstream analysis. Hi-5 cells are commonly available in laboratories and generally used for protein expression, but also serve as an important model for mechanistic studies on a variety of insecticidal substances [34]. Hence, we used Hi-5 cells during the initial screening round. The subsequent cytotoxicity analysis of the candidate fungal extracts against Sua5B, a mosquito cell line that originates from An. gambiae hemolymph, was important to assess their effectiveness as mosquitocides. These results also narrowed the list of candidates for the subsequent bioassays, which were very time-consuming. Finally, we identified 76F6 as a potential candidate for larval control from our initial MTT-based high-throughput cytotoxicity assay.
PBO is a known inhibitor of cytochrome P450 monooxygenases (P450s) [35]. P450s are well known for their abilities to detoxify many different insecticides against mosquitoes, including pyrethroids, DDT, and some organophosphate insecticides [36]. We additionally showed that, upon the addition of PBO, the toxicity of 76F6 increased significantly, indicating that the active compound in extract 76F6 is possibly a small molecule that may well pass through a cell's detoxifying pathway. The exact pathway of this compound and how it exhibits synergistic toxic effects with the addition of PBO remain unknown and thus need to be investigated in future studies.
In this study, we ultimately discovered that four of the fungal extract candidates were able to kill more than half of the adult mosquitoes at a dose of 2.5 µg/mosquito. It is noteworthy that fungal extract 76F6 exhibited a potent toxic effect on both mosquito larvae and adults, whereas extracts 76B8, 76E8, and 76F11 only had a very small effect on L4 larvae. However, these latter extracts showed medium-to-high toxic activity in adult mosquitoes. Possible reasons for this divergence include differences in the stability of the active compounds in water, in target expression between life stages, in the uptake of active components by larvae, or in the activity of detoxification mechanisms between larvae and adult mosquitoes. Future studies are required to determine the exact active chemical structures of these compounds and whether they can exhibit higher toxicity to L4 larvae and adult mosquitoes at higher concentrations.
Fungal extract 76F6, from a strain of P. toxicarium, was shown, to our knowledge for the first time, to kill both larval and adult An. gambiae. The LC50 of this extract for the L4 larvae was about 122 μg/mL, which is very similar to that of an ethyl acetate extract of a strain of Penicillium daleae on L4 larvae of Culex quinquefasciatus [19]. The phylogenetic analysis showed that these two Penicillium strains shared only 26.8% ITS sequence identity, suggesting that their active compounds may be different. To identify the active compounds, the chemical constituents of fungal extract 76F6 need to be elucidated.
The MTT-based high-throughput cytotoxicity-screening assay for pesticides that we present here requires less labor than the other available high-throughput approaches for larvae [37]. It also proved to be extremely useful for the discovery of new insecticide candidates.
We successfully established a high-throughput MTT-based cytotoxicity screening approach to search for and discover new pesticides based on their cytotoxic activity. Using our fungal extract library, we demonstrated the discovery of several candidate mosquitocides via this method. One of the candidate insecticides, 76F6, an extract of P. toxicarium, was found to be suitable for mosquito control.
All data generated or analyzed during this study are included in this published article. The ITS sequence is available from GenBank (accession number MT072229).
Gari T, Lindtjorn B. Reshaping the vector control strategy for malaria elimination in Ethiopia in the context of current evidence and new tools: opportunities and challenges. Malar J. 2018;17(1):454. https://doi.org/10.1186/s12936-018-2607-8.
Sougoufara S, Ottih EC, Tripet F. The need for new vector control approaches targeting outdoor biting anopheline malaria vector communities. Parasit Vectors. 2020;13(1):295. https://doi.org/10.1186/s13071-020-04170-7.
Killeen GF, Tatarsky A, Diabate A, Chaccour CJ, Marshall JM, Okumu FO, et al. Developing an expanded vector control toolbox for malaria elimination. BMJ Glob Health. 2017;2(2): e000211. https://doi.org/10.1136/bmjgh-2016-000211.
van den Berg H, Manuweera G, Konradsen F. Global trends in the production and use of DDT for control of malaria and other vector-borne diseases. Malar J. 2017;16(1):401. https://doi.org/10.1186/s12936-017-2050-2.
Haddi K, Berger M, Bielza P, Cifuentes D, Field LM, Gorman K, et al. Identification of mutations associated with pyrethroid resistance in the voltage-gated sodium channel of the tomato leaf miner (Tuta absoluta). Insect Biochem Mol Biol. 2012;42(7):506–13. https://doi.org/10.1016/j.ibmb.2012.03.008.
Namountougou M, Simard F, Baldet T, Diabate A, Ouedraogo JB, Martin T, et al. Multiple insecticide resistance in Anopheles gambiae s.l. populations from Burkina Faso, West Africa. PLoS ONE. 2012;7(11): e48412. https://doi.org/10.1371/journal.pone.0048412.
Niu G, Wang B, Zhang G, King JB, Cichewicz RH, Li J. Targeting mosquito FREP1 with a fungal metabolite blocks malaria transmission. Sci Rep. 2015;5: 14694. https://doi.org/10.1038/srep14694.
Hemingway J. The role of vector control in stopping the transmission of malaria: threats and opportunities. Philos Trans R Soc Lond B Biol Sci. 2014;369(1645):20130431. https://doi.org/10.1098/rstb.2013.0431.
Atanasov AG, Zotchev SB, Dirsch VM, International Natural Product Sciences Taskforce, Supuran CT. Natural products in drug discovery: advances and opportunities. Nat Rev Drug Discov. 2021;20(3):200–16. https://doi.org/10.1038/s41573-020-00114-z.
Sharma SB, Gupta R. Drug development from natural resource: a systematic approach. Mini Rev Med Chem. 2015;15(1):52–7. https://doi.org/10.2174/138955751501150224160518.
Chrustek A, Holynska-Iwan I, Dziembowska I, Bogusiewicz J, Wroblewski M, Cwynar A, et al. Current research on the safety of pyrethroids used as insecticides. Medicina. 2018. https://doi.org/10.3390/medicina54040061.
Nielsen JC, Nielsen J. Development of fungal cell factories for the production of secondary metabolites: linking genomics and metabolism. Synth Syst Biotechnol. 2017;2(1):5–12. https://doi.org/10.1016/j.synbio.2017.02.002.
Hawksworth DL, Lucking R. Fungal diversity revisited: 2.2 to 3.8 million species. Microbiol Spectr. 2017. https://doi.org/10.1128/microbiolspec.FUNK-0052-2016.
Pham JV, Yilma MA, Feliz A, Majid MT, Maffetone N, Walker JR, et al. A review of the microbial production of bioactive natural products and biologics. Front Microbiol. 2019;10:1404. https://doi.org/10.3389/fmicb.2019.01404.
Scholte EJ, Knols BG, Samson RA, Takken W. Entomopathogenic fungi for mosquito control: a review. J Insect Sci. 2004;4:19. https://doi.org/10.1093/jis/4.1.19.
Baron NC, Rigobelo EC, Zied DC. Filamentous fungi in biological control: current status and future perspectives. Chil J Agr Res. 2019;79(2):9.
Strasser H, Vey A, Butt T. Are there any risks in using entomopathogenic fungi for pest control, with particular reference to bioactive metabolites of Metarhizium, Tolypocladium and Beauveria species. Biocontrol Sci Technol. 2000;10(6):717–35.
Bhimba BV, Franco DA, Jose GM, Mathew JM, Joel EL. Characterization of cytotoxic compound from mangrove derived fungi Irpex hydnoides VB4. Asian Pac J Trop Biomed. 2011;1(3):223–6. https://doi.org/10.1016/S2221-1691(11)60031-2.
Ragavendran C, Mariappan T, Natarajan D. Larvicidal, histopathological efficacy of Penicillium daleae against larvae of Culex quinquefasciatus and Aedes aegypti plus biotoxicity on Artemia nauplii a non-target aquatic organism. Front Pharmacol. 2017;8:773. https://doi.org/10.3389/fphar.2017.00773.
Vyas N, Dua KK, Prakash S. Larvicidal activity of metabolites of Metarhizium anisopliae against Aedes and Culex mosquitoes. Entomol Ornithol Herpetol. 2015;4(4):1–3.
Niu G, Annamalai T, Wang X, Li S, Munga S, Niu G, et al. A diverse global fungal library for drug discovery. PeerJ. 2020;8: e10392. https://doi.org/10.7717/peerj.10392.
Niu G, Cui Y, Wang X, Keleta Y, Li J. Studies of the parasite-midgut interaction reveal plasmodium proteins important for malaria transmission to mosquitoes. Front Cell Infect Microbiol. 2021;11: 654216. https://doi.org/10.3389/fcimb.2021.654216.
Niu G, Hao Y, Wang X, Gao JM, Li J. Fungal metabolite asperaculane B inhibits malaria infection and transmission. Molecules. 2020. https://doi.org/10.3390/molecules25133018.
Fallon AM, Hellestad VJ. Standardization of a colorimetric method to quantify growth and metabolic activity of Wolbachia-infected mosquito cells. In Vitro Cell Dev Biol Anim. 2008;44(8–9):351–6. https://doi.org/10.1007/s11626-008-9129-6.
Mosmann T. Rapid colorimetric assay for cellular growth and survival: application to proliferation and cytotoxicity assays. J Immunol Methods. 1983;65(1–2):55–63. https://doi.org/10.1016/0022-1759(83)90303-4.
World Health Organization. WHO: guidelines for laboratory and field testing of mosquito larvicides. Edited by WHO, vol. WHO/CDS/WHOPES/GCDPP/2005.13. 2005; 39. https://apps.who.int/iris/handle/10665/69101. Accessed 23 Nov 2021.
Govindarajan M, Benelli G. Facile biosynthesis of silver nanoparticles using Barleria cristata: mosquitocidal potential and biotoxicity on three non-target aquatic organisms. Parasitol Res. 2016;115(3):925–35. https://doi.org/10.1007/s00436-015-4817-0.
Agramonte NM, Bloomquist JR, Bernier UR. Pyrethroid resistance alters the blood-feeding behavior in Puerto Rican Aedes aegypti mosquitoes exposed to treated fabric. PLoS Negl Trop Dis. 2017;11(9): e0005954. https://doi.org/10.1371/journal.pntd.0005954.
Tamura K, Stecher G, Peterson D, Filipski A, Kumar S. MEGA6: molecular evolutionary genetics analysis version 6.0. Mol Biol Evol. 2013;30(12):2725–9. https://doi.org/10.1093/molbev/mst197.
Finney DJ. The adjustment for a natural response rate in probit analysis. Ann Appl Biol. 1949;36(2):187–95. https://doi.org/10.1111/j.1744-7348.1949.tb06408.x.
Finney DJ, Stevens WL. A table for the calculation of working probits and weights in probit analysis. Biometrika. 1948;35(Pts 1–2):191–201.
Worden AN. Toxicological methods. Toxicology. 1974;2(4):359–70. https://doi.org/10.1016/0300-483x(74)90029-8.
Houbraken J, Visagie CM, Meijer M, Frisvad JC, Busby PE, Pitt JI, et al. A taxonomic and phylogenetic revision of Penicillium section Aspergilloides. Stud Mycol. 2014;78:373–451. https://doi.org/10.1016/j.simyco.2014.09.002.
Huang XY, Li OWL, Xu HH. Induction of programmed death and cytoskeletal damage on Trichoplusia ni BTI-Tn-5B1-4 cells by azadirachtin. Pestic Biochem Physiol. 2010;98:289–95.
Niu GD, Johnson RMD, Berenbaum MRD. Toxicity of mycotoxins to honeybees and its amelioration by propolis. Apidologie. 2011;42:79–87.
David JP, Boyer S, Mesneau A, Ball A, Ranson H, Dauphin-Villemant C. Involvement of cytochrome P450 monooxygenases in the response of mosquito larvae to dietary plant xenobiotics. Insect Biochem Mol Biol. 2006;36(5):410–20. https://doi.org/10.1016/j.ibmb.2006.02.004.
Pridgeon JW, Becnel JJ, Clark GG, Linthicum KJ. A high-throughput screening method to identify potential pesticides for mosquito control. J Med Entomol. 2009;46(2):335–41. https://doi.org/10.1603/033.046.0219.
This work was supported by the National Institutes of Health (R01AI125657) and National Science Foundation Career Awards (1742644) to Jun Li. Dr. Liang Jin was a visiting scholar at Florida International University and was supported by the Jiangxi Province Natural Science Foundation, China (20192ACB20008).
Department of Biological Sciences, Florida International University, Miami, FL, 33199, USA
Liang Jin, Guodong Niu & Jun Li
Institute of Microbiology, Jiangxi Academy of Sciences, Nanchang, Jiangxi, China
Liang Jin, Limei Guan, Zhigao Zhan & Xi Zhou
Biomolecular Sciences Institute, Florida International University, Miami, FL, 33199, USA
Guodong Niu & Jun Li
Herbert Wertheim College of Medicine, Florida International University, Miami, FL, 33199, USA
Julian Ramelow
Wuhan Institute of Virology, Chinese Academy of Sciences, Wuhan, 430071, Hubei, China
Xi Zhou
LJ and GN conceived the concept, initiated the project, conducted the experiments, and wrote the first draft of the manuscript. LG screened the fungal library. JR reared the mosquitoes, produced the final graphs, and edited and proofread the final draft of the manuscript. ZZ and XZ carried out the data analysis. JL conceived the concept, designed the project, analyzed the data, and wrote the manuscript. All the authors read and approved the final manuscript.
Correspondence to Liang Jin or Jun Li.
Jin, L., Niu, G., Guan, L. et al. Discovery of mosquitocides from fungal extracts through a high-throughput cytotoxicity-screening approach. Parasites Vectors 14, 595 (2021). https://doi.org/10.1186/s13071-021-05089-3
Penicillium toxicarium
Dipteran vectors and associated diseases | CommonCrawl |
\begin{document}
\title{Two Models are Better than One: Federated Learning Is Not Private For Google GBoard Next Word Prediction}
\iftrue \author{\IEEEauthorblockN{Mohamed Suliman} \IEEEauthorblockA{\textit{Trinity College Dublin} \\ Dublin, Ireland} \and \IEEEauthorblockN{Douglas Leith} \IEEEauthorblockA{\textit{Trinity College Dublin} \\ Dublin, Ireland} } \fi
\maketitle
\begin{abstract} In this paper we present new attacks against federated learning when used to train natural language text models. We illustrate the effectiveness of the attacks against the next word prediction model used in Google's GBoard app, a widely used mobile keyboard app that has been an early adopter of federated learning for production use. We demonstrate that the words a user types on their mobile handset, e.g. when sending text messages, can be recovered with high accuracy under a wide range of conditions and that counter-measures such as the use of mini-batches and the addition of local noise are ineffective. We also show that the word order (and so the actual sentences typed) can be reconstructed with high fidelity. This raises obvious privacy concerns, particularly since GBoard is in production use.
\end{abstract}
\section{Introduction} Federated Learning (FL) is a class of distributed algorithms for the training of machine learning models such as neural networks. A primary aim of FL when it was first introduced was to enhance user privacy, namely by keeping sensitive data stored locally and avoiding uploading it to a central server~\cite{mcmahan2017communication}. The basic idea is that users train a local version of the model with their own data, and share only the resulting model parameters with a central coordinating server. This server then combines the models of all the participants, transmits the aggregate back to them, and this cycle (i.e. a single FL `round') repeats until the model is judged to have converged.
A notable real-world deployment of FL is within Google's Gboard, a widely used Android keyboard application that comes pre-installed on many mobile handsets and which has $>5$ Billion downloads~\cite{gboardplay22}. Within GBoard, FL is used to train the Next Word Prediction (NWP) model that provides the suggested next words that appear above the keyboard while typing~\cite{hard2018federated}.
In this paper we show that FL is not private for Next Word Prediction. We present an attack that reconstructs the original training data, i.e. the text typed by a user, from the FL parameter updates with a high degree of fidelity. Both the FedSGD and FederatedAveraging variants of FL are susceptible to this attack. In fairness, Google have been aware of the possibility of information leakage from FL updates since the earliest days of FL, e.g. see footnote 1 in~\cite{mcmahan2017communication}. Our results demonstrate that not only does information leakage indeed happen for real-world models deployed in production and in widespread use, but that the amount of information leaked is enough to allow the local training data to be fully reconstructed.
We also show that adding Gaussian noise to the transmitted updates, which has been proposed to ensure local Differential Privacy (DP), provides little defence unless the noise levels used are so large that the utility of the model becomes substantially degraded. That is, DP is not an effective countermeasure to our attack\footnote{DP aims to protect the aggregate training data/model against query-based attacks, whereas our attack targets the individual updates. Nevertheless, we note that DP is sometimes suggested as a potential defence against the type of attack carried out here.}. We also show that use of mini-batches of up to 256 sentences provides little protection. Other defences, such as Secure Aggregation (a form of multi-party computation MPC), Homomorphic Encryption (HE), and Trusted Execution Environments (TEEs), are currently either impractical or require the client to trust that the server is honest\footnote{Google's Secure Aggregation approach~\cite{bonawitz2016practical} is a prominent example of an approach requiring trust in the server, or more specifically in the PKI infrastructure which in practice is operated by the same organisation that runs the FL server since it involves authentication/verification of clients. We note also that Secure Aggregation is not currently deployed in the GBoard app despite being proposed 6 years ago.} in which case use of FL is redundant.
Previous studies of reconstruction attacks against FL have mainly focussed on image reconstruction, rather than text as considered here. Unfortunately, we find that the methods developed for image reconstruction, which are based on using gradient descent to minimise the distance between the observed model data and the model data corresponding to a synthetic input, do not readily transfer over to text data. This is perhaps unsurprising since the inherently discrete nature of text makes the cost surface highly non-smooth and so gradient-based optimisation is difficult to apply successfully. In this paper we therefore propose new reconstruction approaches that are specialised to text data.
It is important to note that the transmission of user data to a remote server is not inherently a breach of privacy. The risk of a privacy breach is related to the nature of the data being sent, as well as whether its owner can be readily identified. For example, sending device models, version numbers, and locale/region information is not an immediate concern but it seems clear that the sentences entered by users, e.g. when typing messages, writing notes and emails, web browsing and performing searches, may well be private. Indeed, it is not only the sentences typed which can be sensitive but also the set of words used (i.e. even without knowing the word ordering) since this can be used for targeting surveillance via keyword blacklists \cite{guardiannsasms}.
In addition, most Google telemetry is tagged with an Android ID. Via other data collected by Google Play Services the Android ID is linked to (i) the handset hardware serial number, (ii) the SIM IMEI (which uniquely identifies the SIM slot) and (iii) the user's Google account \cite{infocomgaen21,securecom21}. When creating a Google account users are encouraged to supply a phone number and for many people this will be their own phone number. Use of Google services such as buying a paid app on the Google Play store or using Google Pay further links a person's Google account to their credit card/bank details. A user's Google account, and so the Android ID, can therefore commonly be expected to be linked to the person's real identity.
\section{Preliminaries} \subsection{Federated Learning Client Update} Algorithm \ref{alg:fedlearning} gives the procedure followed by FL participants to generate a model update. The number of local epochs $E$, mini-batch size $B$, and local client learning rate $\eta$ can be changed depending on the FL application. When $E = 1$ and the mini-batch size is equal to the size of the training dataset, the update is called FedSGD; any other configuration corresponds to FedAveraging, where multiple gradient descent steps occur on the client. \begin{algorithm}
\caption{Federated Learning Client Update}\label{alg:fedlearning}
\KwInput{$\theta_0$: The parameters of the global model, training loss function $\ell$.}
\KwOutput{$\theta_1$: The model parameters after training on the client's data.}
\KwProc{clientUpdate}{
$\theta_1 \gets \theta_0$\;
$\mathcal{B} \gets (\mbox{split dataset into batches of size } B)$\;
$\mbox{\textbf{for} local epoch \textit{i} from 1 to $E$ \textbf{do}}$\;
$\ \ \mbox{\textbf{for} batch \textit{b} $\in \mathcal{B}$ \textbf{do}}$\;
$\ \ \ \ \theta_1 \gets \theta_1 - \eta\nabla \ell(\theta_1;b)$\;
\Return $\theta_1$\;
} \end{algorithm}
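For concreteness, Listing \ref{lst:clientupdate} gives a minimal PyTorch sketch of Algorithm \ref{alg:fedlearning}; the \texttt{model}, \texttt{dataset} and loss function are generic placeholders rather than the exact GBoard training pipeline.
\begin{lstlisting}[frame=single,language=Python,caption={Sketch of the client update (placeholder model and data)},label=lst:clientupdate]
import torch

def client_update(global_params, model, dataset, E=1, B=16, lr=0.001):
    # Load the current global parameters theta_0 into the local model.
    model.load_state_dict(global_params)
    loss_fn = torch.nn.CrossEntropyLoss()
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loader = torch.utils.data.DataLoader(dataset, batch_size=B, shuffle=True)
    for _ in range(E):                 # local epochs
        for x, y in loader:            # mini-batches of size B
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()          # theta_1, sent to the FL server
\end{lstlisting}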
\subsection{Threat Model} The threat model is that of an honest-but-curious adversary that has access to (i) the FL model architecture, (ii) the current global FL model parameters $\theta_0$, and (iii) the FL model parameters, $\theta_1$, after updating locally using the training data of an individual user. The FL server has, for example, access to all of these and so this threat model captures the situation where there is an honest-but-curious FL server.
We do not consider membership attacks against the global model, although knowledge of $\theta_0$ allows such attacks, since they have already received much attention in the literature. Instead we focus on local reconstruction attacks i.e. attacks that aim to reconstruct the local training data of a user from knowledge of $\theta_0$ and $\theta_1$. In the GBoard Next Word Prediction task the local training data is the text typed by the user while using apps on their mobile handset e.g. while sending text messages.
\subsection{GBoard NWP Model} Figure \ref{fig:gboard_lstm} shows the LSTM recurrent neural net (RNN) architecture used by Gboard for NWP. This model was extracted from the app.
\begin{figure}
\caption{Schematic of LSTM architecture. LSTM layer takes as input dense vector \(x_t\) representing a typed word and outputs a dense vector \(h_t\). This output is then mapped to vector \(z_t\) of size 9502 (the size of the dictionary) with the value of each element being the raw logit for the corresponding dictionary word. A softmax layer then normalises the raw \(z_t\) values to give a vector \(\hat{y}_t\) of probabilities.}
\label{fig:gboard_lstm}
\end{figure}
The Gboard LSTM RNN is a word level language model, predicting the probability of the next word given what the user has already typed into the keyboard. Input words are first mapped to a dictionary entry, which has a vocabulary of \(V = 9502\) words, with a special \texttt{<UNK>} entry used for words that are not in the dictionary, and \texttt{<S>} to indicate the start of a sentence. The index of the dictionary entry is then mapped to a dense vector of size \(D=96\) using a lookup table (the dictionary entry is one-hot encoded and then multiplied by a \(\mathbb{R}^{D \times V}\) weighting matrix \(W^T\)) and applied as input to an LSTM layer with 670 units i.e. the state \(C_t\) is a vector of size 670. The LSTM layer uses a CIFG architecture without peephole connections, illustrated schematically in Figure \ref{fig:gboard_lstm}. The LSTM state \(C_t\) is linearly projected down to an output vector \(h_t\) of size \(D\), which is mapped to a raw logit vector \(z_t\) of size \(V\) via a weighting matrix \(W\) and bias \(b\). This extra linear projection is not part of the orthodox CIFG cell structure that was introduced in \cite{cifg}, and is included to accommodate the model's tied input and output embedding matrices \cite{press2017using}. A softmax output layer finally maps this to an \([0,1]^{V}\) vector \(\hat{y}_t\) of probabilities, the \(i\)'th element being the estimated probability that the next word is the \(i\)'th dictionary entry.
\section{Reconstruction Attack} \subsection{Word Recovery} \label{sec:wr}
In next word prediction the input to the RNN is echoed in its output. That is, the output of the RNN aims to match the sequence of words typed by the user, albeit shifted one word ahead. The sign of the output loss gradient directly reveals information about the words typed by the user, which can then be recovered easily by inspection. This key observation, first made in \cite{zhao2020idlg}, is the basis of our word recovery attack.
After the user has typed \(t\) words, the output of the LSTM model at timestep \(t\) is the next word prediction vector \(\hat{y}_{t}\), \begin{align*} \hat{y}_{t,i} = \frac{e^{z_i}}{\sum_{j=1}^Ve^{z_j}},\ i=1,\dots, V \end{align*} with raw logit vector \(z_t=Wh_t+b\), where \(h_t\) is the output of the LSTM layer. The cross-entropy loss function for text consisting of \(T\) words is \(J_{1:T}(\theta)= \sum_{t=1}^T J_t(\theta)\) where \begin{align*} J_t(\theta)= -\log \frac{e^{z_{i^*}(\theta)}}{\sum_{j=1}^Ve^{z_j(\theta)}}, \end{align*} and \(i^*_t\) is the dictionary index of the \(t\)'th word entered by the user and \(\theta\) is the vector of neural net parameters (including the elements of \(W\) and \(b\)). Differentiating with respect to the output bias parameters \(b\) we have that, \begin{align*} \frac{\partial J_{1:T}}{\partial b_{k}} = \sum_{t=1}^T \sum_{i=1}^V \frac{\partial J_t}{\partial z_{t,i}} \frac{\partial z_{t,i}}{\partial b_k} \end{align*} where \begin{align*} \frac{\partial J_t}{\partial z_{t,i_t^*}}=\frac{e^{z_{i^*}}}{\sum_{j=1}^Ve^{z_j}}-1<0, \\ \quad \frac{\partial J_t}{\partial z_{t,i}}=\frac{e^{z_{i}}}{\sum_{j=1}^Ve^{z_j}}>0,\ i\ne i_t^* \end{align*} and \begin{align*} \frac{\partial z_{t,i}}{\partial b_k} = \begin{cases} 1 & k=i\\ 0 & \text{otherwise} \end{cases} \end{align*} That is, \begin{align*} \frac{\partial J_{1:T}}{\partial b_{k}} = \sum_{t=1}^T \frac{\partial J_t}{\partial z_{t,k}} \end{align*} It follows that for words \(k\) which do not appear in the text \(\frac{\partial J_{1:T}}{\partial b_{k}} >0\). Also, assuming that the neural net has been trained to have reasonable performance then \(e^{z_k}\) will tend to be small for words \(k\) that do not appear next and large for words which do. Therefore for words \(i^*\) that appear in the text we expect that \(\frac{\partial J_{1:T}}{\partial b_{i^*}} <0\).
The above analysis focuses mainly on the bias parameters of the final fully connected layer; however, similar methods can be applied to the \(W\) parameters. The key aspect that makes this attack easy is that the outputs echo the inputs, unlike, for example, the task of object detection in images, where the output is just the object label.
This observation is intuitive from a loss function minimisation perspective. Typically the estimated probability \(\hat{y}_{i^*}\) for an input word will be less than 1. Increasing \(\hat{y}_{i^*}\) will therefore decrease the loss function i.e. the gradient is negative. Conversely, the estimated probability \(\hat{y}_{i}\) for a word that does not appear in the input will be small but greater than 0. Decreasing \(\hat{y}_{i}\) will therefore decrease the loss function i.e. the gradient is positive.
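A toy numerical check of this sign argument is given in Listing \ref{lst:signcheck}; the vocabulary size, typed-word indices and logits are ours and are chosen only to make the arithmetic transparent.
\begin{lstlisting}[frame=single,language=Python,caption={Toy check of the bias-gradient sign argument},label=lst:signcheck]
import numpy as np

# d J_{1:T} / d b = sum_t (softmax(z_t) - onehot(i*_t)): entries at typed-word
# indices are driven negative, all other entries remain positive.
V = 10                           # toy vocabulary size
typed = [3, 7, 3]                # dictionary indices of the typed words
z = np.zeros(V)                  # toy logits (uniform next-word distribution)
p = np.exp(z) / np.exp(z).sum()  # softmax, here 0.1 for every word
grad_b = np.zeros(V)
for i_star in typed:
    onehot = np.zeros(V)
    onehot[i_star] = 1.0
    grad_b += p - onehot
print(np.where(grad_b < 0)[0])   # prints [3 7]
\end{lstlisting}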
\emph{Example:} To execute this attack in practice, simply subtract the final layer parameters of the current global model $\theta_0$ from those of the resulting model trained on the client's local data, $\theta_1$, as shown in Algorithm \ref{alg:word-extraction}. The indices of the negative values reveal the typed words. Suppose the client's local data consists of just the one sentence ``learning online is not so private''. We then train model $\theta_0$ on this sentence for 1 epoch, with a mini-batch size of 1, and SGD learning rate of 0.001 (FedSGD), and report the values at the negative indices in Table \ref{tab:vals}.
\begin{table}
\centering
\begin{tabular}{ |c|c|c| }
\hline
word & $i$ & $(\theta_1 - \theta_0)_i $ \\
\hline
learning & 7437 & -0.0009951561 \\
online & 4904 & -0.0009941629 \\
is & 209 & -0.000997875 \\
not & 1808 & -0.0009941144 \\
so & 26 & -0.0009965639 \\
private & 6314 & -0.0009951561 \\
\hline
\end{tabular}
\caption{Values of the final layer parameter difference at the
indices of the typed words. Produced after training the model on
the sentence ``learning online is not so private'',
$E = 1, B = 1, \eta = 0.001$. These are the only indices where
negative values occur.\label{tab:vals}} \end{table}
\begin{algorithm}
\caption{Word Recovery}\label{alg:word-extraction}
\KwInput{$\theta_0$: The global model's final layer parameters,
$\theta_1$: The final layer parameters of a model update}
\KwOutput{User typed tokens $w$}
\KwProc{recoverWords}{
$d \gets \theta_1 - \theta_0$\; $w \gets \{ i \ | \ d_i < 0 \}$\;
\Return $w$\;
} \end{algorithm}
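A minimal NumPy version of Algorithm \ref{alg:word-extraction} is sketched in Listing \ref{lst:recoverwords}; the bias vectors and the \texttt{vocab} lookup stand in for the extracted GBoard parameters and dictionary.
\begin{lstlisting}[frame=single,language=Python,caption={Sketch of the word recovery attack},label=lst:recoverwords]
import numpy as np

def recover_words(b0, b1, vocab):
    # b0, b1: output-layer bias vectors of the global model (theta_0) and of
    # the client update (theta_1), each of length V = 9502.
    d = b1 - b0
    indices = np.where(d < 0)[0]   # negative entries mark the typed words
    return [vocab[i] for i in indices]
\end{lstlisting}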
\subsection{Reconstructing Sentences} \label{sec:rs}
The attack described previously retrieves the words typed, but gives no indication of the order in which they occurred. To reveal this order, we ask the model\footnote{It is perhaps worth noting that we
studied a variety of reconstruction attacks, e.g, using Monte Carlo
Tree Search to perform a smart search over all word sequences, but
found the attack method described here to be simple, efficient and
highly effective}.
The basic idea is that after running multiple rounds of gradient descent on the local training data, the local model is ``tuned'' to the local data in the sense that when it is presented with the first words of a sentence from the local training data, the model's next word prediction will tend to match the training data and so we can bootstrap reconstruction of the full training data text.
In more detail, the $t$'th input word is represented by a vector $x_t\in\{0,1\}^V$, with all elements zero apart from the element corresponding to the index of the word in the dictionary. The output $y_{t+1}\in[0,1]^V$ from the model after seeing input words $x_0,\dots,x_t$ is a probability distribution over the dictionary. We begin by selecting $x_0$ equal to the start of sentence token \texttt{<S>} and $x_1$ equal to the first word from our set of reconstructed words, then ask the model to generate
$y_2=Pr(x_{2} | x_0,x_1; \theta_1)$. We set all elements of $y_2$ that are not in the set of reconstructed words to zero, since we know that these were not part of the local training data, renormalise $y_2$ so that its elements sum to one, and then select the most likely next word as $x_2$. We now repeat this process for
$y_3=Pr(x_{3} | x_0,x_1,x_2; \theta_1)$, and so on, until a complete sentence has been generated. We then take the second word from our set of reconstructed words as $x_1$ and repeat to generate a second sentence, and so on.
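Listing \ref{lst:reconstruct} sketches this constrained greedy decoding; \texttt{next\_word\_probs} is assumed to return the model's next word distribution given a prefix and is a stand-in for a forward pass of $\theta_1$.
\begin{lstlisting}[frame=single,language=Python,caption={Sketch of constrained greedy sentence reconstruction},label=lst:reconstruct]
import numpy as np

def reconstruct(first_word, recovered, next_word_probs, start_token, max_len=10):
    # recovered: set of dictionary indices returned by the word recovery attack.
    sentence = [start_token, first_word]
    allowed = np.array(sorted(recovered))
    for _ in range(max_len):
        y = next_word_probs(sentence)       # distribution over the dictionary
        mask = np.zeros_like(y)
        mask[allowed] = y[allowed]          # zero out words not in the recovered set
        if mask.sum() == 0:
            break
        mask /= mask.sum()                  # renormalise
        sentence.append(int(mask.argmax())) # greedy choice of the next word
    return sentence
\end{lstlisting}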
This method generates as many sentences as there are extracted words. This results in a lot more sentences than were originally in the client's training dataset. In order to filter out the unnecessary sentences, we rank each generated sentence by its change in perplexity, from the initial global model $\theta_0$ to the new update $\theta_1$.
The Log-Perplexity of a sequence $x_0,...,x_t$, is defined as \begin{equation*}
PP_\theta(x_0,...,x_t) = \sum_{i=1}^{t}(- \log Pr(x_i | x_0,...,x_{i-1}; \theta)), \end{equation*}
and quantifies how `surprised' the model is by the sequence. Those sentences that report a high perplexity for $\theta_0$ but a comparatively lower one for $\theta_1$ reveal themselves as having been part of the dataset used to train $\theta_1$. Each generated sentence is scored by its percentage change in perplexity:
\begin{equation*}
Score(x_0,...,x_t) = \frac{PP_{\theta_0}(x_0,...,x_t) - PP_{\theta_1}(x_0,...,x_t)}{PP_{\theta_0}(x_0,...,x_t)}. \end{equation*} By selecting the top-$n$ ranked sentences, we select those most likely to have been present in the training dataset.
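Listing \ref{lst:ppscore} gives a small sketch of this ranking step; \texttt{log\_probs} is assumed to return the per-token log-probabilities $\log Pr(x_i \mid x_0,\dots,x_{i-1};\theta)$ under a given model.
\begin{lstlisting}[frame=single,language=Python,caption={Sketch of perplexity-change scoring and ranking},label=lst:ppscore]
def log_perplexity(log_probs, theta, sentence):
    # Sum of negative log-probabilities of each token given its prefix.
    return -sum(log_probs(theta, sentence))

def score(log_probs, theta0, theta1, sentence):
    pp0 = log_perplexity(log_probs, theta0, sentence)
    pp1 = log_perplexity(log_probs, theta1, sentence)
    return (pp0 - pp1) / pp0    # large score => likely in the local training data

def top_n(sentences, log_probs, theta0, theta1, n):
    key = lambda s: score(log_probs, theta0, theta1, s)
    return sorted(sentences, key=key, reverse=True)[:n]
\end{lstlisting}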
\begin{figure*}
\caption{\small Word recovery performance over time with full batch
and mini-batch training. The upper bound on the F1 score for the different
datasets is related to how many words in the training set are also
in the model dictionary. Figure \ref{fig:fbepoch} shows the F1
score over time with $B = n_k$ full batch training. At epoch 1
(FedSGD), the F1 is high and stays constant as you train for
longer (FedAveraging). Using a mini-batch $B = 32$ (Figure
\ref{fig:mbepoch}) has no effect on the attack (in the case where
$n_k = 16, B = 16$).}
\label{fig:wr_nonoise}
\end{figure*}
\section{Performance Of Attacks Against Vanilla Federated Learning} \label{sec:results_vanilla} \subsection{Experimental Setup} We make use of the LSTM RNN extracted from Gboard as the basis of our experiments. The values of its extracted parameters are used as the initial `global' model $\theta_0$, the starting point of the updates we generate. There are several variables that go into producing an update: the number of sentences in the dataset, $n_k$, the number of epochs $E$, the batch size $B$, and the local learning rate $\eta$. Note that when $E = 1$ and $B = n_k$, this corresponds to a FedSGD update, and that any other configuration corresponds to a FedAveraging update. Unless explicitly mentioned otherwise, we keep the client learning rate $\eta = 0.001$ constant for all our experiments. All sample datasets used consist of 4-word-long sentences, mirroring the average length of sentences that the Gboard model was trained with \cite{hard2018federated}.
To evaluate the effectiveness of our attack, the sample datasets we use are taken from a corpus of American English text messages \cite{oday2013text}, which includes short sentences similar to those the Gboard LSTM extracted from the mobile application was trained on. We perform our two attacks on datasets consisting of $n_k = 16, 32, 64, 128,$ and $256$ sentences. Converting a sentence into a sequence of training samples and labels $(\textbf{x}, y)$ is done as follows: \begin{itemize} \item The start-of-sentence token \texttt{<S>} is prepended to the
beginning of the sentence, and each word is then converted to its
corresponding word embedding. This gives a sequence $x_0,x_1,\dots,x_T$ of tokens where $x_0$ is the \texttt{<S>} token.
\item A sentence of length $T$ becomes $T$ training points $((x_0,x_1),y_2)$, $((x_0,x_1,x_2),y_3), \dots, ((x_0,x_1,\dots,x_{T-1}),y_T)$ where label $y_t\in [0,1]^V$ is a probability distribution over the dictionary entries, with all elements zero apart from the element corresponding to the dictionary index of $x_t$
\end{itemize}
Following \cite{hard2018federated} we use categorical cross entropy loss over the output and target labels. After creating the training samples and labels from a dataset of $n_k$ sentences, we train model $\theta_0$ on this training data for a specified number of epochs $E$ with a mini-batch size of $B$, according to Algorithm \ref{alg:fedlearning} to produce the local update $\theta_1$. We then subtract the final layer parameters of the two models to recover the words, and iteratively sample $\theta_1$ according to the methodology described in Section \ref{sec:rs} to reconstruct the sentences, and take the top-$n_k$ ranked sentences by their perplexity score.
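A sketch of this sentence-to-samples conversion is given in Listing \ref{lst:samples}; \texttt{word\_to\_index} is a hypothetical lookup from words to dictionary indices.
\begin{lstlisting}[frame=single,language=Python,caption={Sketch of converting a sentence into training samples},label=lst:samples]
def sentence_to_samples(sentence, word_to_index):
    # Prepend <S>, map each word to its dictionary index (or <UNK>), and emit
    # the (prefix, next-word) pairs ((x0,x1),y2), ..., ((x0,...,x_{T-1}),y_T).
    tokens = ["<S>"] + sentence.split()
    ids = [word_to_index.get(w, word_to_index["<UNK>"]) for w in tokens]
    return [(ids[:t], ids[t]) for t in range(2, len(ids))]
\end{lstlisting}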
\subsection{Metrics} To evaluate performance, we use the F1 score, which balances the precision and recall of word recovery with our attack. We also use a modified version of the Levenshtein ratio, i.e. the normalised Levenshtein distance \cite{marzal1993computation} (the minimum number of word-level edits needed to make one string match another), to evaluate our sentence reconstruction attack. Ranging from 0 to 100, the larger the Levenshtein ratio, the closer the match between our reconstructed and the ground truth sentence.
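One reasonable implementation of such a word-level ratio is sketched in Listing \ref{lst:levratio}; the normalisation used in our evaluation may differ in detail.
\begin{lstlisting}[frame=single,language=Python,caption={Sketch of a word-level Levenshtein ratio},label=lst:levratio]
def levenshtein_ratio(reconstructed, ground_truth):
    # Word-level Levenshtein ratio in [0, 100]; 100 means an exact match.
    a, b = reconstructed.split(), ground_truth.split()
    m, n = len(a), len(b)
    dist = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dist[0] = dist[0], i
        for j in range(1, n + 1):
            cur = dist[j]
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dist[j] = min(dist[j] + 1,      # deletion
                          dist[j - 1] + 1,  # insertion
                          prev + cost)      # substitution
            prev = cur
    return 100 * (1 - dist[n] / max(m, n, 1))
\end{lstlisting}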
\subsection{Measurements}
\subsubsection{Word Recovery Performance} Figure \ref{fig:wr_nonoise} shows the measured performance of our word recovery attack for both the FedSGD and FedAveraging variants of FL for mini-batch/full batch training and as the dataset size and training time are varied. It can be seen that none of these variables have much of an effect on the F1 score achieved by our attack, which remains high across a wide range of conditions. Note that the maximum value of the F1 score is not one for this data but instead a smaller value related to how many of the words in the dataset are also present in the model's vocabulary. Some words, e.g. unique nouns, slang, etc., do not exist in the model's 9502-word dictionary, and our word recovery attack can only extract the \texttt{<UNK>} token in their place, limiting how many words we can actually recover.
\begin{figure*}
\caption{$E = 50$ epochs}
\label{fig:sre50}
\caption{$E = 100$ epochs}
\label{fig:sre100}
\caption{$E = 1000$ epochs}
\label{fig:sre1000}
\caption{\small Sentence reconstruction performance. Each point
corresponds to a different dataset colour coded by it's size. The
y-axis gives the average Levenshtein ratio of the reconstructed
sentences. The x-axis is the F1 score between the tokens used in
the reconstructed sentences and the ground truth. The closer a
point is to the top-right corner, the closer the reconstruction is
to perfect.}
\label{fig:fedavg_sr_nonoise}
\end{figure*}
\subsubsection{Sentence Reconstruction Performance} Figure \ref{fig:fedavg_sr_nonoise} shows the measured performance of our sentence reconstruction attack. Figures \ref{fig:sre50}, \ref{fig:sre100}, and \ref{fig:sre1000} show that as you train for more epochs (50, 100, and 1000 respectively) the quality of the reconstructed sentences improves. This is intuitive as the models trained for longer are more overfit to the data, and so the iterative sampling approach is more likely to return the correct next word given a conditioning prefix.
However, longer training times are not necessary to accurately reconstruct sentences. It can be seen from Figure \ref{fig:fedsgd_sr_nonoise}(b) that even in the FedSGD setting, where the number of epochs $E = 1$, we can sometimes still get high quality sentence reconstructions by modifying the model parameters $\theta_1$ to be $\theta_1 + s(\theta_1 - \theta_0)$, with $s$ being a scaling factor. Since $\theta_1 = \theta_0 -\eta \nabla \ell(\theta_0;b)$ with FedSGD, $$\theta_1 + s(\theta_1 - \theta_0) = \theta_0 -\eta(1+s) \nabla \ell(\theta_0;b)$$ from which it can be seen that scaling factor $s$ effectively increases the gradient descent step size.
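A sketch of this scaling step is shown in Listing \ref{lst:scaleupdate}; the parameters are assumed to be held as a dictionary of arrays.
\begin{lstlisting}[frame=single,language=Python,caption={Sketch of scaling a FedSGD update before reconstruction},label=lst:scaleupdate]
def scale_update(theta0, theta1, s):
    # Amplify the observed update: in the FedSGD setting this is equivalent
    # to increasing the gradient descent step size by a factor (1 + s).
    return {k: theta1[k] + s * (theta1[k] - theta0[k]) for k in theta1}
\end{lstlisting}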
\begin{figure*}
\caption{\small FedSGD sentence reconstruction performance without (a) and with (b) scaling.}
\label{fig:fedsgd_sr_nonoise}
\end{figure*}
\section{Existing Attacks And Their Defences} \subsection{Image Data Reconstruction} Information leakage from the gradients of neural networks used for object detection in images appears to have been initially investigated in~\cite{zhu2019dlg}, which proposed the Deep Leakage from Gradients (DLG) algorithm. An image is input to a neural net and the output is a label specifying an object detected in the image. In DLG a synthetic input is applied to the neural net and the gradients of the model parameters are calculated. These gradients are then compared to the observed model gradients sent to the FL server and gradient descent is used to update the synthetic input so as to minimise the difference between its model gradients and the observed model gradients.
This work was subsequently extended by~\cite{zhao2020idlg,geiping_inverting_2020-1,wang2020sapag,zhu2020rgap,yin2021see,jin2021catastrophic} to improve the stability and performance of the original DLG algorithm, as well as the fidelity of the images it generates. In~\cite{yin2021see,jin2021catastrophic}, changes to the optimisation terms allowed for successful data reconstruction at batch sizes of up to 48 and 100 respectively. Analytical techniques of data extraction~\cite{boenisch2021curious,pan2020theory} benefit from not being as costly to compute as optimisation-based methods. Additionally, these analytical attacks extract the exact ground truth data, whereas DLG and its variants often settle for image reconstructions that include artefacts.
\subsection{Text Data Reconstruction}
Most work on reconstruction attacks has focussed on images and there is relatively little work on text reconstruction. A single text reconstruction example is presented in~\cite{zhu2019dlg}, with no performance results. Probably the closest work to the present paper is~\cite{deng2021tag} which applies a variant of DLG to text reconstruction from gradients of transformer-based models (variants of BERT~\cite{vaswani2017attention}). As already noted, DLG tends to perform poorly with text data and the word recovery rate achieved in~\cite{deng2021tag} is generally no more than 50\%.
In our attack context, DLG can recover words but at much smaller scales than we have demonstrated, and takes longer to find these words. There is also no guarantee that DLG can recover words and place them in the correct order. In \cite{zhu2019dlg}, it is noted that the algorithm requires multiple restarts before successful reconstruction. Additionally, DLG operates by matching the single gradient of a batch of training data, and therefore it only works in the FedSGD setting, where $E = 1$. We show the results of DLG in Listing \ref{lst:sampledlg} on gradients of batches of $B = 1$ and $B = 2$ four-word sentences. In the first example, it took DLG 1000 iterations to produce ``\texttt{<S> how are venue}'', and 1500 iterations to produce ``\texttt{<S> how are sure cow}, \texttt{<S> haha where are Tell van}''. These reconstructions include some of the original words, but recovery is not as precise as our attack, and takes orders of magnitude longer to carry out.
\begin{lstlisting}[frame=single,caption=Original and reconstructed sentences by DLG,label=lst:sampledlg]
<S> how are you
<S> how are venue
<S> how are you doing
<S> how are sure cow
<S> where are you going
<S> haha where are Tell van \end{lstlisting}
Work has also been carried out on membership attacks against text data models such as GPT2 i.e. given a trained model the attack seeks to infer one or more training data points. See for example~\cite{carlini2019secretsharer,carlini2021extracting}. But, as already noted, such attacks are not the focus of the present paper.
\subsection{Proposed Defences} Several defences have been proposed to prevent data leakage in FL.
Abadi et al. \cite{abadi2016deep} proposed Differentially Private Stochastic Gradient Descent (DP-SGD), which clips stochastic gradient descent updates and adds Gaussian noise at each iteration. This aims to defend against membership attacks against neural networks, rather than the reconstruction attacks that we consider here. In~\cite{brendan2018learning} it was applied to train a next word prediction RNN motivated by mobile keyboard applications, again with a focus on membership attacks. Recently, the same team at Google proposed DP-FTRL~\cite{kairouz2021practical} which avoids the sampling step in DP-SGD.
Secure Aggregation is a multi-party protocol proposed in 2016 by~\cite{bonawitz2016practical} as a defence against data leakage from the data uploaded by clients to an FL server. In this setting the central server only has access to the sum of the client updates, and not to the individual updates. However, this approach still requires clients to trust that the PKI infrastructure is honest, since dishonest PKI infrastructure allows the server to perform a sybil attack (see Section 6.2 in~\cite{bonawitz2016practical}) to reveal the data sent by an individual client. When both the FL server and the PKI infrastructure are operated by Google, Secure Aggregation requires users to trust Google servers to be honest, and so, from an attack capability point of view, offers no security benefit. Recent work by Pasquini et al. \cite{pasquini2021eluding} has also shown that by distributing different models to each client a dishonest server can recover individual model updates. As a mitigation they propose adding local noise to client updates to obtain a form of local differential privacy. We note that despite the early deployment of FL in production systems such as GBoard, to the best of our knowledge there does not exist a real-world deployment of secure aggregation. This is also true for homomorphically encrypted FL, and FL using Trusted Execution Environments (TEEs).
\section{Performance Of Our Attacks Against Federated Learning with Local DP} Typically, when differential privacy is used with FL, noise is added by the server to the aggregate update from multiple clients, i.e. no noise is added to an update before it leaves a device. This corresponds to the situation considered in Section \ref{sec:results_vanilla}. In this section we evaluate how local differential privacy, that is, noise added either during local training (DPSGD) or to the final model parameters $\theta_1$ before their transmission to the coordinating FL server, affects the performance of both our word recovery and sentence reconstruction attacks. \begin{algorithm}
\caption{Local DPSGD}\label{alg:localdpsgd}
\KwProc{clientUpdateDPSGD}{
$\theta_1 \gets \theta_0$\;
$\mathcal{B} \gets (\mbox{split dataset into batches of size } B)$\;
$\mbox{\textbf{for} local epoch \textit{i} from 1 to $E$ \textbf{do}}$\;
$\ \ \mbox{\textbf{for} batch \textit{b} $\in \mathcal{B}$ \textbf{do}}$\;
$\ \ \ \ \theta_1 \gets \theta_1 - \eta\nabla \ell(\theta_1;b) + \eta\mathcal{N}(0, \sigma)$\;
\Return $\theta_1$\;
} \end{algorithm}
Algorithm \ref{alg:localdpsgd} outlines the procedure for DPSGD-like local training, where Gaussian noise of mean $0$ and standard deviation $\sigma$ is added along with each gradient update. Algorithm \ref{alg:singlenoise} details the typical FL client update procedure but adds Gaussian noise to the final model $\theta_1$ before it is returned to the server. In our experiments, everything else as described in Section \ref{sec:results_vanilla} is kept the same.
\begin{algorithm}
\caption{Local Single Noise Addition}\label{alg:singlenoise}
\KwProc{clientUpdateSingleNoise}{
$\theta_1 \gets \theta_0$\;
$\mathcal{B} \gets (\mbox{split dataset into batches of size } B)$\;
$\mbox{\textbf{for} local epoch \textit{i} from 1 to $E$ \textbf{do}}$\;
$\ \ \mbox{\textbf{for} batch \textit{b} $\in \mathcal{B}$ \textbf{do}}$\;
$\ \ \ \ \theta_1 \gets \theta_1 - \eta\nabla \ell(\theta_1;b)$\;
\Return $\theta_1 + \mathcal{N}(0, \sigma)$\;
} \end{algorithm}
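The two local noise-addition procedures can be summarised in a few lines. The sketch below is a NumPy illustration of the pseudo-code above, with a toy least-squares gradient standing in for $\nabla\ell$; it is not the code used in our experiments, and the data, dimensions and hyperparameter values are placeholders.
\begin{lstlisting}[frame=single]
import numpy as np
rng = np.random.default_rng(0)

def grad(theta, X, y):
    # Toy least-squares gradient standing in for the model gradient.
    return X.T @ (X @ theta - y) / len(y)

def client_update_dpsgd(theta0, X, y, E, B, eta, sigma):
    # Gaussian noise added with every local gradient step (DPSGD-like).
    theta = theta0.copy()
    for _ in range(E):
        for s in range(0, len(y), B):
            theta = (theta - eta * grad(theta, X[s:s+B], y[s:s+B])
                           + eta * rng.normal(0.0, sigma, theta.shape))
    return theta

def client_update_single_noise(theta0, X, y, E, B, eta, sigma):
    # Plain local training, noise added once to the final parameters.
    theta = theta0.copy()
    for _ in range(E):
        for s in range(0, len(y), B):
            theta = theta - eta * grad(theta, X[s:s+B], y[s:s+B])
    return theta + rng.normal(0.0, sigma, theta.shape)

X, y = rng.normal(size=(256, 10)), rng.normal(size=256)
theta1 = client_update_dpsgd(np.zeros(10), X, y, E=5, B=32, eta=0.001, sigma=0.001)
\end{lstlisting}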
\begin{figure*}
\caption{\small Word recovery behaviour when Gaussian noise is added to local FL updates: (a) vanilla word recovery performance, (b) disparity of
magnitudes between those words that were present in the dataset
and those 'noisily' flipped negative, (c) word recovery performance when filtering is used.}
\label{fig:removing-noisy}
\end{figure*} \begin{figure*}
\caption{\small Word recovery results for two local DP
methods. Figure \ref{fig:fedavgsinglenoise} shows how noise added
to the final model parameters affects the attack for different
training times. With DPSGD-like training, noise levels of up to
$\sigma = 0.1$ are manageable with our magnitude threshold trick,
resulting in F1 scores close to those had we not added any noise
at all. With FedSGD updates, for noise of up to $\sigma = 0.0001$
added to the final parameters, we can still recover words to a
high degree of precision and recall. }
\label{fig:noiseresults}
\end{figure*}
\subsection{Word Recovery Performance}
Figure \ref{fig:nocutoff} shows the performance of our word recovery attack against DPSGD-like local training for different levels of $\sigma$. For noise levels of $\sigma = 0.001$ or greater, it can be seen that the F1 score drops significantly. What is happening is that the added noise introduces more negative values in the difference of the final layer parameters and so results in our attack extracting more words than actually occurred in the dataset, destroying its precision. However, one can eliminate most of these ``noisily'' added words by simple inspection. Figure \ref{fig:mags1} graphs the sorted magnitudes of the negative values in the difference between the final layer parameters of $\theta_1$ and $\theta_0$, after DPSGD-like training with $B = 32, n_k = 256, \mbox{ and } \sigma = 0.001$. We can see that the more epochs the model is trained for, the more words are extracted via our attack. Of the roughly 600 words extracted after 1000 epochs of training, only about 300 were actually present in the dataset. On this graph, the words with higher magnitudes correspond to the ground truth words. We can therefore simply discard any extracted words whose magnitude falls below a specified threshold. This drastically improves the performance of the attack; see Figure \ref{fig:cutoff}. It can be seen that even for $\sigma = 0.1$, we now get word recovery results close to those obtained when we added no noise at all.
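The filtering step is straightforward to implement. The sketch below is illustrative: it assumes the attack works from one scalar per vocabulary token in the final layer (e.g. the output bias), and the function and variable names are placeholders rather than our implementation.
\begin{lstlisting}[frame=single]
import numpy as np

def recover_words(final0, final1, vocab, cutoff=0.0):
    # final0, final1: one final-layer value per vocabulary token, before and
    # after local training. Negative differences indicate words present in the
    # dataset; a magnitude cut-off removes most of the "noisily" flipped tokens.
    diff = final1 - final0
    mags = np.where(diff < 0, -diff, 0.0)
    keep = np.argsort(mags)[::-1]          # largest magnitudes first
    return [vocab[i] for i in keep if mags[i] > cutoff]
\end{lstlisting}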
Figure \ref{fig:fedavgsinglenoise} shows the performance of our word recovery attack when noise is added to the local model parameters $\theta_1$ in the FedAveraging setting, with $n_k = 256, \mbox{ and } B = 32$. When $\sigma \ge 0.01$, it can be seen that the performance drops drastically\footnote{Note that in DPSGD the added noise is multiplied by the learning rate $\eta$, and so this factor needs to be taken into account when comparing the $\sigma$ values used in DPSGD above and with single noise addition. This means added noise with standard deviation $\sigma$ for DPSGD corresponds roughly to a standard deviation of $\eta \sqrt{EB}\sigma$ with single noise addition. For $\eta=0.001$, $E=1000$, $B=32$, $\sigma=0.1$ the corresponding single noise addition standard deviation is 0.018.}, even when we use the magnitude threshold trick described previously. With FedSGD (Figures \ref{fig:fedsgddpsgd} and \ref{fig:fedsgdsinglenoise}), we see that with DPSGD-like training, these levels of noise are manageable, but for single noise addition, there are only so many words that are recoverable before being lost in the comparatively large amounts of noise added.
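As a quick check of the conversion quoted in the footnote:
\begin{lstlisting}[frame=single]
import math
eta, E, B, sigma = 0.001, 1000, 32, 0.1
print(eta * math.sqrt(E * B) * sigma)   # ~0.0179, i.e. the 0.018 quoted above
\end{lstlisting}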
For comparison, in the FL literature on differential privacy, the addition of Gaussian noise with standard deviation no more than around 0.001 (and often much less) is typically considered, and is only added after the update has been transmitted to the coordinating FL server.
\subsection{Sentence Reconstruction Performance} Figure \ref{fig:sr_localdp} shows the measured sentence reconstruction performance with both DPSGD-like training (Figures \ref{fig:levendpsgd0_01} and \ref{fig:levendpsgd0_1}) and when noise is added to the final parameters of the model (Figures \ref{fig:sn0_01} and \ref{fig:sn0_1}). By removing the noisily added words and running our sentence reconstruction attack, we get results close to those had we not added any noise, for noise levels up to $\sigma = 0.1$. For the single noise addition method, as these levels of noise are not calibrated, $\sigma = 0.1$ is enough to destroy the quality of reconstructions; however, these levels of noise also destroy any model utility.
\begin{figure*}\label{fig:levendpsgd0_01}
\label{fig:levendpsgd0_1}
\label{fig:sn0_01}
\label{fig:sn0_1}
\label{fig:sr_localdp}
\end{figure*}
\section{Additional Material} The code for all of the attacks here, the LSTM model and the datasets used are all publicly available on github \href{https://github.com/namilus/nwp-fedlearning}{\texttt{here}}.
\section{Summary And Conclusions} In this paper we introduce two local reconstruction attacks against federated learning when used to train natural language text models. We find that previously proposed attacks (DLG and its variants) targeting image data are ineffective for text data and so new methods of attack tailored to text data are necessary. Our attacks are simple to carry out, efficient, and highly effective. We illustrate their effectiveness against the next word prediction model used in Google's GBoard app, a widely used mobile keyboard app (with $>5$ billion downloads) that has been an early adopter of federated learning for production use. We demonstrate that the words a user types on their mobile handset, e.g. when sending text messages, can be recovered with high accuracy under a wide range of conditions and that counter-measures such as the use of mini-batches and the addition of local noise are ineffective. We also show that the word order (and so the actual sentences typed) can be reconstructed with high fidelity. This raises obvious privacy concerns, particularly since GBoard is in production use.
Secure multi-party computation methods such as Secure Aggregation and also methods such as Homomorphic Encryption and Trusted Execution Environments are potential defences that can improve privacy, but these can be difficult to implement in practice. Secure Aggregation requires users to trust that the server is honest, despite the fact that FL aims to avoid the need for such trust. Homomorphic Encryption implementations that are sufficiently efficient to allow large-scale production use are currently lacking.
On a more positive note, the privacy situation may not be quite as bad as it seems given these reconstruction attacks. Firstly, it is not the raw sentences typed by a user that are reconstructed in our attacks but rather the sentences after they have been mapped to tokens in a text model dictionary. Words which are not in the dictionary are mapped to a special \texttt{<UNK>} token. This means that the reconstructed text is effectively redacted, with words not in the dictionary having been masked out. This suggests that a fruitful direction for future privacy research on FL for natural language models may well lie in taking a closer look at the specification of the dictionary used. Secondly, we also note that changing from a word-based text model to a character-based one would likely make our attacks much harder to perform.
\appendices
\end{document} | arXiv |
Contemporary Biostatistics with Biopharmaceutical Applications, pp 61–90
A Powerful Retrospective Multiple Variant Association Test for Quantitative Traits by Borrowing Strength from Complex Genotypic Correlations
Xiaowei Wu
Part of the ICSA Book Series in Statistics (ICSABSS)
High-throughput sequencing has often been used in pedigree-based studies to identify genetic risk factors associated with complex traits. The genotype data in such studies exhibit complex correlations attributed to both familial relatedness and linkage disequilibrium. Accounting for these genotypic correlations can improve power for assessing the contribution of multiple genomic loci. However, due to model restrictions, existing multiple variant association testing methods cannot make efficient use of this correlation information. Recognizing this limitation, we develop PC-ABT, a novel principal-component-based adaptive-weight burden test for gene-based association mapping of quantitative traits. This method uses a retrospective score test to incorporate genotypic correlations, and employs "data-driven" weights to obtain a maximized test statistic. In addition, by adjusting the number of principal components, which essentially reflects the effective number of tests in the target gene region, PC-ABT is able to reduce the degrees of freedom of the null distribution and so improve power. Simulation studies show that PC-ABT is generally more powerful than other multiple variant tests that allow related individuals. We illustrate the application of PC-ABT by a gene-based association analysis of systolic blood pressure using data from the NHLBI "Grand Opportunity" Exome Sequencing Project.
This research was funded by 4-VA, a collaborative partnership for advancing the Commonwealth of Virginia.
Appendix 1: Description of MASTOR and Theoretical Justification of the Null Distribution of SABT
MASTOR (Jakobsdottir and McPeek 2013) is a retrospective, quasi-likelihood score test for testing single-variant association with a quantitative trait in samples with related individuals. Considering a biallelic genetic variant X of interest (an example in the general setting described in Sect. 4.2.1 is to let X = Gj, 1 ≤ j ≤ m), the MASTOR statistic (for complete data) takes the form
$$\displaystyle \begin{aligned} S_{MAS}=\frac{(\boldsymbol{V}^T\boldsymbol{X})^2}{(\boldsymbol{V}^T\boldsymbol{\varPhi}\boldsymbol{V})\widehat{\sigma}_X^2}. \end{aligned}$$
In this expression, \(\boldsymbol {V}=\widehat {\boldsymbol {\varSigma }}_0^{-1}(\boldsymbol {Y}-\boldsymbol {Z}\widehat {\boldsymbol {\beta }}_0)\) is the transformed phenotypic residual obtained from the null model Y = Zβ0 + 𝜖, 𝜖 ∼ N(0, Σ0), where β0 represents the coefficient of regressing quantitative trait Y on non-genetic covariates Z, and Σ0 is the trait covariance matrix under the null, usually with a variance component form \(\sigma _e^2\boldsymbol {I}+\sigma _a^2\boldsymbol {\varPhi }\). The variance of variant X is denoted by \(\sigma _X^2\). When Hardy-Weinberg equilibrium is assumed for this variant, \(\sigma _X^2\) can be estimated by \(\widehat {\sigma }_X^2=\widehat {p}(1-\widehat {p})/2\), where \(\widehat {p}=(\boldsymbol {1}^T\boldsymbol {\varPhi }^{-1}\boldsymbol {1})^{-1}\boldsymbol {1}^T\boldsymbol {\varPhi }^{-1}\boldsymbol {X}\) is the best linear unbiased estimator (McPeek et al. 2004) of the allele frequency p of X, and 1 denotes a vector with every element equal to 1.
Now in Sect. 4.2.2, we have obtained the ABT statistic
$$\displaystyle \begin{aligned} S_{ABT}=\frac{\boldsymbol{V}^T\boldsymbol{G}(\widehat{\boldsymbol{D}}\boldsymbol{R}\widehat{\boldsymbol{D}})^{-1}\boldsymbol{G}^T\boldsymbol{V}}{\boldsymbol{V}^T\boldsymbol{\varPhi}\boldsymbol{V}}. \end{aligned}$$
Let \(\widetilde {\boldsymbol {G}}=\boldsymbol {G}(\widehat {\boldsymbol {D}}\boldsymbol {R}\widehat {\boldsymbol {D}})^{-1/2}\) be a decorrelated version of the genotype matrix in which the across-column covariance has been transformed to identity, and let \(\widetilde {\boldsymbol {G}}_j\) be the jth column of \(\widetilde {\boldsymbol {G}}\). By linear algebra,
$$\displaystyle \begin{aligned} S_{ABT}=\sum_{j=1}^m\frac{\left(\boldsymbol{V}^T\widetilde{\boldsymbol{G}}_j\right)^2}{\boldsymbol{V}^T\boldsymbol{\varPhi}\boldsymbol{V}}. \end{aligned}$$
This is essentially the summation of m independent MASTOR statistics (observing the uncorrelatedness and joint normality of \(\boldsymbol {V}^T\widetilde {\boldsymbol {G}}_j\)), each formulated from a transformed variant \(\widetilde {\boldsymbol {G}}_j\) (note that the variance estimate is 1 after transformation). Hence SABT follows a \(\chi _m^2\) distribution under the null hypothesis.
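A small numerical check of this identity is sketched below in Python/NumPy. The toy genotypes, the identity kinship matrix and the sample-standard-deviation stand-in for D-hat are illustrative assumptions, not the PC-ABT implementation; the point is only that the quadratic form and its decorrelated sum-of-squares form agree.

import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 5                                         # individuals, variants (toy sizes)
G = rng.integers(0, 3, size=(n, m)).astype(float)    # toy genotype matrix
V = rng.normal(size=n)                               # transformed phenotypic residual
Phi = np.eye(n)                                      # kinship matrix (unrelated individuals)

D = np.diag(np.sqrt(G.var(axis=0, ddof=1)))          # illustrative stand-in for D-hat
R = np.corrcoef(G, rowvar=False)                     # LD correlation matrix
W = D @ R @ D

S_abt = V @ G @ np.linalg.inv(W) @ G.T @ V / (V @ Phi @ V)

# Same statistic via the decorrelated genotypes G-tilde = G W^(-1/2)
evals, evecs = np.linalg.eigh(W)
G_tilde = G @ (evecs @ np.diag(evals ** -0.5) @ evecs.T)
S_sum = sum((V @ G_tilde[:, j]) ** 2 for j in range(m)) / (V @ Phi @ V)

print(np.isclose(S_abt, S_sum))                      # True: the two forms agree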
Appendix 2: Additional Simulation Results Show That the Data-Driven Weights W∗ Is Adaptive to the Direction of True Genetic Effects
In order to understand how the data-driven weights W∗ (defined in Eq. (4.5) of the main text) help gain power in association testing, we compare the signs of W∗ to those of the genetic effects γ using the simulated data sets in the power analysis. Figure 4.5, Panels a–d, present boxplots of the weights W∗ based on 5000 simulated data replicates in Scenario S2 with genetic effect Setting III, for LD Configurations C1–C4, respectively. We note that, in this setting, the first 30% of the components of γ are set to be positive, the next 30% are negative, and the remaining 40% are zero. The boxplots clearly demonstrate that, on average, the weights W∗ are able to track the direction of the true genetic effects, thus resulting in a stronger association for the weighted-sum genetic score.
Fig. 4.5
Boxplot of W∗ based on 5000 simulated data replicates in Scenario S2 with genetic effect Setting III. The adaptive weights of risk, protective, and neutral variants are marked with red, green, and white color, respectively. Panel a: Configuration C1; Panel b: Configuration C2; Panel c: Configuration C3; Panel d: Configuration C4
Appendix 3: Additional Simulation Results Show the Relation Between the ABT Statistic and the famSKAT Statistic
We show in Fig. 4.6, Panels a–d, the scatter plots of the numerator of the ABT statistic vs. the famSKAT statistic based on 5000 simulated data replicates in Scenario S3 with genetic effect Setting II, for LD Configurations C1–C4, respectively. We observe that, when the LD correlation is negligible (Panel a), the numerator of the ABT statistic behaves similarly to the famSKAT statistic because in Eq. (4.6) of the main text, \((\widehat {\boldsymbol {D}}\boldsymbol {R}\widehat {\boldsymbol {D}})^{-1}\) is equivalent to the Madsen-Browning weights used in calculating the famSKAT statistic. As the LD correlation increases (Panels b, c, and d), the two statistics become less and less consistent because in calculating the famSKAT statistic, the Madsen-Browning weights only depend on individual variants, whereas in calculating the ABT statistic, the weight of an individual variant statistic is also affected by other variants on linked sites, as seen from the weight matrix \((\widehat {\boldsymbol {D}}\boldsymbol {R}\widehat {\boldsymbol {D}})^{-1}\) in Eq. (4.6) of the main text.
Comparison between SfamSKAT and the numerator of SABT based on 5000 simulated data replicates in Scenario S3 with genetic effect Setting II. Panel a: Configuration C1; Panel b: Configuration C2; Panel c: Configuration C3; Panel d: Configuration C4
Appendix 4: Additional Simulation Results to Validate the Asymptotic Null Distribution of SPC-ABT via Permutation Based Approach
We perform 1000 permutations on the simulated data under Scenario S1 (unrelated individuals and common variants) and configuration C3 (strong LD with η = 0.7). Figure 4.7 shows the asymptotic null distributions of SPC-ABT for the number of principal components q = 1, 25, and 50, together with the corresponding empirical CDFs obtained via permutation. Note that two different asymptotic distributions are shown in this figure: one is \(\chi _q^2\), the other is a mixture of \(\chi _1^2\) distributions, obtained by applying the adaptive weights W# in the famSKAT method. In Fig. 4.8, panels a, b, and c, we compare, in log scale, the empirical p-values from the permutation-based approach against the p-values from the asymptotic distribution (mixture of \(\chi _1^2\)) for the number of principal components q = 1, 25, and 50, respectively. Panel d of Fig. 4.8 further reports the correlation between −log10(empirical p-values via permutation) and −log10(p-values based on the asymptotic distribution) for the number of principal components q = 1, 2, ⋯ , 50.
Asymptotic null distribution and permutation based empirical distribution of SPC-ABT (for the number of principal components q = 1, 25, and 50) under Scenario S1 and LD configuration C3
Comparing empirical p-values via permutation and p-values based on the asymptotic distribution. Panel a: scatter plot in log scale for the number of principal components q = 1; Panel b: for q = 25; Panel c: for q = 50; Panel d: correlation between −log10(empirical p-values via permutation) and −log10(p-values based on the asymptotic distribution) for q = 1, 2, ⋯ , 50
Appendix 5: Additional Simulation Results for Type I Error Evaluation
We provide additional simulation results for type I error evaluation. Table 4.4 lists the empirical type I error rates of five testing methods: FBT, famSKAT, ABT, MONSTER, and PC-ABT for the combinations of four scenarios (S1, S2, S3, and S4) and four LD configurations (C1, C2, C3, and C4), based on 20,000 simulated data replicates. Figures 4.9, 4.10, 4.11, and 4.12 show the Q-Q plots of the PC-ABT p-values under the null hypothesis for Scenarios S1, S2, S3, and S4, respectively. The number of principal components is chosen to guarantee that the total percent variance explained (PVE) >90%.
Q-Q plots of the PC-ABT p-values under the null hypothesis for Scenario S1 based on 20,000 simulated data replicates. In each simulation, the number of principal components is chosen such that PVE >90%
Fig. 4.10
Table 4.4 Empirical type I error of five testing methods (column headings include # of PCs, α = 0.001, α = 0.01; methods include famSKAT and PC-ABT∗)
The type I error rate estimates are calculated as the proportion of p-values smaller than nominal under the null hypothesis based on 20,000 simulated data replicates. PC-ABT∗: the number of principal components is chosen to guarantee that the total percent variance explained (PVE) >90%
Asimit, J., Zeggini, E.: Rare variant association analysis methods for complex traits. Ann. Rev. Genet. 44, 293–308 (2010)
Berthelot, C.C., et al.: Changes in PTGS1 and ALOX12 gene expression in peripheral blood mononuclear cells are associated with changes in arachidonic acid, oxylipins, and oxylipin/fatty acid ratios in response to Omega-3 fatty acid supplementation. PLoS One 10(12), e0144996 (2015)
Chen, H., Meigs, J.B., Dupuis, J.: Sequence kernel association test for quantitative traits in family samples. Genet. Epidemiol. 37(2), 196–204 (2013)
Cui, J.S., Hopper, J.L., Harrap, S.B.: Antihypertensive treatments obscure familial contributions to blood pressure variation. Hypertension 41(2), 207–210 (2003)
Derkach, A., Lawless, J.F., Sun, L.: Assessment of pooled association tests for rare variants within a unified framework. Stat. Sci. 29(2), 302–321 (2013)
Fang, S., Zhang, S., Sha, Q.: Detecting association of rare variants by testing an optimally weighted combination of variants for quantitative traits in general families. Ann. Hum. Genet. 77(6), 524–534 (2014)
Fuentes, M.: Testing for separability of spatial-temporal covariance functions. J. Stat. Plan. Inference 136, 447–466 (2006)
Gauderman, W.J., Murcray, C., Gilliland, F., Conti, D.V.: Testing association between disease and multiple SNPs in a candidate gene. Genet. Epidemiol. 31(5), 383–395 (2007)
Han, F., Pan, W.: A data-adaptive sum test for disease association with multiple common or rare variants. Hum. Hered. 70(1), 42–54 (2010)
Jakobsdottir, J., McPeek, M.S.: Mastor: Mixed-model association mapping of quantitative traits in samples with related individuals. Am. J. Hum. Genet. 92, 652–666 (2013)
Jiang, D., McPeek, M.S.: Robust rare variant association testing for quantitative traits in samples with related individuals. Genet. Epidemiol. 38(1), 1–20 (2013)
Ladouceur, M., Dastani, Z., Aulchenko, Y.S., Greenwood, C.M., Richards, J.B.: The empirical power of rare variant association methods: Results from Sanger sequencing in 1998 individuals. PLoS Genet. 8(2), e1002496 (2012)
Lee, S., Emond, M.J., Bamshad, M.J., Barnes, K.C., Rieder, M.J., Nickerson, D.A., NHLBI GO Exome Sequencing Project-ESP Lung Project Team, Christiani, D.C., Wurfel, M.M., Lin, X.: Optimal unified approach for rare-variant association testing with application to small-sample case-control whole-exome sequencing studies. Am. J. Hum. Genet. 91, 224–237 (2012)
Lee, S., Wu, M.C., Lin, X.: Optimal tests for rare variant effects in sequencing association studies. Biostatistics 13(4), 762–775 (2013)
Li, Q.H., Lagakos, S.W.: On the relationship between directional and omnibus statistical tests. Scand. J. Stat. 33, 239–246 (2006)
Li, B., Leal, S.M.: Methods for detecting associations with rare variants for common diseases: application to analysis of sequence data. Am. J. Hum. Genet. 83, 311–321 (2008)
Li, M.X., Gui, H.S., Kwan, J.S., Sham, P.C.: GATES: a rapid and powerful gene-based association test using extended Simes procedure. Am. J. Hum. Genet. 88, 283–293 (2011)
Lin, D.Y., Tang, Z.Z.: A general framework for detecting disease associations with rare variants in sequencing studies. Am. J. Hum. Genet. 89, 354–367 (2011)
Liu, D.J., Leal, S.M.: A novel adaptive method for the analysis of next-generation sequencing data to detect complex trait associations with rare variants due to gene main effects and interactions. PLoS Genet. 6, e1001156 (2010)
Ma, L., Clark, A.G., Keinan, A.: Gene-based testing of interactions in association studies of quantitative traits. PLoS Genet. 9, e1003321 (2013)
Madsen, B.E., Browning, S.R.: A groupwise association test for rare mutations using a weighted sum statistic. PLoS Genet. 5, e1000384 (2009)
Maier, K.G., Ruhle, B., Stein, J.J., Gentile, K.L., Middleton, F.A., Gahtan, V.: Thrombospondin-1 differentially regulates microRNAs in vascular smooth muscle cells. Mol. Cell. Biochem. 412(1–2), 111–117 (2016)
Manolio, T.A.: Genomewide association studies and assessment of the risk of disease. N. Engl. J. Med. 363(2), 166–176 (2010)
McCarthy, M.I., Abecasis, G.R., Cardon, L.R., Goldstein, D.B., Little, J., Ioannidis, J.P., Hirschhorn, J.N.: Genome-wide association studies for complex traits: consensus, uncertainty and challenges. Nat. Rev. Genet. 9(5), 356–369 (2008)
McPeek, M.S.: BLUP genotype imputation for case control association testing with related individuals and missing data. J. Comp. Biol. 19(6), 756–765 (2012)
McPeek, M.S., Wu, X., Ober, C.: Best linear unbiased allele-frequency estimation in complex pedigrees. Biometrics 60, 359–367 (2004)
Morgenthaler, S., Thilly, W.G.: A strategy to discover genes that carry multi-allelic or mono-allelic risk for common diseases: a cohort allelic sums test (CAST). Mutat. Res. 615, 28–56 (2007)
Neale, B.M., Sham, P.C.: The future of association studies: Gene-based analysis and replication. Am. J. Hum. Genet. 75, 353–362 (2004)
Price, A.L., Kryukov, G.V., de Bakker, P.I., Purcell, S.M., Staples, J., Wei, L.J., Sunyaev, S.R.: Pooled association tests for rare variants in exon-resequencing studies. Am. J. Hum. Genet. 86, 832–838 (2010)
Price, A.L., Zaitlen, N.A., Reich, D., Patterson, N.: New approaches to population stratification in genome-wide association studies. Nat. Rev. Genet. 11(7), 459–463 (2011)
Schaid, D.J., McDonnell, S.K., Sinnwell, J.P., Thibodeau, S.M.: Multiple genetic variant association testing by collapsing and kernel methods with pedigree or population structured data. Genet. Epidemiol. 37(5), 409–418 (2013)
Schifano, E.D., Epstein, M.P., Bielak, L.F., Jhun, M.A., Kardia, S.L., Peyser, P.A., Lin, X.: SNP set association analysis for familial data. Genet. Epidemiol. 36(8), 797–810 (2012)
Sha, Q., Wang, X., Wang, X., Zhang, S.: Detecting association of rare and common variants by testing an optimally weighted combination of variants. Genet. Epidemiol. 36(6), 561–571 (2012)
Sha, Q., Zhang, S.: A novel test for testing the optimally weighted combination of rare and common variants based on data of parents and affected children. Genet. Epidemiol. 38(2), 135–143 (2014)
Splansky, G.L., et al.: The third generation cohort of the National Heart, Lung, and Blood Institute's Framingham Heart Study: design, recruitment, and initial examination. Am. J. Epidemiol. 165(11), 1328–1335 (2007)
Srivastava, M.S., von Rosen, T., von Rosen, D.: Models with a Kronecker product covariance structure: estimation and testing. Math. Methods Stat. 17(4), 357–370 (2008)
The 1000 Genomes Project Consortium: A map of human genome variation from population-scale sequencing. Nature 467, 1061–1073 (2010)
Thornton, T., McPeek, M.S.: Case-control association testing with related individuals: a more powerful quasi-likelihood score test. Am. J. Hum. Genet. 81, 321–337 (2007)
Thornton, T., McPeek, M.S.: ROADTRIPS: Case-control association testing with partially or completely unknown population and pedigree structure. Am. J. Hum. Genet. 86, 172–184 (2010)
Tobin, M.D., Sheehan, N.A., Scurrah, K.J., Burton, P.R.: Adjusting for treatment effects in studies of quantitative traits: antihypertensive therapy and systolic blood pressure. Stat. Med. 24, 2911–2935 (2005)
Wang, Y., Chen, Y.H., Yang, Q.: Joint rare variant association test of the average and individual effects for sequencing studies. PLoS One 7, e32485 (2012)
Wang, X., Morris, N.J., Zhu, X., Elston, R.C.: A variance component based multi-marker association test using family and unrelated data. BMC Genet. 14, 17 (2013)
Wang, X., Lee, S., Zhu, X., Redline, S., Lin, X.: GEE-based SNP set association test for continuous and discrete traits in family based association studies. Genet. Epidemiol. 37(8), 778–786 (2014)
Weisinger, G., Limor, R., Marcus-Perlman, Y., Knoll, E., Kohen, F., Schinder, V., Firer, M., Stern, N.: 12S-lipoxygenase protein associates with alpha-actin fibers in human umbilical artery vascular smooth muscle cells. Biochem. Biophys. Res. Commun. 356(3), 554–560 (2007)
Wu, M.C., Kraft, P., Epstein, M.P., Taylor, D.M., Chanock, S.J., Hunter, D.J., Lin, X.: Powerful SNP-set analysis for case-control genome-wide association studies. Am. J. Hum. Genet. 86, 929–942 (2010)
Wu, M.C., Lee, S., Cai, T., Li, Y., Boehnke, M., Lin, X.: Rare-variant association testing for sequencing data with the sequence kernel association test. Am. J. Hum. Genet. 89, 82–93 (2011)
Zhu, Y., Xiong, M.: Family-based association studies for next-generation sequencing. Am. J. Hum. Genet. 90, 1028–1045 (2012)
Department of Statistics, Virginia Tech, Blacksburg, USA
Wu X. (2019) A Powerful Retrospective Multiple Variant Association Test for Quantitative Traits by Borrowing Strength from Complex Genotypic Correlations. In: Zhang L., Chen DG., Jiang H., Li G., Quan H. (eds) Contemporary Biostatistics with Biopharmaceutical Applications. ICSA Book Series in Statistics. Springer, Cham | CommonCrawl |
3D Viewing: the Pinhole Camera Model
How a pinhole camera works (part 1)
A Virtual Pinhole Camera Model
Implementing a Virtual Pinhole Camera
In the first chapter of this lesson, we presented the principle of a pinhole camera. In this chapter we will show that the size of the photographic film on which the image is projected, as well as the distance between the hole and the back side of the box, also play an important role in the way a camera delivers images. Remember that one of the possible uses of CGI is to combine CG images with live action footage. We need our virtual camera to deliver the exact same type of images as those delivered by a real camera, so that images produced by both systems can be composited with each other seamlessly. In this chapter, we will use the pinhole camera model again to study the effect of changing the film size and the distance between the photographic paper and the hole on the image captured by the camera. In the next chapters, we will show how these different controls can be integrated into our virtual camera model.
Focal Length, Angle Of View and Field of View
Figure 1: the sphere projected on the image plane becomes bigger as the image plane moves away from the aperture (or smaller when the image plane gets closer to the aperture). This is equivalent to zooming in and out.
Figure 2: the focal length is the distance from the hole where light enters the camera to the image plane.
Figure 3: focal length is one of the parameters that determines the value of the angle of view.
Similarly to real-world cameras, our camera model will need a mechanism to control how much of the scene we see from a given point of view. Let's get back to our pinhole camera. We will call the back face of the camera, the face on which the image of the scene is projected, the image plane. Objects get smaller and a larger portion of the scene is projected on this plane when you move it closer to the aperture: you zoom out. Moving the film plane away from the aperture has the opposite effect; a smaller portion of the scene is captured: you zoom in (as illustrated in figure 1). This feature can be described in two ways: in terms of the distance from the film plane to the aperture (you can change this distance to adjust how much of the scene you see on film); this distance is generally referred to as the focal length or focal distance (figure 2). Or you can see this effect as varying the angle (making it larger or smaller) at the apex of a triangle defined by the aperture and the film edges (figures 3 and 4). This angle is called the angle of view or field of view (AOV and FOV respectively).
Figure 4: the field of view can be defined as the angle of the triangle in the horizontal or vertical plane of the camera. The horizontal field of view varies with the width of the image plane, and the vertical field of view varies with the height of the image plane.
Figure 5: we can use basic trigonometry to find BC if we know both \(\theta\) (which is half the angle of view) and AB (which is the distance from the eye to the canvas).
Note that in 3D, the triangle defining how much we see of the scene can either be expressed by connecting the aperture to the top and bottom edges of the film or the aperture and the left and right edges of the film. To differentiate them, the first one is called the vertical field of view and the second the horizontal field of view (figure 4). Of course, there's no convention here again; each rendering API uses its own. OpenGL for example uses a vertical FOV while the RenderMan Interface uses a horizontal FOV.
As you may start to see by looking at figure 3, there is a direct relation between the focal length and the angle of view. If AB is the distance from the eye to the canvas (so far we have always assumed that this distance was equal to 1, but this won't always be the case, so we need to consider the generic case), BC is half the canvas size (either the width or the height of the canvas), and the angle \(\theta\) is half the angle of view, then because ABC is a right triangle, we can use basic trigonometry to find BC if we know both \(\theta\) and AB:
$$ \begin{array}{l} \tan(\theta) = \frac {BC}{AB} \\ BC = \tan(\theta) * AB\\ \text{Canvas Size } = 2 * \tan(\theta) * AB\\ \text{Canvas Size } = 2 * \tan(\theta) * \text{ Distance to Canvas }. \end{array} $$
This is an important relation because we now have a way of controlling the size of the objects in the camera's view by simply changing one parameter, the angle of view. As we just explained, changing the angle of view can be used to change the extent of a given scene that is imaged by a camera, an effect which is more commonly referred to in photography as zooming in or out.
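In code, this relation is a one-liner. The Python sketch below is only illustrative (the 90-degree example and distance of 1 are arbitrary choices):

import math

def canvas_size(angle_of_view_deg, distance_to_canvas):
    # Canvas Size = 2 * tan(theta) * Distance to Canvas, with theta = half the AOV.
    theta = math.radians(angle_of_view_deg) / 2
    return 2 * math.tan(theta) * distance_to_canvas

print(canvas_size(90, 1))  # 2.0: a 90 degree angle of view at distance 1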
Film Size Matters Too
Figure 6: a larger surface (in blue) captures a larger extent of the scene, than a smaller surface (in red). A relation exists between the size of the film and the camera angle of view. The smaller the surface, the smaller the angle of view.
Figure 7: if you use different film sizes but that your goal is to capture the exact same extent of a scene, you need to adjust the focal length (in this figure denoted by A and B).
You can see from figure 6 that how much of the scene we capture also depends on the film (or sensor) size. In photography, film size or image sensor size matters. A larger surface (in blue) captures a larger extent of the scene than a smaller surface (in red). Thus, a relation also exists between the size of the film and the camera angle of view: the smaller the surface, the smaller the angle of view (figure 6b).
Be careful: there is often confusion between film size and image quality. There is a relation between the two, of course. The motivation behind developing large formats, whether in film or photography, was mostly image quality: the larger the film, the more detail and the better the image quality. However, note that if you use films of different sizes but always want to capture the same extent of a scene, then you will need to adjust the focal length accordingly (as shown in figure 7). That is why a 35mm camera with a 50mm lens doesn't produce the same image as a large format camera with a 50mm lens (in which the film size is at least about three times larger than a 35mm film). The focal length in both cases is the same, but because the film size is different, the angular extent of the scene imaged by the large format camera will be bigger than that imaged by the 35mm camera. It is very important to always keep in mind that the size of the surface capturing the image (whether digital or film) also determines the angle of view (as well as the focal length).
The terms film back and film gate technically designate two slightly different things, but both relate to film size, which is why the terms are used interchangeably. The first term refers to the film holder, a device generally placed at the back of the camera to hold the film. The second term designates a rectangular opening placed in front of the film. By changing the size of the gate, we can change the area of the 35 mm film exposed to light. This effectively allows changing the film format without having to change the camera or the film. CinemaScope and Widescreen are examples of formats shot on 35mm 4-perf film with a film gate. Note that film gates are also used with digital film cameras. The film gate effectively defines the film aspect ratio.
The 3D application Maya groups all these parameters in a section called Film Back. When you change the Film Gate parameter which can be any predefined film format such as 35mm Academy (the most common format used in film) or any custom format, it will change the value of a parameter called Camera Aperture which defines the horizontal and vertical dimension (in inch or mm) of the film. Right under the Camera Aperture parameter, you can see the Film Aspect Ratio, which is the ratio between the "physical" width of the film and its height. See list of film formats for a comprehensive table of known formats.
At the end of this chapter, we will talk about the relation between the film aspect ratio and the image aspect ratio.
It is very important to remember that two parameters determine the angle of view: the focal length and the film size. The angle of view changes when you change either one of these two parameters: the focal length or the film size.
For a fixed film size, changing the focal length will change the angle of view. The longer the focal length, the narrower the angle of view.
For a fixed focal length, changing the film size will change the angle of view. The larger the film, the wider the angle of view.
If you wish to change the film size but keep the same angle of view, you will need to adjust the focal length accordingly.
Figure 8: 70 mm (left) and 24x35 film (right).
Note that three parameters are inter-connected, the angle of view, the focal length and the film size. With two parameters we can always infer the third one. If you know the focal length and the film size, you can compute the angle of view. If you know the angle of view and the film size you can compute the focal length and so on. We will provide the mathematical equations and the code to compute these values in the next chapter. Though at the end, note that what we want is the angle of view. If you don't want to bother with the code and the equations to compute the angle of view from the film size and the focal length, you don't need to do so; you can directly provide your program with a value for the angle of view instead. However, in this lesson, our goal is to simulate a real physical camera, thus our model will effectively take into account both parameters.
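As a preview of the next chapter, here is what that computation looks like as a short Python sketch (the 36 mm film width and 50 mm focal length are just an example, and the function name is ours):

import math

def angle_of_view_deg(film_aperture_mm, focal_length_mm):
    # AOV = 2 * atan((film aperture / 2) / focal length), from the triangle in figure 3.
    return math.degrees(2 * math.atan((film_aperture_mm / 2) / focal_length_mm))

print(angle_of_view_deg(36, 50))  # about 39.6 degrees: horizontal AOV of 36 mm film with a 50 mm lens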
The choice of a film format is generally a compromise between cost, workability of the camera (the larger the film, the bigger the camera) and how much definition you need. The most common film format (known as the 135 camera film format) used for still photography was (and still is) 36 mm (1.4 in) wide (this film format is better known as 24 by 36 mm; the exact horizontal size of the image is 36 mm). The next larger size of film for still cameras is the medium format film, which is larger than 35 mm (generally 6 by 7 cm), and then the large format, which refers to any imaging format of 4 by 5 inches or larger. Film formats used in filmmaking also come in a large variety of sizes. Do not assume though that because we now (mainly) use digital cameras, we should not be concerned by the size of the film anymore. Rather than the size of the film, it is the size of the sensor that we are going to be concerned about for digital cameras, and similarly to film, that size also defines the extent of the scene being captured. Not surprisingly, sensors found on high-end digital DSLR cameras (such as the Canon 1D or 5D) have the same size as the 135 film format: they are 36 mm wide and 24 mm high (figure 8).
Image Resolution and Frame Aspect Ratio
Figure 9: image sensor from a Leica camera. Its dimensions are 36 by 24 mm. Its resolution is 6000 by 4000 pixels.
Figure 10: some common image aspect ratios (the first two examples were common in the 1990s. Today most cameras or display systems support 2K or 4K image resolutions).
The size of a film (which is measured in inches or millimetres) is not to be confused with the number of pixels in a digital image. The size of the film has an effect on the angle of view, but the image resolution (as in the number of pixels in an image) doesn't. These two camera properties (how big the image sensor is and how many pixels fit on it) are independent of each other.
In digital cameras, film is replaced by a sensor. An image sensor is a device that captures light and converts it into an image. You can think of the sensor as the electronic equivalent of film. Image quality depends not only on the size of the sensor, but also on how many millions of pixels fit on it. It is important to understand that the film size is equivalent to the sensor size and that it plays the exact same role in defining the angle of view (figure 9). However, the number of pixels fitting on the sensor, which defines the image resolution, has no effect on the angle of view and is a concept purely specific to digital cameras. Pixel resolution (how many pixels fit on the sensor) only determines how good images look and nothing else.
The same concept applies to CG images. We can compute the same image with different image resolutions. These images will look the same (assuming a constant ratio of width to height), but those rendered at higher resolutions will have more detail than those rendered at lower resolutions. The resolution of the frame is expressed in terms of pixels. We will use the terms width resolution and height resolution to denote the number of pixels our digital image will have along the horizontal and vertical dimension respectively. The image itself can be seen as a sort of gate (both the image and the film gate define a rectangle), and for this reason it is referred to in Maya as the resolution gate. We will study, at the end of this chapter, what happens when the relative sizes of the resolution gate and the film gate don't match.
One particular value we can compute from the image resolution is the image aspect ratio also called in CG the device aspect ratio. Image aspect ratio is measured as:
$$\text{Image (or Device) Aspect Ratio} = { width \over height }$$
When the width resolution is greater than the height resolution, the image aspect ratio is greater than 1 (and lower than 1 in the opposite case). In the real world, this value is important as most films or display devices such as computer screens or televisions do have standard aspect ratios. The most common aspect ratios are:
4:3. It was the aspect ratio of old television systems and computer monitors until about 2003, and it is still often the default setting on digital cameras. While it may seem like an outdated aspect ratio for television screens and monitors, this is not true for film: the 35mm film format has an aspect ratio of 4:3 (the dimension of one frame is 0.980x0.735 inches).
5:3 and 1.85:1. These are two very common standard image ratios used in film.
16:9. It is the standard image ratio used by high-definition television, monitors and laptops today (with a resolution of 1920x1080).
The RenderMan Interface specifications set the default image resolution to 640 by 480 pixels which gives a 4:3 Image aspect ratio.
Canvas Size and Image Resolution: Mind the Aspect Ratio!
Digital images have a particularity that physical film doesn't have. The aspect ratio of the sensor, or the aspect ratio of what we called the canvas in the previous lesson (the 2D surface on which the image of a 3D scene is drawn), can be different from the aspect ratio of the digital image. You might think: "why would we ever want that anyway?". Indeed, generally this is not something we want, and we are going to show why. And yet it happens more often than not. Film frames are often scanned with a gate different from the gate they were shot with, and this situation also arises when working with anamorphic formats (we will explain what anamorphic formats are later in this chapter).
Figure 11: if the image aspect ratio is different than the film size or film gate aspect ratio, the final image will be stretched in either x or y.
Before we consider the case of anamorphic formats, let's first consider what happens when the canvas aspect ratio is different from the image or device aspect ratio. Let's take a simple example in which what we called the canvas in the previous lesson is a square, and in which the image on the canvas is that of a circle. We will also assume that the coordinates of the lower-left and upper-right corners of the canvas are [-1,-1] and [1,1] respectively. Recall that the process for converting pixel coordinates from screen space to raster space consists of first converting the pixel coordinates from screen space to NDC space, and then from NDC space to raster space. In this process the NDC space is the space in which the canvas is remapped to a unit square. From there, this unit square is remapped to the final raster image space. Remapping our canvas from the range [-1,1] to the range [0,1] in x and y is simple enough. Note that both the canvas and the NDC "screen" are square (their aspect ratio is 1:1). Because the "image aspect ratio" is preserved in the conversion, the image is not stretched in either x or y (it's only squeezed down within a smaller "surface"). In other words, visually it simply means that if we were to look at the image in NDC space, our circle would still look like a circle. Let's imagine now that the final image resolution in pixels is 640x480. What happens now? The image, which originally had a 1:1 aspect ratio in screen space, is now remapped to a raster image with a 4:3 ratio. Our circle will be stretched along the x-axis, looking more like an oval than a circle (as depicted in figure 11). Not preserving the canvas aspect ratio and the raster image aspect ratio leads to stretching the image in either x or y. Note that it doesn't matter if the NDC space aspect ratio is different from the screen and raster image aspect ratios. You can very well remap a rectangle to a square and then the square back to a rectangle. All that matters is that both rectangles have the same aspect ratio (obviously, stretching is not something we want, unless the effect is desired, as in the case of anamorphic formats).
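The following Python sketch of the screen-to-raster conversion makes the stretching visible: with a square canvas and a 640x480 image, a unit step in screen-space x maps to 320 pixels while a unit step in y maps to only 240 pixels. The function and the default canvas size are illustrative choices, not code from this lesson.

def screen_to_raster(x, y, img_w, img_h, canvas_w=2.0, canvas_h=2.0):
    # Screen space -> NDC [0,1]^2 -> raster space (y grows downward).
    ndc_x = (x + canvas_w / 2) / canvas_w
    ndc_y = (y + canvas_h / 2) / canvas_h
    return ndc_x * img_w, (1 - ndc_y) * img_h

print(screen_to_raster(1.0, 0.0, 640, 480))  # (640.0, 240.0)
print(screen_to_raster(0.0, 1.0, 640, 480))  # (320.0, 0.0)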
You may think again, "why would that ever happen anyway?". Generally it doesn't happen indeed, because as we will see in the next chapter, the canvas aspect ratio is often directly computed from the image aspect ratio. Thus if your image resolution is 640x480, we will set the canvas aspect ratio to 4:3.
Figure 12: when the resolution gate and the film gate are different (top), you need to choose between two possible options. You can either fit the resolution gate within the film gate (middle) or fit the film gate within the resolution gate (bottom). Note that the renders look different.
However, you may as well compute the canvas aspect ratio from the film size (called Film Aperture in Maya) rather than the image size, and render the image with a resolution whose aspect ratio is different from that of the canvas. For example, the dimensions of the 35mm film format (also known as academy) are 22mm in width and 16mm in height (these numbers are generally given in inches) and the ratio of this format is 1.375. However, a standard 2K scan of a full 35 mm film frame is 2048x1556 pixels, which gives a device aspect ratio of 1.31. Thus, in this particular case, the canvas and the device aspect ratios are not the same! What happens then? A package like Maya offers the user different strategies to solve this problem. No matter what, Maya will force your canvas ratio at render time to be the same as your device aspect ratio; however, this can be done in several ways:
You can either force the resolution gate within the film gate. This is known as the Fill mode in Maya.
Or you can force the film gate within the resolution gate. This is known as the Overscan mode in Maya.
Both modes are illustrated in figure 12. Note that if the resolution gate and the film gate are the same, switching between those modes has no effect. When they are different, objects in the overscan mode appears smaller than in the fill mode. We will implement this feature in our program (see the last two chapters of this lesson for more detail).
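One plausible way of implementing these two strategies is sketched below in Python (the lesson's own implementation appears in the last chapters; the function name and parameters here are illustrative). The idea is to scale the canvas so that its final aspect ratio matches the device aspect ratio, either by cropping the film gate (Fill) or by extending it (Overscan).

def fit_canvas(film_w_inch, film_h_inch, img_w_px, img_h_px, focal_mm, fit="overscan"):
    inch_to_mm = 25.4
    film_ratio = film_w_inch / film_h_inch
    device_ratio = img_w_px / img_h_px
    # Canvas half-extents at distance 1 from the aperture.
    right = (film_w_inch * inch_to_mm / 2) / focal_mm
    top = (film_h_inch * inch_to_mm / 2) / focal_mm
    xscale = yscale = 1.0
    if fit == "fill":        # resolution gate fitted inside the film gate (crops the film)
        if film_ratio > device_ratio:
            xscale = device_ratio / film_ratio
        else:
            yscale = film_ratio / device_ratio
    else:                    # "overscan": film gate fitted inside the resolution gate
        if film_ratio > device_ratio:
            yscale = film_ratio / device_ratio
        else:
            xscale = device_ratio / film_ratio
    return right * xscale, top * yscale

# 35mm Full Aperture (0.980 x 0.735 in) rendered at 2048x1556: overscan sees a bit more vertically.
print(fit_canvas(0.980, 0.735, 2048, 1556, focal_mm=35, fit="overscan"))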
What do we do in film production? The Kodak standard for scanning a frame from a 35mm film in 2K is 2048x1556. The resulting 1.31 aspect ratio is slightly lower than the actual film aspect ratio of a full aperture 35mm film, which is 1.33 (the dimension of the frame is 0.980x0.735 inches). What this means is that we actually scan slightly more of the film than what's strictly necessary in height (as shown in the adjacent image). Thus, if you set your camera aperture to "35mm Full Aperture" but render your CG images at resolution 2048x1556 to match the resolution of your 2K scans, the resolution and film aspect ratios won't match. In this particular case, because the actual film gate during the scanning process fits within the resolution gate, you need to select the "Overscan" mode to render your CG images. This essentially means that you will render slightly more than you actually need at the top and at the bottom of the frame. Once your CG images are rendered, you will then be able to composite them with your 2K scan. But you will need to crop your composited images anyway to 2048x1536 to get back to a 1.33 aspect ratio if required (to match the 35mm Full Aperture ratio). Another solution is to scan your frames at exactly 2048x1536 (1.33 aspect ratio), which is another common choice. That way both the film gate and the resolution gate match.
The only exception to keeping the canvas and the image aspect ratio the same is when you work with anamorphic formats. The concept is simple. Traditional 35mm film cameras have a 1.375:1 gate ratio. In order to shoot with a widescreen ratio, you need to put a gate in front of the film (as shown in the adjacent image). What this means though is that part of the film is wasted. However, you can use a special lens called an anamorphic lens, which compresses the image horizontally so that it fits within as much of the 1.375:1 gate ratio as possible. When the film is projected, another lens is used to stretch images back to their original proportions. The main benefit of shooting anamorphic is the increase in resolution (since the image uses a larger portion of the film). Typically anamorphic lenses squeeze the image by a factor of two. Star Wars (1977) for instance was filmed in a 2.35:1 ratio using an anamorphic camera lens. If you were to composite CG renders into Star Wars footage, you would need to set the resolution gate aspect ratio to ~4:3 (the lens squeezes the image by a factor of 2; if the image ratio is 2.35:1 then the film ratio is closer to 1.175), and the "film" aspect ratio (the canvas aspect ratio) to 2.35:1. In CG this is typically done by changing what we call the pixel aspect ratio. In Maya, there is also a parameter in the camera controls called Lens Squeeze Ratio, which has the same effect. This is an advanced topic that we won't be discussing any further in this lesson.
Conclusion & Summary: Everything You Need to Know about Cameras
What is really important to remember from the last chapter is that, in the end, all that matters is the angle of view of the camera. You can simply set its value directly to get the visual result you want.
"I want to combine real film footage with cg elements. The real footage is shot and loaded into Maya as image plane. Now I want set up the camera (manually) and create some rough 3D surroundings. I noted down a couple of camera parameters during the shooting and tried to feed them into Maya, but it doesn't work out. If I enter the focal length the resulting field of view is way too big. I'm not too familiar with the relations between focal length, film gate, field of view etc. How do you tune a camera in maya to match a real camera? How should I tune a camera to match these settings?"
However, if you wish to build a camera model that simulates physical cameras (which is the goal of the person quoted above), you will need to compute the angle of view by taking into account the focal length and the film gate size. Many applications such as Maya expose these controls (the image below is a screenshot of Maya's UI showing the Render Settings and the Camera attributes). Hopefully, you now understand exactly why they are there, what they do and how to set their values to match the result produced by a real camera. If your goal is to combine CG images with live action footage, you will need to know the following (a short sketch follows this list):
The film gate size. This information is generally given in inches or mm. This information is always available in camera specifications.
The focal length. Keep in mind that for a given focal length, the angle of view depends on film size. In other words, if you set the focal length to a given value but change the film aperture, the object size will change in the camera's view.
However, remember that the resolution gate ratio may be different from the film gate ratio, which you never want unless you work with anamorphic formats. If the resolution gate ratio of your scan is smaller than the film gate ratio, you will need to set the Fit Resolution Gate parameter to Overscan, as with the example of 2K scans of 35mm full aperture film, whose ratio (1.316:1) is smaller than the actual frame ratio (1.33:1). You need to pay a great deal of attention to this detail if you want your CG renders to match the footage.
Finally, the only time when the "film gate ratio" can be different from the "resolution gate ratio" is when you work with anamorphic formats (which is, though, quite rare).
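As announced above, here is a minimal sketch of the standard pinhole relation between film gate width, focal length and horizontal angle of view (the parameter names and the numerical example are ours):

```python
import math

def horizontal_angle_of_view(film_gate_width_mm, focal_length_mm):
    """Horizontal angle of view (in degrees) from the pinhole camera relation
    AOV = 2 * atan((gate width / 2) / focal length)."""
    return 2.0 * math.degrees(math.atan((film_gate_width_mm / 2.0) / focal_length_mm))

# 35mm full aperture gate: 0.980 inch = 24.89 mm wide, with a 35 mm lens
print(horizontal_angle_of_view(24.89, 35.0))  # about 39 degrees
```

If you keep the focal length but switch to a smaller gate, the angle of view shrinks, which is exactly why the film gate size matters as much as the focal length.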
We are now ready to develop a virtual camera model capable of producing images that match the output of real-world pinhole cameras. In the next chapter, we will show that the angle of view is the only thing we need if we use ray tracing. If we use the rasterisation algorithm though, we will need to compute both the angle of view and the canvas size. We will explain why we need these values in the next chapter and how we can compute them in chapter four.
| CommonCrawl
\begin{definition}[Definition:Set Theory]
'''Set Theory''' is the branch of mathematics which studies sets.
There are several "versions" of set theory, all of which share the same basic ideas but whose foundations are completely different.
\end{definition} | ProofWiki |
Regional Environmental Change
Grain, meat and vegetables to feed Paris: where did and do they come from? Localising Paris food supply areas from the eighteenth to the twenty-first century
G. Billen
S. Barles
P. Chatzimpiros
J. Garnier
First Online: 07 July 2011
The food supply to a large metropolis such as Paris involves huge fluxes of goods, which considerably impact the surrounding rural territories. Here, we present an attempt to localise Paris food supply areas, over a period of two centuries (1786, 1886, 2006), based on the analysis of data from transportation and production statistics for cereals, animal products, and fruits and vegetables, all three categories being expressed in terms of their nitrogen (i.e. protein) content. The results show contrasting trends for the three types of agricultural products. As for cereals, the Paris supply area remained for the most part restricted to the central area of the Paris basin, a region which gradually became specialised in intensive cereal production. Conversely, as animal farming had been progressively excluded from this area, regions located west and north of Paris (Brittany, Normandy, Nord-Pas-de-Calais) gradually dominated the supply of animal products to the metropolis. In addition, imported feed from South America today contributes as much as one-third of the total ration of feed in the livestock raised in these regions. For fruits and vegetables, about one-half of the Paris supply currently comes from long-distance imports, the other half coming from areas less than 200 km from Paris. As a whole, the Paris food supply area, although it has obviously enlarged in recent periods, is still anchored to an unexpected extent (about 50%) in its traditional nearby hinterland roughly coinciding with the Seine watershed, and in the regions specialised in animal farming located west and north. On the other hand, the agricultural system of the main food supply areas has considerably opened to global markets.
Keywords: Urban footprint · Nitrogen · Paris metropolitan area · Food · Supply areas · Hinterland
Meeting the food demand of a large city is obviously one of the primary concerns of urban management and is also a major driver of the influence a city exerts on surrounding rural territories, by perturbing the cycle of major nutrients, including carbon, nitrogen and phosphorus. The food supply of some cities is ensured by very distant areas, as was the case of Rome in Antiquity (Morley 1996), but also of London in more recent times (Peet 1969). In this respect, Paris offers the example of a city that depended longer on a rather close hinterland for its food supply (Billen et al. 2009), mostly located within the geologically defined sedimentary Paris basin. The Paris metropolitan area has increased its population by a factor of 20 since the end of the eighteenth century. The resulting growth in food demand must therefore have exerted a considerable influence on the surrounding supply areas. We have already shown that the demographical development of the city was paralleled by profound mutations in the agricultural sector of the Paris basin, which first made it able to follow the urban food demand and then, more recently, to increase its potential for commercial export far beyond the needs of the local urban market (Billen et al. 2009). We did not assess, however, the effective location of the areas participating in the Paris food supply over this long period. The present work is an attempt to do so for three time points one century apart (1786, 1896, 2006) selected for data availability. The results of this assessment provide insight into the long-term changes in the relationships between a large city and its rural hinterland in the context of rapid development of transport infrastructures and economic globalisation.
Defining the Paris agglomeration
The precise definition of an urban agglomeration is somewhat arbitrary, because it necessarily mixes operational considerations related to the existence of statistical data collected according to administrative units with conceptual considerations on the distinction between urban and rural territories. At the end of the eighteenth century, most of the urban population lived intra muros so it is easier to give an estimate than in present times. Official censuses only began in 1801, providing a figure of 550,000 for Paris (3,370 ha) and less than 50,000 for the rest of the Seine département in which Paris was enclosed. The population before the French Revolution (1789) might have been slightly higher; Fierro (1996) gives the figure of 600,000 inhabitants for Paris in 1780, and Favier (1997) suggests the figure of 700,000 inhabitants for Greater Paris, including Versailles. We chose the last figure for this study, as it is more in line with what we would presently define as the urban agglomeration.
One century later, the Paris agglomeration was quite different. Paris itself covered a wider area (7,800 ha) and had 2,536,834 inhabitants. Its suburbs had developed tremendously. Dupeux (1981) has calculated the past evolution of Paris and its suburbs. According to his assumptions, a municipality belongs to the Paris metropolitan area when it has at least 3,000 inhabitants and is contiguous to Paris or to another contiguous municipality. Using this definition, the population of the metropolitan area reached 2,984,097 inhabitants in 1891 and 3,500,617 in 1901, very close to the population of the Seine département. Most municipalities that were included in the metropolitan area according to Dupeux were indeed within the Seine département. We therefore matched the population of Paris and its suburbs in 1896 to that of the Seine département (3,340,514 inhabitants).
Today, the official limits of the Paris metropolitan area as established by INSEE (2007) are based on a building continuity criterion and include all towns of more than 2000 inhabitants with at least half their population living in the urbanised area. So defined, present (2006) metropolitan Paris totals 10,197,678 inhabitants (INSEE 2007). This accounts for 88% of the total population of the Île-de-France region, comprising the four administrative départements previously forming the Seine département and the more rural départements of Val-d'Oise, Yvelines, Essonne and Seine-et-Marne. As most of the data used in this paper concern départements and the region, we assume that, in 2006, the population of the metropolitan area equalled the population of the Île-de-France region (11,532,000 inhabitants in 2006).
The Parisian diet
Calculating Paris's food requirements requires knowledge of the per capita diet and its modifications over the two centuries covered by this study. Several studies have investigated this subject (Barles 2007; Billen et al. 2007, 2009) and are summarised in Table 1. To compare their nutritional value, we express all food items in terms of their nitrogen (i.e. protein) content. Other choices could have been made, e.g. by expressing all figures in terms of calories, but we preferred nitrogen both because of the central role of proteins in human nutrition and because of this study's interest in the nitrogen issue (Sutton et al. 2011). The conversion factors used (1.8%N for cereals, 3.4%N for meat, 0.5% for milk and 0.1–0.4% for most fruits and vegetables) were presented in Billen et al. (2009). We recognise that grouping all food items into three categories (grain, animal products and fruits and vegetables) is a simplification that masks a much larger variability of properties, nitrogen content, economical values and origins. This was required, however, to provide an overall view of the Paris food supply.
Table 1 Per capita food consumption of Paris inhabitants, in terms of nitrogen content (gN/inhab/day). Row headings include bread and other cereal products; milk, cheese and eggs; and the total (gN/inhab/day).
Over two centuries, these data reveal a major decreasing trend in the share of cereals in the total diet, from 57 to 17%, replaced by animal products, which increased from 39 to 69%, and fruits and vegetables, increasing from 4 to 14%.
Tracing the origin of food imports
The extensive study by Abad (2002) on the Paris food supply at the end of the eighteenth century was used as the baseline in the present study. In addition to the data from the city toll statistics (which do not include information on their source), Abad provides a detailed account of the province of origin of most food items consumed in Paris, based on a careful historical enquiry.
For 1896, we used the Rapport annuel de l'année 1896 sur les services municipaux de l'approvisionnement de Paris edited by the Supply Department of the Seine département (Bureau de l'Approvisionnement 1897) providing the amounts of grain delivered to Paris by waterways, road and the six main railway companies, namely the State Company and the Compagnies du Nord, de l'Est, de l'Ouest, d'Orléans, and Paris-Lyon-Méditerranée. Knowing the départements served by each of these companies (Jouanne 1859), the amount originating from each département was calculated pro rata with their respective potential for export, as determined from agricultural statistics (Ministère du Commerce et de l'Industrie 1897). For meat, the Statistique agricole annuelle (1897) was used: this yearly publication mentioned the origin of the animals imported into Paris (providing beef, mutton and pork).
For 2006, the SitraM database (Système d'Information sur les Transports de Marchandises) established by the French Ministry of Transport (MEDDEM, www.statistiques.equipement.gouv.fr/) was used, providing a detailed matrix of transportation fluxes (by road, rail, air and water) between all French départements and foreign countries, according to a commodity code distinguishing 176 items, including more than 50 agricultural goods. Let us call this matrix {I(d, o)} for a given commodity and a given year, with d referring to destination and o to origin. Internal transport fluxes I(i, i) are not considered but are replaced by the domestic production of the commodity, extracted from national agricultural statistics (Agreste 2006). Let this modified matrix be called {I*(d, o)}. We assume that the distribution by origin of internal consumption is the same as that of imports plus internal production, even if a significant proportion of imports is re-exported. This 'perfect mixing' assumption is obviously a rough but unavoidable idealisation, which implies that any 'local preference' in consumption is ignored. Using a first-order procedure, the distribution of origins can be calculated as:
$$ \left\{ r_{1}(a,i) \right\} = \left\{ \frac{I^{*}(a,i)}{\sum_{j=\text{all}\;o} I^{*}(a,j)} \right\} \quad \text{with}\; 0 \le r_{1} \le 1 \;\text{and}\; \sum\nolimits_{i} r_{1}(a,i) = 1. $$
In a second-order step, because the SitraM database refers to origin defined as the place of last loading, possible previous origins of the commodity should be taken into account (Fig. 1). For the secondary origins for foreign countries, not covered in the SitraM database, FAO trade statistics were used. Here again, perfect mixing is assumed, and the flux from any origin is given by the contribution of this origin to the total imported fluxes (Fig. 1):
$$ \left\{ r_{2}(a,i) \right\} = \left\{ \frac{\sum_{k=\text{all}\;o} I^{*}(a,k)\, r_{1}(k,i)}{\sum_{j=\text{all}\;o} I^{*}(a,j)} \right\}. $$
Diagram showing the logic behind the calculation of the distribution of the origins of commodities from the transport flux matrix between supply areas (French départements and foreign countries) represented here as blue ellipses. In the example, territory 3, since it does not directly export to territory 0, is ignored in a first-order approach, while the contribution of origin 1 and 3 are overestimated. At the second-order approach, the contribution of territory 3 in the fluxes I(0,1) and I(0,2) is considered
In practice, we used the second-order procedure only for those origins contributing more than 5% to the first order.
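The two steps can be condensed into a short numerical sketch. The flux matrix below is a toy example, not actual SitraM data, and the variable names are ours; it simply shows how the first-order shares r1 and the second-order shares r2 follow from the matrix {I*(d, o)} under the perfect-mixing assumption.

```python
import numpy as np

# Toy I* matrix for 3 supply areas: row d = destination, column o = origin of
# last loading; the diagonal holds each area's own (domestic) production.
I_star = np.array([
    [10.0,  6.0, 0.0],   # area 0: produces 10, receives 6 from area 1
    [ 2.0, 20.0, 5.0],   # area 1: produces 20, receives 2 from 0 and 5 from 2
    [ 0.0,  0.0, 8.0],   # area 2: produces 8, imports nothing
])

row_totals = I_star.sum(axis=1, keepdims=True)

# First order: r1(d, o) = I*(d, o) / sum_j I*(d, j)
r1 = I_star / row_totals

# Second order: each first-order contribution is redistributed according to
# the origins of the supplying area itself (perfect mixing).
r2 = (I_star @ r1) / row_totals

print(r1[0].round(3))  # [0.625 0.375 0.   ]  area 2 invisible at first order
print(r2[0].round(3))  # [0.418 0.512 0.069] area 2 now appears through area 1
```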
The Paris food supply areas in 1786
At the end of the eighteenth century, Paris was already a huge city of about 700,000 inhabitants. Surprisingly, the results reported by Abad (2002), when converted in terms of N content, indicate that, in spite of a food market extending over the entire Kingdom of France and beyond, Paris kept most of its 'foodprint' within the limits of the Seine watershed, i.e. within a radius of about 150 km (Billen et al. 2009) (Fig. 2a–c). The grain supply mostly came from the Île-de-France, Champagne and Brie regions, with an estimated weighted distance of 110 km. Meat originated from more distant regions, including the Normandy and Marche-Limousin provinces, where animals were often fattened and walked to the capital. The weighted mean supply distance was approximately 255 km. Fruits and vegetables were for the most part brought from closer regions, with an estimated supply distance of 87 km.
Contribution of the French provinces to the Paris food supply at the end of the eighteenth century, according to the figures assembled by Abad (2002) a for bread and cereals, b meat and c fruits and vegetables, all in terms of N content. The distribution of supply distances is also indicated
As already discussed by Billen et al. (2009), these results are consistent with the estimated potential for exportation of the agrarian systems of the end of the eighteenth century. With a mean value of 50 kgN/km²/year for this potential, the above-cited 150 km radius would be enough to feed a city of 700,000 inhabitants consuming on average 5 kgN/capita/year.
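The order of magnitude of this statement can be checked directly from the figures quoted above:

$$ \pi \times (150\ \mathrm{km})^{2} \approx 7.1 \times 10^{4}\ \mathrm{km^{2}}, \qquad 7.1 \times 10^{4}\ \mathrm{km^{2}} \times 50\ \mathrm{kgN\,km^{-2}\,yr^{-1}} \approx 3.5 \times 10^{6}\ \mathrm{kgN\,yr^{-1}}, $$

$$ \frac{3.5 \times 10^{6}\ \mathrm{kgN\,yr^{-1}}}{5\ \mathrm{kgN\,capita^{-1}\,yr^{-1}}} \approx 7 \times 10^{5}\ \text{inhabitants}. $$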
As discussed in earlier reports (Barles 2007; Billen et al. 2009; Kim and Barles 2011), the same territory also provided most of Paris's energy supply, as wood (mainly for heating, cooking and baking) and forage (mainly for feeding horses). Wood was for the most part floated from Morvan forest areas and accounted for about 1.5 kgN/capita/year. Hay came largely from the alluvial Seine valley upstream from Paris and, together with other forage, totalled about 1.7 kgN/capita/year.
The nineteenth century was marked by a fivefold increase in the Paris population, which reached 3,340,000 by 1896. Two other major changes occurred, with opposite potential effects on the city's food supply. First, transport infrastructures developed considerably, making commodity supply from more distant regions possible. The course of the Seine and its main tributaries (the Marne and the Oise) were canalised and regulated for navigation in all seasons; new channels were dug linking the Seine watershed to the Scheldt, the Rhine and the Rhone (Mouchel et al. 1998). Railway development was even more spectacular: begun in the 1840s, the networking of the entire country was completed by the end of the century, according to a radial scheme centred on Paris (Jouanne 1859). For grain, available transport statistics indicate that railways delivered 86% of the total supply to the capital, while waterways and roads contributed 12 and 2%, respectively. Cattle transportation by railway also became the norm. On the other hand, agricultural productivity increased considerably, particularly in the Paris basin, owing to the progressive replacement of the triennial fallow with legume fodder crops and the increase in livestock densities, providing more abundant manure resources (Mazoyer and Roudart 1998; Billen et al. 2009). The potential for commercial grain export from rural areas, as calculated on a département basis from agricultural statistics (production of wheat and rye minus estimated consumption by the département's population), is shown in Fig. 3. Values close to or higher than 500 kgN/km²/year were observed in the regions surrounding Paris. With such agricultural productivity, the 150 km radius mentioned above would suffice to feed 5,000,000 urban inhabitants with the diet of the time. Grain deficits were logically observed in the départements with a large city (Paris, Lyon, Bordeaux, Marseille, Lille), as well as in Normandy, Brittany and the southernmost départements.
Potential for grain export (excess wheat and rye production over consumption by local population, expressed in kgN per km² territory annually) calculated by départements from 1892 agricultural statistics (Ministère du Commerce et de l'Industrie 1896). The chequered départements are deficit areas
Combining transport statistics (Bureau de l'Approvisionnement 1897) with potential export by département allowed us to calculate the likely distribution of Paris grain supply by supply area (Fig. 4). The result shows that although the supply area had extended since the end of the eighteenth century, the Paris basin still met the bulk of the city's grain demand at the end of the nineteenth century. The weighted mean transportation distance was 177 km. As far as meat and milk products are concerned, the situation was quite similar: only a limited extension of the supply areas occurred compared to the end of the eighteenth century, the supply distance increasing from 255 to 325 km. Regarding fruits and vegetables, the official Rapport sur l'Approvisionnement de Paris (1896) indicates that 82% of the quantities transported to Les Halles (Paris's central food market) came from the Île-de-France region either by cart or by a regional railway line. The remaining 18% of the supply came from other French départements, either by train or fluvial transport. The map in Fig. 4c was drawn by combining agricultural production statistics and transport data. The average supply distance was about 97 km.
Contribution of the French départements to Paris supply of a grain, b meat and milk products and c fruits and vegetables at the end of the nineteenth century. The distribution of supply distances is also indicated
During the twentieth century, the Paris metropolitan area more than tripled, reaching 11,532,000 inhabitants by 2006. At the same time, agricultural productivity underwent an unprecedented increase owing to the generalisation of synthetic nitrogen fertiliser use. Because manure and atmospheric nitrogen-fixing crops ceased to be the sole sources of cropland fertilisation, livestock farming and crop farming were completely separated, resulting in a strong geographical specialisation: livestock left the central Paris basin, while peripheral regions such as Normandy, Brittany to the west and Flanders to the north specialised in animal farming, importing a significant part of their feed. Figure 5 illustrates the results of these changes. Cereal production in the central Paris basin was over 4,000 kgN/km²/year. Productivity levels over 1,000 kgN/km²/year were also found for meat and milk or vegetable production in the corresponding specialised areas. Obviously, Paris could not remain the major market absorbing the agricultural production of these regions.
Territorial productivity (i.e. expressed in kgN per km² of total territory annually) of cereals, animal products, and fruits and vegetables in the French départements in 2006. (Agreste 2006)
Simultaneously, transport infrastructures also developed tremendously. Road transportation largely supplanted the railway in national transport; the former supplied 75% of total final food delivery to Paris, whereas the latter provided only 18%, with waterways transporting 7%. Long-distance international marine traffic using container cargos also developed, now connecting Paris at very low cost to most of the world within only a few days (Williams 2009).
The analysis of transport statistics shows that the Paris food supply areas at the beginning of the twenty-first century had obviously extended in space but had remained closer than expected to its traditional hinterland (Fig. 6a–c). For cereals, the contribution of the Paris basin remained as high as 70%, in spite of significant long-distance imports, mainly from Italy. The weighted mean supply distance was 492 km. For animal products, the French regions specialised in livestock farming: the areas west and north of Paris together contributed 50% of the supply; longer-distance supply, mostly from Europe, was significant. The mean supply distance was 660 km, almost twice the distance of one century before. For fruit and vegetables also, more than 50% of the Paris supply in terms of N content came from the Paris basin; the other half originated from much more dispersed and remote areas, extending to the African continent. The mean supply distance was 790 km, in contrast to less than 100 km in 1896.
Contribution of the French departments and foreign countries to the supply of Paris in cereals (a), animal products (b) and fruits/vegetables (c) in 2006. The distribution of supply distances is also indicated
In total, summing up all contributions to the Parisian diet (excluding fish), the mean weighted supply distance was about 660 km (Fig. 7). The départements within the Seine watershed (Paris's traditional food supply hinterland) still contributed 54% of the total proteins and 63% of the plant proteins consumed by the city, while the adjoining territory formed by Brittany, Normandy and Nord-Pas-de-Calais contributed most of the animal protein supply. These two territories together provided 70% of the Paris food demand, while the remaining 30% came equally from other French départements and foreign countries.
Contribution of the French départements and foreign countries to the total protein supply of Paris in 2006. The distribution of weighted supply distances is also indicated
Long-term evolution of Paris food supply areas
The major findings of this study are summarised in Table 2. They show that over the last two centuries, in spite of a considerable increase in the distances from which food is brought to the metropolis, the traditional food supply area of Paris, approximately a 150 km radius around the city, which roughly coincides with the Seine watershed, remained more significant in feeding the city than the changes in agriculture and transport infrastructures would have led one to expect. Of the total supply of cereals, 75% continues to come from the intensive agricultural areas of the central Paris basin, which, however, exports most of its huge production to foreign markets. In spite of a large diversity of origins throughout the world, most particularly Africa, approximately 50% of fruits and vegetables still come from areas located less than 150 km from Paris. However, as animal husbandry has been gradually banned from the central Paris basin, now mostly devoted to cereal production, the provision of animal products to Paris mostly originates from the contiguous West and North regions of France, which have specialised in intensive livestock farming.
Table 2 Metropolitan Paris population, consumption of cereals, animal products and vegetables, and corresponding supply distances in 1786, 1896 and 2006. Row headings include: metropolitan Paris population (inhabitants); total food consumption including fish (ktonN/year); weighted mean import distance (km); Paris agglomeration consumption (ktonN/year); and animal products excluding fish.
The biogeochemical functioning of the present main food supply area for Paris
The regional specialisation of agriculture in either crop or animal farming has resulted in the separation of Paris's main food supply into two territories with distinct agricultural orientations (Fig. 8). The first one, roughly corresponding to the Seine watershed, is characterised by the dominance of cereal crops, grown with intensive application of synthetic fertilisers; it provides most plant products consumed by the city and still exports more than 80% of its production. However, it supplies less than 20% of the population's requirements of animal products. By contrast, the second territory, corresponding to Brittany, Normandy and Nord-Pas-de-Calais, which supplies most of the meat and milk requirements of the Paris metropolitan area, is characterised by a very high livestock density, and its nitrogen cycle is dominated by the huge fluxes associated with livestock feeding and excretion. A large part of animal feed is made of imported soybean and soybean oilcake meals from Brazil and Argentina. The analysis of the SitraM database estimates these imports at 300 ktonN/year. The calculation based on the current livestock feed formula (Chatzimpiros and Barles 2010) yields a lower figure (160 ktonN/year), which remains quite high in terms of the ratio of imported to locally produced livestock protein, however. Taking into account the soybean yield in South America (about 2.5 ton/ha/year, i.e. 88 kgN/ha/year FAOstat 2006), and after allocating the land requirements to grow soybean between the meal and the oil derivatives (Chatzimpiros and Barles 2010), these imports correspond to an agricultural area of about 17,000–30,000 km², a territory comparable in size to the one considered here.
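As a rough order-of-magnitude check of the land requirement quoted above (before the allocation between meal and oil derivatives):

$$ \frac{1.6 \times 10^{8}\ \mathrm{kgN\,yr^{-1}}}{88\ \mathrm{kgN\,ha^{-1}\,yr^{-1}}} \approx 1.8 \times 10^{6}\ \mathrm{ha} \approx 18{,}000\ \mathrm{km^{2}}, \qquad \frac{3.0 \times 10^{8}}{88} \approx 3.4 \times 10^{6}\ \mathrm{ha} \approx 34{,}000\ \mathrm{km^{2}}, $$

which, once part of the area is allocated to the oil fraction, is consistent with the 17,000–30,000 km² range given above.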
The nitrogen cycle in Paris food supply areas
Our paradoxical conclusion is therefore that the enlargement of Paris foodprint is less related to the extension of the direct supply areas than to the opening and specialisation of French agriculture. Except for a number of exotic products contributing only marginally to the total food intake, and as far as this remained possible in view of the specialisation of the agricultural system itself, Paris food consumption seems to have remained anchored to a large extent in its traditional hinterland, even though the corresponding rural territory has turned away from its privileged relationship with the city and resolutely entered a globalisation dynamic. French agriculture has opened to the world, while Paris remains supplied to an unexpected extent by its surrounding areas.
We currently lack precise data to compare our analysis of the Paris case study with that of other megalopolises in the world. The fact that Paris, like Vienna, Madrid and Moscow, is a continental city with, until recently, limited access to the sea, might have been a factor for the observation that the city's food supply is solidly anchored in the surrounding farmlands. By comparison, London, Barcelona, Amsterdam and all large US east coast cities might well more easily rely on distant overseas supply areas. It is also clear that the achievement of food security has long been a political priority for French authorities, whatever the successive political regimes. Examples of political actions taken to develop productive national agriculture in France are facilitation of wide access to land ownership by sale of new National Goods after the 1789 French Revolution, strict protectionist policies against cereal imports throughout the nineteenth century, and creation of a powerful Ministry of Agriculture in 1881 during the 3rd Republic (Hervieu 2008), which still exists today and remains separate from the Ministry in charge of environmental issues. All these factors might explain the perhaps exceptionally close food supply of Paris. Given the role played by food in shaping our lives and cities, as convincingly demonstrated by architect Carolyn Steel (2008) in her book Hungry City, this may be a considerable asset for Paris in terms of environmental sustainability.
As we have shown, the main causes of the enlargement of the environmental imprint of Paris food consumption lie in the evolution of the agricultural sector itself, with (1) the regional specialisation to grain monoculture in the centre of the Paris basin and livestock farming in the northwest of France and (2) the massive shift of livestock farming to imported feed. The growing share of animal products in the human diet, which increased from 39 to 69% of the total protein intake, has been a strong incentive for this evolution.
This study was conducted within the Interdisciplinary Research Programme on City and Environment (PIRVE) directed by the French Ministry of Ecology, Sustainable Development, Transport and Housing (MEDDATT) and the CNRS, the ANR 'Ville Durable' Programme CONFLUENT and the 'Paris 2030' programme funded by the Ville de Paris. It is part of the activities coordinated by the FIRE (Fédération Ile-de-France de Recherches sur l'Environnement) and the PIREN-Seine Programme.
Abad R (2002) Le Grand Marché: l'approvisionnement de Paris sous l'Ancien Régime. Fayard, Paris
Agreste (2006) http://www.agreste.agriculture.gouv.fr/
Barles S (2007) Feeding the city: food consumption and flow of nitrogen, Paris, 1801–1914. Sci Total Environ 275:48–58
Billen G, Garnier J, Nemery J, Sebilo M, Sferratore A, Benoit P, Barles S, Benoit M (2007) A long term view of nutrient transfers through the Seine river continuum. Sci Total Environ 275:80–97
Billen G, Barles S, Garnier J, Rouillard J, Benoit P (2009) The food-print of Paris: long term reconstruction of the nitrogen flows imported to the city from its rural Hinterland. Reg Environ Change 9:13–24
Bureau de l'approvisionnement (Préfecture du Département de la Seine) (1897) Rapport annuel de l'année 1896 sur les services municipaux de l'approvisionnement de Paris. Préfecture du département de la Seine, Paris
Chatzimpiros P, Barles S (2010) Nitrogen, land and water inputs in changing cattle farming systems. A historical comparison for France, 19th–21st centuries. Sci Total Environ 408:4644–4653
Dupeux G (1981) Atlas historique de l'urbanisation en France (1811–1975). Éditions du CNRS, Paris
FAOstat (2006) http://faostat.fao.org/
Favier J (1997) Paris, deux mille ans d'histoire. Fayard, Paris
Fierro A (1996) Histoire et dictionnaire de Paris. Robert Laffont, Paris
Hervieu B (2008) Les orphelins de l'exode rural; Essai sur l'agriculture et les campagnes du XXIe siècle. Éditions de l'Aube
INSEE (2007) http://www.insee.fr/fr/methodes/default.asp?page=definitions/unite-urbaine.htm
Jouanne A (1859) Atlas historique et statistique des chemins de fer français. Hachette, Paris
Kim E, Barles S (2011) The energy consumption of Paris and its supply areas from eighteenth century to present. Reg Environ Change (in press)
Mazoyer M, Roudart L (1998) Histoire des agricultures du monde. Du Néolithique à la crise contemporaine. Seuil, Paris
Ministère du Commerce et de l'Industrie (1897) Annuaire statistique de la France, 1895–1896. Imprimerie Nationale, Paris
Morley NDG (1996) Metropolis and Hinterland. The city of Rome and the Italian economy, 200 B.C.–A.D. 200. Cambridge University Press, Cambridge
Mouchel JM, Boët P, Hubert G, Guerrini M-C (1998) Un bassin et des hommes: une histoire tourmentée. In: Meybeck M, de Marsily G, Fustec E (eds) La Seine en son Bassin. Elsevier, Paris, pp 77–125
Peet JR (1969) The spatial expansion of commercial agriculture in the nineteenth century: a Von Thunen interpretation. Econ Geogr 45:283–301
Steel C (2008) Hungry city. How food shapes our lives. Chatto & Windus, Vintage, London
Sutton M, Howard C, Erisman JW, Billen G, Bleeker A, Grennfelt P, van Grinsven H, Grizzetti B (eds) (2011) The European nitrogen assessment: sources, effects and policy perspectives. Cambridge University Press, Cambridge
Williams C (2009) Where's the remotest place on Earth? New Sci 2704:40–43
1. UPMC-CNRS, UMR Sisyphe, Paris, France
2. LATTS, Marne-la-Vallée, France
Billen, G., Barles, S., Chatzimpiros, P. et al. Reg Environ Change (2012) 12: 325. https://doi.org/10.1007/s10113-011-0244-7
Accepted 24 June 2011
First Online 07 July 2011 | CommonCrawl |
Size selectivity in antibiofilm activity of 3-(diphenylphosphino)propanoic acid coated gold nanomaterials against Gram-positive Staphylococcus aureus and Streptococcus mutans
Dania Ahmed1,
Ayaz Anwar1,2,
Anum Khalid Khan3,
Ayaz Ahmed3,
Muhammad Raza Shah1 &
Naveed Ahmed Khan2
AMB Express volume 7, Article number: 210 (2017)
Biofilm formation by pathogenic bacteria is one of the major threats in hospital-related infections, hence inhibiting and eradicating biofilms has become a primary target for developing new anti-infection approaches. The present study aimed to develop novel antibiofilm agents against two Gram-positive bacteria, Staphylococcus aureus (ATCC 43300) and Streptococcus mutans (ATCC 25175), using gold nanomaterials conjugated with 3-(diphenylphosphino)propionic acid (Au-LPa). Gold nanomaterials of different sizes, small (2–3 nm) and large (9–90 nm, 50 nm average size), were stabilized by LPa via different chemical synthetic strategies. The nanomaterials were fully characterized using atomic force microscopy (AFM), transmission electron microscopy, ultraviolet–visible absorption spectroscopy and Fourier transform infrared spectroscopy. The antibiofilm activity of Au-LPa nanomaterials was tested, alongside LPa alone and unprotected gold nanomaterials, against both biofilm-producing bacteria. The results showed that LPa alone did not inhibit biofilm formation to a significant extent below 0.025 mM, while conjugation with gold nanomaterials enhanced the antibiofilm potential manifold against both strains. Moreover, it was also observed that the antibiofilm potency of the Au-LPa nanomaterials varies with the size of the nanomaterials. AFM analysis of the biofilms further complemented the assay results and provided morphological insight into the antibiofilm action of Au-LPa nanomaterials.
Nanotechnology offers exceptional approaches towards controlling a variety of pivotal biological processes and is believed to influence several biological systems, since many of them also operate at the nanometre scale. Applications of nanotechnology in medicine are immense and have paved the way for the development of new and effective medical treatments (i.e., nanomedicine) (Emerich and Thanos 2003). Control of size and structure at the nanoscale gives nano-medicines advantages over conventional therapies through targeted drug delivery, enhanced bioavailability and bio-conjugation (Geethalakshmi and Sarada 2013).
Microorganisms in general possess an extraordinary ability to settle and survive in precisely programmed regions of their hosts and to disturb normal biological function. Meanwhile, bacterial infections caused by emerging multidrug-resistant (MDR) strains, together with the lack of development of new and effective drugs, represent a devastating problem in healthcare. MDR bacteria contribute to morbidity and mortality rates in many of the common infectious diseases (Franci et al. 2015). Staphylococcus aureus is known to exfoliate the epidermal layers and localizes within the skin causing wound infections, while it can also penetrate the lungs, bloodstream, joints and bones in some extreme cases (Chwalibog et al. 2010). Production of extracellular polysaccharides and acids from dietary sugar and adhesion to dental enamel are amongst the most clinically important features of Streptococcus mutans, which causes oral cavities and tooth decay (Choi et al. 2001).
Biofilms are microbes bound together in an extracellular matrix of polymeric substances. They are simply characterized by bacterial cells irreversibly attached to a surface or to each other (Davey and O'toole 2000). The morphology and physiology of biofilm-producing microorganisms make them up to 1000 times more resistant to antimicrobial agents than their planktonic counterparts (Mah et al. 2003). Various pathogens tend to produce biofilms on food and/or storage surfaces, while some pathogenic bacteria such as methicillin-resistant Staphylococcus aureus (MRSA), Escherichia coli, Klebsiella pneumoniae and Pseudomonas species contribute substantially to nosocomial infections in hospitals on implanted medical devices, especially catheter-associated urinary tract infections (CAUTIs) (Bryers 2008). According to reports from the National Institutes of Health (NIH), biofilms are suggested to be involved in over 60% of microbial infections in the developed world alone (Lewis 2001). It is therefore imperative to develop novel antibiofilm agents for the inhibition and eradication of already formed biofilms. Recently, metal and metal oxide nanoparticles have been found to be a useful alternative to inhibit microbial growth and prevent biofilm formation (Dhandapani et al. 2012; Mu et al. 2016). Several antibiotics coated with metallic nanoparticles have recently been reported to produce interesting antibiofilm and antibacterial effects (Ahmed et al. 2016; Singh et al. 2014).
Inorganic nanoparticles are well known to interact with microorganisms and thus act as antibacterial and antifungal agents (Rai et al. 2009; Taglietti et al. 2014). Gold nanoparticles have presented tremendous applications in almost every field of science especially in biology. Grace and Pandian have reported the use of gold nanoparticles as a carrier of aminoglycosidic antibiotics like streptomycin, gentamycin and neomycin. Their results demonstrated high efficacy of gold nanocomposites of these drugs against various Gram-negative and Gram-positive bacteria (Grace and Pandian 2007). Nanoparticles interaction with bacteria is dependent on various factors including the size, morphology and coating of the nanoparticles. The antibacterial activity of the nanoparticles has been found to alter vastly with modification in size of nanoparticles (Boda et al. 2015; Martinez-Castanon et al. 2008).
The aim of this study was to determine the influence of particle size of gold nanomaterials capped with 3-(diphenylphosphino)propionic acid on bacterial biofilms of S. aureus (ATCC 43300) and S. mutans (ATCC 25175). AFM studies were also carried out to study the morphological changes occurred after treatment of small and large size gold nanomaterials.
All chemicals and solvents were used as purchased without any purification or pretreatment unless stated otherwise. Tetrachloroauric acid (HAuCl4), chloro(triphenylphosphine)gold(I) {Au(PPh3)Cl}, borane-t-butylamine complex (BTBC), sodium borohydride (NaBH4) and ligand 3-(diphenylphosphino)propionic acid (LPa) were purchased from Sigma-Aldrich (St. Louis, USA). All the solvents used were either HPLC or analytical grade, and were de-aerated according to standard protocols prior to use.
Synthesis of Au-LPa large
Synthesis of Au-LPa nanoparticles involves the reduction of HAuCl4 by NaBH4 in the presence of LPa as stabilizer. Briefly, NaBH4 (0.015 mg in 0.1 mL deionized water) was added to a vigorously stirred solution of HAuCl4 (3.94 mg in 95 mL deionized water) and LPa (0.143 mg in 5 mL deionized water and 0.5 mL of methanol) at room temperature. A reddish pink solution was formed, showing a surface plasmon resonance (SPR) absorption peak at 524 nm, which confirms the formation of gold nanoparticles. The colloidal suspension was centrifuged at 10,000 rpm to separate the nanoparticles from unreacted reagents, especially the toxic reducing agent and the salt produced. The nanoparticle pellet obtained after centrifugation was then re-suspended in water.
Synthesis of Au-LPa small
The synthetic procedure used for Au-LPa nanoclusters was as previously described (Woodworth et al. 2016). Briefly, a mixture of 1 mol equiv. of Au(PPh3)Cl and the phosphine ligand LPa was taken in de-aerated methanol in an oven-dried Schlenk flask under argon atmosphere. Under vigorous stirring, BTBC was added at 5 mol equiv. relative to the gold precursor. The reaction mixture turned orange within 1 h. Stirring was continued for a further 4 h to ensure the reaction was complete. The products obtained were characterized by UV–visible spectrophotometry and TEM. The nanoclusters were also subjected to the same post-synthesis treatment as described above.
Synthesis of gold nanomaterials control (unprotected)
Both the small and large gold nanomaterials used alone as controls were synthesized as described above, at similar concentrations, but in the absence of the stabilizing agent LPa.
Biofilm inhibition potential of compounds
Antibiofilm activities of the Au-LPa nanomaterials were screened against the two bacterial strains using the crystal violet method as described previously (Ahmed et al. 2016). Biofilm inhibition was evaluated using the following equation:
$$ \%\ \text{biofilm inhibition} = \frac{\text{O.D. in control} - \text{O.D. of test}}{\text{O.D. in control}} \times 100 $$
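A minimal helper implementing this formula is sketched below; the O.D. values in the usage line are illustrative only, not measured data from the study.

```python
def percent_biofilm_inhibition(od_control, od_test):
    """Percent biofilm inhibition from crystal violet optical densities (O.D.)."""
    return (od_control - od_test) / od_control * 100.0

# e.g. a hypothetical untreated well at O.D. 0.80 vs a treated well at 0.04
print(percent_biofilm_inhibition(0.80, 0.04))  # ~95
```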
Biofilms imaging via atomic force microscope (AFM)
The antibiofilm potential of the test samples against S. mutans (ATCC 25175) and S. aureus (ATCC 43300) was further studied by atomic force microscopic image analysis. Nanomaterials at a concentration of 0.0025 mM were used for AFM analysis. Biofilm formation was carried out in a 24-well plate containing 8 mm circular cover slips. After incubation, the cover slips were heat fixed and scanned by atomic force microscope (Agilent 5500). ACAFM mode was used with a triangular silicon nitride soft cantilever (Veeco, model MLCT-AUHW) for AFM analysis.
Characterization of Au-LPa nanomaterials
Gold nanomaterial formation by reduction of the metal ion in the presence of LPa was determined using UV–visible spectroscopy (Fig. 1). Au-LPa nanoparticles revealed an absorbance maximum at 524 nm, which is the characteristic absorbance of an un-aggregated colloidal suspension of gold nanoparticles. The orange color and the absorption maximum at lower wavelength of the small gold nanomaterials correspond to a smaller particle size, particularly less than 2 nm, in agreement with our previously reported method for the size selective synthesis of gold nanoclusters (Woodworth et al. 2016).
UV–visible spectrum of Au-LPa nanomaterials. a Au-LPa large show surface plasmon resonance band at 524 nm. b Au-LPa small show characteristic absorption signal at 418 nm
Size determination of the Au-LPa nanomaterials
Atomic force microscopic studies of Au-LPa large were performed in tapping mode in order to determine the particle size and morphology of these nanoparticles. As depicted in Fig. 2, the nanoparticles were polydisperse, spherical in shape, and their size ranged from 9 to 90 nm with an average of 50 nm. Au-LPa small were prepared by our previously optimized size selective synthesis, ensuring a particle size in the range of 2–3 nm, as shown in the TEM image (Fig. 2c).
Size and morphology of Au-LPa nanomaterials. a AFM image of Au-LPa large. b Size distribution histogram of Au-LPa large. c TEM image of Au-LPa small
FT-IR analysis
FT-IR analysis of Au-LPa and LPa alone was also carried out in order to determine which functionalities of the ligand are responsible for the stabilization of the nanomaterials (Fig. 3). Both the phosphine and carboxylic acid functionalities were found to be responsible for capping the gold nanomaterials. The FT-IR spectrum of LPa shows four distinct absorption bands: one at 3406 cm−1, corresponding to the OH group, and another at 2922 cm−1, due to CH stretching vibration. The carbonyl stretching vibration appears at 1720 cm−1 and the phosphorus–carbon bond vibration at 1407 cm−1. Following nanoparticle formation, distinct changes were visible in the FT-IR spectrum. The absorption band at 3406 cm−1 shifted to 3419 cm−1, and the band due to the carbonyl group vibration moved to 1633 cm−1. The band arising from phosphorus–carbon bond stretching shifted to 1384 cm−1 and became more intense, which suggests a stabilizing interaction of the phosphine group with the gold nanomaterials. These results are consistent with previous reports, as phosphines and hydroxyl groups have been extensively used for the preparation of metal nanoparticles with narrow size distribution and high stability (Wu et al. 2006).
FT-IR spectra of (a) ligand LPa, (b) Au-LPa large nanomaterials
Stability of Au-LPa nanomaterials
The effect of pH on the stability of Au-LPa was also evaluated to test the robustness of the synthetic procedure. Figure 4a shows that the nanoparticles were stable over a pH range from 1 to 10, as evident from the UV–visible spectra: changing the pH affected neither the intensity nor the position of the nanoparticle absorption band, and no aggregation was noticed over the whole pH range.
a UV–Visible spectra showing no effect of pH on Au-LPa nanomaterials stability. b UV–visible spectrum for temperature tolerance of Au-LPa
The effect of heat on the stability of Au-LPa was also evaluated, as shown in Fig. 4b, and the nanoparticles were found to be stable upon heating up to 100 °C. However, a slight change in the peak intensity was observed, while the peak position remained the same with a little peak broadening, which may correspond to a small proportion of particle aggregation. Hence, Au-LPa nanomaterials tolerate alterations in pH and temperature, which suggests that they are quite stable and can be utilized in a wide variety of applications.
Antibiofilm activity
Percent inhibition was determined for the LPa solution and for Au-LPa large and small against S. aureus (ATCC 43300) and S. mutans (ATCC 25175) biofilms using the crystal violet method. The biofilms were treated with four concentrations of all test samples (0.025, 0.0125, 0.00625 and 0.003125 mM) for 24 h. Results of the antibiofilm assay are presented in Fig. 5. The organic LPa ligand did not exert significant antibiofilm activity; interestingly, however, when it was conjugated with gold nanomaterials, significant antibiofilm effects were observed. Moreover, Au-LPa small were found to be more efficient in inhibiting the biofilms of both tested Gram-positive bacteria than their large counterparts. Proper controls for both small and large gold nanomaterials were also tested to rule out false positive results, but both controls were found to be inactive or only slightly active at similar concentrations compared to Au-LPa.
Percent biofilm inhibition of Au-LPa (small and large size) against (a, b) S. aureus and (c, d) S. mutans. Statistical significance of the data is represented as *p < 0.05, **p < 0.01 and ***p < 0.001, where * represents significance compared to LPa and # compared to gold nanomaterials alone (small or large size)
Atomic force microscopy analysis for the imaging of biofilms
Biofilm inhibitory potential of the synthesized gold nanomaterials (small and large) of LPa was further complimented with atomic force microscopic analysis. Biofilm inhibition was observed after treating the cells with 0.0025 mM concentration of the synthesized nanoparticle, LPa, gold nanomaterials alone (small and large) and compared with controls. In the case of S. aureus Au-LPa small completely inhibited biofilm formation, Au-LPa large showed partial inhibition whereas gold nanomaterials (small and large) and LPa alone failed to inhibit biofilm formation (Fig. 6a–f). On the contrary, Au-LPa small and large completely inhibited the biofilm formation of S. mutans as compared to gold nanomaterials (small and large) and LPa alone (Fig. 7a–f).
AFM topographic images of S. aureus biofilms; a biofilm untreated; b Au-LPa small treated; c Au-LPa large treated; d gold nanomaterials large control treated; e gold nanomaterials small control treated; f LPa alone treated
AFM topographic images of S. mutans biofilms; a biofilm untreated; b Au-LPa small treated; c Au-LPa large treated; d gold nanomaterials large control treated; e gold nanomaterials small control treated; f LPa alone treated
After reduction of the gold salts with the corresponding reducing agents in the presence of LPa, Au-LPa nanomaterials were fully characterized by UV–visible, FT-IR, AFM and TEM. Both large and small Au-LPa nanomaterials revealed characteristic surface plasmon resonance bands corresponding to their sizes, as supported by previous literature.
After the successful synthesis and characterization of both Au-LPa nanomaterials, they were subjected to the antibiofilm assay via the crystal violet method against S. aureus and S. mutans. The results suggest that LPa alone did not inhibit the formation of the biofilms of either bacterium to a significant level at the tested concentrations. In particular, it had a negative effect on the inhibition of S. mutans biofilms, while only minimal positive effects on S. aureus biofilms were observed (21.18% inhibition, and only at the high dose of 0.025 mM). Au-LPa nanomaterials, on the other hand, showed a high tendency to inhibit the biofilm formation of both bacteria. Au-LPa small displayed 95–96% inhibition against both bacteria even at the lowest concentration (0.003125 mM), with no change upon increasing the sample concentration. Contrary to Au-LPa small, Au-LPa large inhibited S. mutans to a great extent even at lower concentrations but showed a dose-dependent response towards inhibiting S. aureus biofilms (54–84% inhibition at 0.003125–0.025 mM). To rule out false positive results from Au-LPa nanomaterials, the effect of the gold controls (unprotected) was also evaluated using a similar protocol and parameters. Both gold nanomaterial controls were found to be ineffective at the concentration levels where Au-LPa nanomaterials showed significant inhibition; however, at higher doses both controls showed positive % inhibition against both bacteria. These results can be attributed to the toxicity of the borane reducing agents, whose concentration is much higher in the control samples at higher doses, since they were used as prepared. Hence such an effect was expected and completely justifiable.
The higher activity and lower selectivity of Au-LPa small compared to Au-LPa large may be attributed to their smaller size. The smaller size of the nanoclusters ensures the delivery of the therapeutic agent inside the cells by overcoming membrane barriers more effectively, and as a result leads to better activity (Martinez-Castanon et al. 2008).
AFM analysis of the antibiofilm assay further provided a clear picture of the mode of action of these test samples. After biofilm formation, the AFM images of the control samples indicated a prominent and integrated biofilm surface topology for both tested bacterial strains, i.e. S. aureus and S. mutans (Figs. 6a, 7a respectively). Figures 6b and 7b show that the biofilms of both bacteria treated with Au-LPa small were completely diminished, as was expected from the high % inhibition; hence no signs of bacteria were found in the Au-LPa small treated samples. The Au-LPa large treated samples displayed reduced biofilm compared to the controls, but a significant presence of partially depleted biofilm can be observed for S. aureus in Fig. 6c, as large nanomaterials are less able to penetrate the compact biofilm. However, Au-LPa large selectively disintegrated S. mutans biofilms as compared to S. aureus (Fig. 7c), as anticipated from the assay results (Fig. 5). It has been suggested in the recent literature that the antibiofilm activity of nanoparticles might result from nanoparticles reaching the biofilm residents through water channels (Stewart 2003). Biofilm samples were then also treated with the unprotected gold nanomaterials (both large and small) to check their antibiofilm effect and validate the results for the Au-LPa nanomaterials. Figures 6d, e and 7d, e represent the biofilm images after treatment with the gold controls, and it is clearly evident from these images that at lower concentrations the gold controls have no antibiofilm potential. The same was the case with LPa alone, as it induced no damage to the compactness of the biofilms. Based on the above discussion, it is therefore safe to conclude that conjugation with gold enhanced the antibiofilm efficacy of LPa, and that small Au-LPa nanomaterials are more effective against the biofilms of both Gram-positive bacteria than the larger nanoparticles.
The potential of an antimicrobial agent to permeate and disperse biofilms is of great importance, and nanomaterials have recently offered a feasible way to overcome this problem. Our results demonstrate that Au-LPa nanomaterials can be useful candidates to disrupt Gram-positive bacterial biofilms, with several advantages over the antibiotics approach. Phosphines are a newer class of compounds which have not been used extensively for antibacterial purposes, hence there is less chance of bacterial resistance. However, phosphine gas has been used in fumigation for pest control purposes due to its known toxicity.
In conclusion, gold nanomaterials of different sizes (2–3 nm and 50 nm) coated with LPa were prepared via two different chemical methodologies. Au-LPa nanomaterials showed enhanced in vitro inhibition of biofilms of the Gram-positive bacteria S. aureus (ATCC 43300) and S. mutans (ATCC 25175) compared to the same concentration of LPa alone (0.0025 mM). This clearly indicates that conjugating gold nanomaterials with labile phosphine ligands can produce antibiofilm effects. The results of the antibiofilm assay and AFM analysis demonstrated that Au-LPa small nanomaterials inhibited the biofilms more efficiently than Au-LPa large. Given the importance of Gram-positive bacterial biofilms in human pathogenesis and their antibacterial resistance, this approach provides a candidate of interest for applications in nanobiotechnology. Further studies, including evaluation of the toxicity profile of Au-LPa nanomaterials and their mechanism of action, are planned as future work.
LPa:
3-(diphenylphosphino)propionic acid
Au-LPa:
gold nanomaterials conjugated with 3-(diphenylphosphino)propionic acid
Au (S):
gold nanoparticles small
Au (L):
gold nanoparticles large
AFM:
atomic force microscope
TEM:
transmission electron microscope
UV–Vis:
ultraviolet–visible
FT-IR:
Fourier transform infrared
BTBC:
borane-t-butylamine complex
Ahmed A, Khan AK, Anwar A, Ali SA, Shah MR (2016) Biofilm inhibitory effect of chlorhexidine conjugated gold nanoparticles against Klebsiella pneumoniae. Microb Pathog 98:50–56
Boda SK, Broda J, Schiefer F, Weber-Heynemann J, Hoss M, Simon U, Basu B, Jahnen-Dechent W (2015) Cytotoxicity of ultrasmall gold nanoparticles on planktonic and biofilm encapsulated Gram-positive Staphylococci. Small 11:3183–3193
Bryers JD (2008) Medical biofilms. Biotechnol Bioeng 100:1–18
Choi B-K, Kim K-Y, Yoo Y-J, Oh S-J, Choi J-H, Kim C-Y (2001) In vitro antimicrobial activity of a chitooligosaccharide mixture against Actinobacillus actinomycetemcomitans and Streptococcus mutans. Int J Antimicrob Agents 18:553–557
Chwalibog A, Sawosz E, Hotowy A, Szeliga J, Mitura S, Mitura K, Grodzik M, Orlowski P, Sokolowska A (2010) Visualization of interaction between inorganic nanoparticles and bacteria or fungi. Int J Nanomed 5:1085–1094
Davey ME, O'toole GA (2000) Microbial biofilms: from ecology to molecular genetics. Microbiol Mol Biol Rev 64(4):847–867
Dhandapani P, Maruthamuthu S, Rajagopal G (2012) Bio-mediated synthesis of TiO2 nanoparticles and its photocatalytic effect on aquatic biofilm. J Photochem Photobiol B Biol 110:43–49
Emerich DF, Thanos CG (2003) Nanotechnology and medicine. Expert Opin Biol Ther 3:655–663
Franci G, Falanga A, Galdiero S, Palomba L, Rai M, Morelli G, Galdiero M (2015) Silver nanoparticles as potential antibacterial agents. Molecules 20:8856–8874
Geethalakshmi R, Sarada D (2013) Characterization and antimicrobial activity of gold and silver nanoparticles synthesized using saponin isolated from Trianthema decandra L. Ind Crops Prod 51:107–115
Grace AN, Pandian K (2007) Antibacterial efficacy of aminoglycosidic antibiotics protected gold nanoparticles—a brief study. Colloids Surf A Physicochem Eng Asp 297:63–70
Lewis K (2001) Riddle of biofilm resistance. Antimicrob Agents Chemother 45:999–1007
Mah T-F, Pitts B, Pellock B, Walker GC, Stewart PS, O'Toole GA (2003) A genetic basis for Pseudomonas aeruginosa biofilm antibiotic resistance. Nature 426:306–310
Martinez-Castanon G, Nino-Martinez N, Martinez-Gutierrez F, Martinez-Mendoza J, Ruiz F (2008) Synthesis and antibacterial activity of silver nanoparticles with different sizes. J Nanopart Res 10:1343–1348
Mu H, Tang J, Liu Q, Sun C, Wang T, Duan J (2016) Potent antibacterial nanoparticles against biofilm and intracellular bacteria. Sci Rep 6:18877
Rai M, Yadav A, Gade A (2009) Silver nanoparticles as a new generation of antimicrobials. Biotechn Adv 27:76–83
Singh B, Vuddanda PR, Vijayakumar M, Kumar V, Saxena PS, Singh S (2014) Cefuroxime axetil loaded solid lipid nanoparticles for enhanced activity against S. aureus biofilm. Colloids Surf B Biointerfaces 121:92–98
Stewart PS (2003) Diffusion in biofilms. J Bacteriol 185:1485–1491
Taglietti A, Arciola CR, D'Agostino A, Dacarro G, Montanaro L, Campoccia D, Cucca L, Vercellino M, Poggi A, Pallavicini P (2014) Antibiofilm activity of a monolayer of silver nanoparticles anchored to an amino-silanized glass surface. Biomaterials 35:1779–1788
Woodworth PH, Bertino MF, Ahmed A, Anwar A, Shah MR, Wijesinghe DS, Pettibone JM (2016) Synthesis of gold clusters with flexible and rigid diphosphine ligands and the effect of spacer and solvent on the size selectivity. Nano-Struct Nano-Objects 7:32–40
Wu L, Li B-L, Huang Y-Y, Zhou H-F, He Y-M, Fan Q-H (2006) Phosphine dendrimer-stabilized palladium nanoparticles, a highly active and recyclable catalyst for the Suzuki–Miyaura reaction and hydrogenation. Org Lett 8:3605–3608
DA and AA synthesized and characterized the materials, AKK performed the microbiological experiments, AA and MRS developed the idea, and NAK contributed to the writing of the article and provided guidance for the study. All authors read and approved the final manuscript.
The authors acknowledge the Higher Education Commission (HEC) of Pakistan and Sunway University for financial support.
All the relevant data and materials are presented in the manuscript, and there are no additional files.
Ethical approval and consent to participate
This article does not contain any studies with human participants or animals performed by authors.
The authors are thankful to the Higher Education Commission (HEC) of Pakistan and Sunway University for funding.
Dania Ahmed and Ayaz Anwar contributed equally to this work
H.E.J. Research Institute of Chemistry, International Center for Chemical and Biological Sciences, University of Karachi, Karachi, 75270, Pakistan
Dania Ahmed, Ayaz Anwar & Muhammad Raza Shah
Department of Biological Sciences, School of Science and Technology, Sunway University, 47500, Subang Jaya, Selangor, Malaysia
Ayaz Anwar & Naveed Ahmed Khan
Dr. Panjwani Center for Molecular Medicine and Drug Research, International Center for Chemical and Biological Sciences, University of Karachi, Karachi, 75270, Pakistan
Anum Khalid Khan & Ayaz Ahmed
Correspondence to Ayaz Anwar or Ayaz Ahmed.
Ahmed, D., Anwar, A., Khan, A.K. et al. Size selectivity in antibiofilm activity of 3-(diphenylphosphino)propanoic acid coated gold nanomaterials against Gram-positive Staphylococcus aureus and Streptococcus mutans. AMB Expr 7, 210 (2017) doi:10.1186/s13568-017-0515-x
Gold nanomaterials
3-(diphenylphosphino)propanoic acid
S. mutans | CommonCrawl |
\begin{definition}[Definition:Eisenstein Integer]
An '''Eisenstein integer''' is a complex number of the form
:$a + b \omega$
where $a$ and $b$ are both integers and:
:$\omega = e^{2 \pi i / 3} = \dfrac 1 2 \paren {i \sqrt 3 - 1}$
that is, $\omega$ is a primitive (complex) cube root of unity.
The set of all '''Eisenstein integers''' can be denoted $\Z \sqbrk \omega$:
:$\Z \sqbrk \omega = \set {a + b \omega: a, b \in \Z}$
\end{definition} | ProofWiki |
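As a small computational aside (not part of the ProofWiki entry), arithmetic with Eisenstein integers reduces to integer arithmetic on the pairs $(a, b)$ via the relation $\omega^2 = -1 - \omega$. The Python sketch below is one illustrative way to implement this.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Eisenstein:
    """a + b*omega with omega = exp(2*pi*i/3), so omega**2 = -1 - omega."""
    a: int
    b: int

    def __add__(self, other):
        return Eisenstein(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a + b w)(c + d w) = ac + (ad + bc) w + bd w^2  and  w^2 = -1 - w
        a, b, c, d = self.a, self.b, other.a, other.b
        return Eisenstein(a * c - b * d, a * d + b * c - b * d)

    def norm(self):
        # |a + b w|^2 = a^2 - a*b + b^2, always a non-negative integer
        return self.a ** 2 - self.a * self.b + self.b ** 2

w = Eisenstein(0, 1)
print(w * w * w)                 # Eisenstein(a=1, b=0): omega^3 = 1
print(Eisenstein(2, 1).norm())   # 3
```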
December 2019, 39(12): 6877-6912. doi: 10.3934/dcds.2019236
Soap films with gravity and almost-minimal surfaces
Francesco Maggi 1, Salvatore Stuvard 1, and Antonello Scardicchio 2
Department of Mathematics, The University of Texas at Austin, 2515 Speedway STOP C1200, Austin, TX 78712, USA
International Centre for Theoretical Physics, Strada Costiera 11, Trieste 34151, Italy
Received July 2018 Revised January 2019 Published June 2019
Fund Project: F. M. and S. S. have been supported by NSF Grants DMS-1565354, DMS-1361122 and DMS-1262411.
Motivated by the study of the equilibrium equations for a soap film hanging from a wire frame, we prove a compactness theorem for surfaces with asymptotically vanishing mean curvature and fixed or converging boundaries. In particular, we obtain sufficient geometric conditions for the minimal surfaces spanned by a given boundary to represent all the possible limits of sequences of almost-minimal surfaces. Finally, we provide some sharp quantitative estimates on the distance of an almost-minimal surface from its limit minimal surface.
Keywords: Minimal surfaces, Plateau problem, capillarity theory, integral currents, integral varifolds.
Mathematics Subject Classification: Primary: 49Q05, 49Q15; Secondary: 53A10.
Citation: Francesco Maggi, Salvatore Stuvard, Antonello Scardicchio. Soap films with gravity and almost-minimal surfaces. Discrete & Continuous Dynamical Systems - A, 2019, 39 (12) : 6877-6912. doi: 10.3934/dcds.2019236
W. K. Allard, On the first variation of a varifold, Ann. Math., 95 (1972), 417-491. doi: 10.2307/1970868. Google Scholar
W. K. Allard, On the first variation of a varifold: boundary behaviour, Ann. Math., 101 (1975), 418-446. doi: 10.2307/1970934. Google Scholar
S. Amato, G. Bellettini and M. Paolini, Constrained BV functions on covering spaces for minimal networks and Plateau's type problems, Adv. Calc. Var., 10 (2017), 25-47. doi: 10.1515/acv-2015-0021. Google Scholar
H. Brezis and J.-M. Coron, Multiple solutions of $H$-systems and Rellich's conjecture, Comm. Pure Appl. Math., 37 (1984), 149-187. doi: 10.1002/cpa.3160370202. Google Scholar
M. Cicalese, G. P. Leonardi and F. Maggi, Improved convergence theorems for bubble clusters I. The planar case, Indiana Univ. Math. J., 65 (2016), 1979-2050. doi: 10.1512/iumj.2016.65.5932. Google Scholar
G. Ciraolo and F. Maggi, On the shape of compact hypersurfaces with almost-constant mean curvature, Comm. Pure Appl. Math., 70 (2017), 665-716. doi: 10.1002/cpa.21683. Google Scholar
C. Cohen, B. Darbois Texier, E. Reyssat, J. H. Snoeijer, D. Quéré and C. Clanet, On the shape of giant soap bubbles, Proceedings of the National Academy of Sciences, 114 (2017), 2515-2519. doi: 10.1073/pnas.1616904114. Google Scholar
G. David, Should we solve Plateau's problem again?, in Advances in analysis: The legacy of Elias M. Stein, Princeton Math. Ser., 50, Princeton Univ. Press, Princeton, NJ, 2014,108-145. Google Scholar
P.-G. de Gennes, F. Brochard-Wyart and D. Quéré, Capillarity and Wetting Phenomena, Translated by A. Reisinger, Springer, 2003. doi: 10.1007/978-0-387-21656-0. Google Scholar
C. De Lellis, A. De Rosa and F. Ghiraldin, A direct approach to the anisotropic plateau problem, Adv. Calc. Var. doi: 10.1515/acv-2016-0057. Google Scholar
C. De Lellis, F. Ghiraldin and F. Maggi, A direct approach to Plateau's problem, J. Eur. Math. Soc. (JEMS), 19 (2017), 2219-2240. doi: 10.4171/JEMS/716. Google Scholar
C. De Lellis and J. Ramic, Min-max theory for minimal hypersurfaces with boundary, preprint, arXiv: 1611.00926, to appear in Jour. Ann. Inst. Fourier. Google Scholar
G. De Philippis, A. De Rosa and F. Ghiraldin, A direct approach to Plateau's problem in any codimension, Adv. Math., 288 (2016), 59-80. doi: 10.1016/j.aim.2015.10.007. Google Scholar
G. De Philippis and F. Maggi, Sharp stability inequalities for the Plateau problem, J. Differential Geom., 96 (2014), 399-456. doi: 10.4310/jdg/1395321846. Google Scholar
A. De Rosa, Minimization of anisotropic energies in classes of rectifiable varifolds, SIAM J. Math. Anal., 50 (2018), 162-181. doi: 10.1137/17M1112479. Google Scholar
R. Defay and I. Prigogine, Surface Tension and Adsorption, Translated by D. G. Everett, John Wiley and sons, Inc., New York, NY, 1966. Google Scholar
M. G. Delgadino and F. Maggi, Alexandrov's theorem revisited, preprint, arXiv: 1711.07690. doi: 10.2140/apde.2019.12.1613. Google Scholar
M. G. Delgadino, F. Maggi, C. Mihaila and R. Neumayer, Bubbling with $L^2$-almost constant mean curvature and an Alexandrov-type theorem for crystals, Arch. Ration. Mech. Anal., 230 (2018), 1131-1177. doi: 10.1007/s00205-018-1267-8. Google Scholar
F. Duzaar and M. Fuchs, On the existence of integral currents with prescribed mean curvature vector, Manuscripta Math., 67 (1990), 41-67. doi: 10.1007/BF02568422. Google Scholar
F. Duzaar and M. Fuchs, A general existence theorem for integral currents with prescribed mean curvature form, Boll. Un. Mat. Ital. B (7), 6 (1992), 901-912. Google Scholar
Y. Fang and S. Kolasinski, Existence of solutions to a general geometric elliptic variational problem, Calc. Var. Partial Differential Equations, 57 (2018), 91. doi: 10.1007/s00526-018-1348-4. Google Scholar
H. Federer and W. H. Fleming, Normal and integral currents, Ann. of Math. (2), 72 (1960), 458-520. doi: 10.2307/1970227. Google Scholar
A. Figalli and F. Maggi, On the shape of liquid drops and crystals in the small mass regime, Arch. Rat. Mech. Anal., 201 (2011), 143-207. doi: 10.1007/s00205-010-0383-x. Google Scholar
C. F. Gauss, Principia generalia theoriae figurae fluidorum, Comment. Soc. Regiae Scient. Gottingensis Rec. Google Scholar
D. Gilbarg and N. S. Trudinger, Elliptic partial differential equations of second order, Springer, Berlin; New York, 1998. Google Scholar
G. G. Giusteri, L. Lussardi and E. Fried, Solution of the Kirchhoff-Plateau problem, J. Nonlinear Sci., 27 (2017), 1043-1063. doi: 10.1007/s00332-017-9359-4. Google Scholar
E. Giusti, Direct Methods in the Calculus of Variations, World Scientific Publishing Co., Inc., River Edge, NJ, 2003. doi: 10.1142/9789812795557. Google Scholar
J. Harrison, On Plateau's problem for soap films with a bound on energy, J. Geom. Anal., 14 (2004), 319-329. doi: 10.1007/BF02922075. Google Scholar
J. Harrison and H. Pugh, Existence and soap film regularity of solutions to Plateau's problem, Adv. Calc. Var., 9 (2016), 357-394. doi: 10.1515/acv-2015-0023. Google Scholar
G. Huisken, Nonparametric mean curvature evolution with boundary conditions, J. Differential Equations, 77 (1989), 369-378. doi: 10.1016/0022-0396(89)90149-6. Google Scholar
B. Krummel and F. Maggi, Isoperimetry with upper mean curvature bounds and sharp stability estimates, Calc. Var. Partial Differential Equations, 56 (2017), Art. 53, 43. doi: 10.1007/s00526-017-1139-3. Google Scholar
P. S. Laplace, Mécanique céleste, 1806, Suppl. 10th volume. Google Scholar
G. P. Leonardi and F. Maggi, Improved convergence theorems for bubble clusters II. The three-dimensional case, Indiana Univ. Math. J., 66 (2017), 559-608. doi: 10.1512/iumj.2017.66.6016. Google Scholar
R. Schätzle, Quadratic tilt-excess decay and strong maximum principle for varifolds, Ann. Sc. Norm. Super. Pisa Cl. Sci. (5), 3 (2004), 171-231. Google Scholar
L. Simon, Lectures on Geometric Measure Theory, Proceedings of the Centre for Mathematical Analysis, 3, Australian National University, Centre for Mathematical Analysis, Canberra, 1983. Google Scholar
J. Spruck, Interior gradient estimates and existence theorems for constant mean curvature graphs in $M^n\times\bf R$, Pure Appl. Math. Q., 3 (2007), 785-800. doi: 10.4310/PAMQ.2007.v3.n3.a6. Google Scholar
M. Struwe, A global compactness result for elliptic boundary value problems involving limiting nonlinearities, Math. Z., 187 (1984), 511-517. doi: 10.1007/BF01174186. Google Scholar
B. White, Currents and flat chains associated to varifolds, with an application to mean curvature flow, Duke Math. J., 148 (2009), 41-62. doi: 10.1215/00127094-2009-019. Google Scholar
T. Young, An essay on the cohesion of fluids, Philos. Trans. Roy. Soc. London, 65–87. doi: 10.1098/rspl.1800.0095. Google Scholar
Figure 1. On the left, a boundary $ \Gamma $, consisting of three circles, that is accessible from infinity. The acute wedges realizing the inclusions 3 are depicted by dashed lines. Notice that it is not necessary that $ \Gamma $ is contained into a convex set, or into a mean convex set, for the condition to hold. On the right, another set of circles defining a boundary $ \Gamma $ which does not satisfy accessibility from infinity. Indeed, there is no way to touch the smaller circle with an acute wedge containing the larger ones
Figure 2. The derivation of 12, after [16, Section Ⅰ.4]
Figure 3. Using Gauss' capillarity energy to formulate Plateau's problem. Minimization of $ \sigma\,\mathcal{H}^2(M) $ among surfaces with $ \partial M = \Gamma $ is replaced by minimizing the capillarity energy among regions contained in the complement of a $ \delta $-neighborhood of $ \Gamma $. Equilibrium configurations with volume $ \varepsilon\ll\delta\,\mathcal{H}^2(S)\ll1 $ arise as normal neighborhoods of minimal surfaces spanned by $ \Gamma $. Here $ S $ denotes the boundary of $ E $ away from the wire frame
Figure 4. When Γ consists of two parallel disks there are, in addition to the disconnected surface defined by two disks, four minimal surfaces, two of them singular, all composed by joining pieces of catenoids
Figure 5. The construction described in Example 7
Figure 6. Bubbling is possible even when $\Gamma$ is accessible from infinity if a weak notion of deficit is used. Here $M_j$ is the surface of revolution obtained by rotating the one-dimensional profile on the right, $B_{\varepsilon _j}(\Gamma_1)$ denotes an $\varepsilon _j$-neighborhood of the circle $\Gamma_1$, and $M_j^*$ is the part of $M_j$ lying outside $B_{\varepsilon _j}(\Gamma_1)$. We take $\varepsilon _j$ such that $M_j$ intersects $\partial B_{\varepsilon _j}(\Gamma_1)$ in three circles, and so that the $H_{M_j}$ is uniformly small on $M_j\setminus M_j^*$. The limit surface counts one copy of $K$, and two copies of the disk filling $\Gamma_1$
| CommonCrawl |
\begin{definition}[Definition:Measure Space]
A '''measure space''' is a triple $\struct {X, \Sigma, \mu}$ where:
:$X$ is a set
:$\Sigma$ is a $\sigma$-algebra on $X$
:$\mu$ is a measure on $\Sigma$.
Thus it is a measurable space $\struct {X, \Sigma}$ with a measure.
\end{definition} | ProofWiki |
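For illustration (a standard example, not part of the entry above), one concrete measure space is given by the counting measure on the natural numbers:
:$\struct {\N, \Sigma, \mu}$
where $\Sigma$ is the set of all subsets of $\N$ (a $\sigma$-algebra) and $\mu \paren A$ is the number of elements of $A$, with $\mu \paren A = +\infty$ when $A$ is infinite.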
\begin{document}
\title{\textbf{The number of ends in the uniform spanning tree for recurrent unimodular random graphs.}}
\begin{abstract}
We prove that if a unimodular random rooted graph is recurrent, the number of ends of its uniform spanning tree is almost surely equal to the number of ends of the graph. Together with previous results in the transient case, this completely resolves the problem of the number of ends of wired uniform spanning forest components in unimodular random rooted graphs and confirms a conjecture of Aldous and Lyons (2006). \end{abstract}
\section{Introduction} The \textbf{uniform spanning tree} of a finite connected graph $G$ is defined by picking uniformly at random a connected subgraph of $G$ containing all vertices but no cycles. To go from finite to infinite graphs, it is possible to exhaust $G$ by finite subgraphs and take weak limits with appropriate boundary conditions. For two natural such choices of boundary conditions, known as \textbf{free} and \textbf{wired} boundary conditions, Pemantle \cite{Pemantle} proved that these infinite-volume limits are always well-defined independently of the choice of exhaustion, and that the choice of boundary conditions also does not affect the limit obtained when $G=\mathbb{Z}^d$. Since connectivity of a subgraph is not a closed condition, these weak limits might be supported on configurations that are \emph{forests} rather than trees, and indeed Pemantle proved for $\mathbb{Z}^d$ that the limit is connected if and only if $d\leq 4$. For a general infinite, connected, locally finite graph the infinite-volume limit of the UST with free boundary conditions is called the \textbf{free uniform spanning forest} (FUSF) and the infinite volume limit with wired boundary conditions is called the \textbf{wired uniform spanning forest} (WUSF); when the two limits are the same we refer to them simply as the uniform spanning forest (USF). In their highly influential work~\cite{BLPS}, Benjamini, Lyons, Peres and Schramm resolved the connectivity question for the WUSF in large generality: the wired uniform spanning tree is a single tree if and only if two random walks intersect infinitely often. The connectivity of the FUSF appears to be a much more subtle question and, outside of the case that the two forests are the same, is understood only in a few examples \cite{MR4010561,pete2022free,tang2021weights,AHNR}.
For \emph{recurrent} graphs, which are the main topic of this paper, the infinite-volume limit of the UST is always defined independently of boundary conditions and a.s.\ connected \cite[Proposition 5.6]{BLPS}, so that we can unambiguously refer to the uniform spanning tree (UST) of an infinite, connected, locally finite, recurrent graph $G$.
After connectivity, the next most basic topological property of the USF is the number of \textbf{ends} its components have.
Here, we say that a graph has at least $m$ ends whenever there exists some finite set of vertices $W$ such that $G \setminus W$ has at least $m$ infinite connected components. The graph is said to be $m$-ended if it has at least $m$ but not $m + 1$ ends. Understanding the number of ends of the USF turns out to be rather more difficult than connectivity, with a significant literature now devoted to the problem.
For \emph{Cayley graphs}, it follows from abstract principles \cite[Section 3.4]{AHNR} that every component has $1$, $2$, or infinitely many ends almost surely, and for \emph{amenable} Cayley graphs such as $\mathbb{Z}^d$ (for which the WUSF and FUSF always coincide) it follows by a Burton-Keane \cite{BurtonKeane} type argument that every component has either one or two ends almost surely;
see \cite[Chapter 10]{LyonsPeresProbNetworks} for detailed background.
For the \emph{wired} uniform spanning forest on transitive graphs, a complete solution to the problem was given by Benjamini, Lyons, Peres, and Schramm \cite{BLPS} and Lyons, Morris, and Schramm \cite{LyonsMorrisSchramm2008}, who proved that every component of the WUSF of an infinite transitive graph is one-ended almost surely unless the graph in question is rough-isometric to $\mathbb{Z}$. Before going forward, let us emphasize that the recurrent case of this result \cite[Theorem 10.6]{BLPS} is established using a completely different argument to the transient case, with the tools available for handling the two cases being largely disjoint.
Beyond the transitive setting, various works have established mild conditions under which every component of the WUSF is one-ended almost surely, applying in particular to planar graphs with bounded face degrees \cite{MR4010561} and graphs satisfying isoperimetric conditions only very slightly stronger than transience \cite{MR3773383,LyonsMorrisSchramm2008}. These proofs are quantitative, and recent works studying critical exponents for the USF of $\mathbb{Z}^d$ with $d\geq 3$ \cite{MR4055195,hutchcroft2020logarithmic,MR4348685} and Galton-Watson trees~\cite{MR4095018} can be thought of as a direct continuation of the same line of research.
In parallel to this deterministic theory,
Aldous and Lyons \cite{AldousLyonsUnimod2007} observed that the methods of \cite{BLPS} also apply to prove that the WUSF has one-ended components on any transient \emph{unimodular random rooted graph} of bounded degree, and the second author later gave new proofs of this result with different methods that removed the bounded degree assumption \cite{Hutchcroft2,MR3773383}. It is also proven in \cite{MR3651050,MR3813990} that every component of the \emph{free} uniform spanning forest of a unimodular random rooted graph is infinitely ended a.s.\ whenever the free and wired forests are different.
Here, unimodular random rooted graphs comprise a very large class of random graph models including Benjamini-Schramm limits of finite graphs \cite{BenjaminiSchrammRecurrence}, Cayley graphs, and (suitable versions of) Galton-Watson trees, as well as e.g.\ percolation clusters on such graphs; See Section~\ref{subsec:definitions} for definitions and e.g.\ \cite{CurNotes,AldousLyonsUnimod2007} for detailed background.
The aforementioned works \cite{AldousLyonsUnimod2007,Hutchcroft2,MR3773383,MR3651050,MR3813990} completely resolved the problem of the number of ends of the WUSF and FUSF for \emph{transient} unimodular random rooted graphs, but the recurrent case remained open.
Besides the fact that the transient methods do not apply, a further complication of the recurrent case is that it is possible for the UST to be either one-ended or two-ended according to the geometry of the graph: indeed, the UST of $\mathbb{Z}^2$ is one-ended while the UST of $\mathbb{Z}$ is two-ended.
Aldous and Lyons conjectured \cite[p.~1485]{AldousLyonsUnimod2007} that the dependence of the number of ends of the UST on the geometry of the graph is as simple as possible: The UST of a recurrent unimodular random rooted graph is one-ended if and only if the graph is. The fact that two-ended unimodular random rooted graphs have two-ended USTs is trivial; the content of the conjecture is that one-ended unimodular random rooted graphs have one-ended USTs. Previously, the conjecture was resolved under the assumption of planarity in \cite{AHNR}, while in \cite{BvE21} it was proved (without using the planarity assumption) that the UST of a recurrent unimodular random rooted graph is one-ended precisely when the ``harmonic measure from infinity’’ is uniquely defined.
In this paper we resolve the conjecture.
\begin{theorem} \label{T:main}
Let $(G, o)$ be a recurrent unimodular random rooted graph and let $T$ be the uniform spanning tree of $G$. Then $T$ has the same number of ends as $G$ a.s. \end{theorem}
To see that the theorem is not true without unimodularity,
consider taking the line graph $\mathbb{Z}$ and adding a path of length $2^n$ connecting $-n$ to $n$ for each $n$, making the graph one-ended. Kirchhoff's effective resistance formula implies that the probability that the additional path connecting $-n$ to $n$ is included in the UST is at most $n/(2^n+n)$, and a simple Borel-Cantelli argument implies that the UST is two-ended almost surely. Similar examples show that Theorem~\ref{T:main} does not apply to unimodular random rooted \emph{networks}, since we can use edges of very low conductance to make the network one-ended while having very little effect on the geometry of the UST.
\textbf{About the proof.} We stress again that the tools used in the transient case do not apply at all to the recurrent case, and we are forced to use completely different methods that are specific to the recurrent case. We build on \cite{BvE21} which proved that the ``harmonic measures from infinity'' are uniquely defined if and only if the uniform spanning tree is one-ended; A self-contained treatment of (a slight generalization of) the results of \cite{BvE21} that we will need is given in Appendix~\ref{Appendix:BvE}. The set of harmonic measures from infinity can be thought of as a ``boundary at infinity'' for the graph, analogously to the way the Martin boundary is used in transient graphs. It is implicit in \cite{BvE21} that these measures correspond to the ways in which a random walk ``conditioned to never return to the root'' can escape to infinity. We develop these ideas further in Section~\ref{sec:boundary_theory}, in which we make this connection precise. We then apply these ideas inside an ergodic-theoretic framework to prove that \textit{if} the UST has two ends, then the effective resistance must grow linearly along the unique bi-infinite path in the tree, which implies in particular that graph distances must also grow linearly. To conclude, we argue that this can only happen when the graph has linear volume growth, which is known to be equivalent to two-endedness for unimodular random rooted graphs \cite{BenHutch,bowen2021perfect}.
\paragraph{Acknowledgments} DvE wishes to thank Nathana\"el Berestycki for many useful discussions and comments. Most of the research was carried out while DvE visited TH at Caltech and we are grateful for the hospitality of the institute. DvE is supported by the FWF grant P33083, “Scaling limits in random conformal geometry”.
\section{Boundary theory of recurrent graphs} \label{sec:boundary_theory}
In this section we develop the theory of harmonic measures from infinity on recurrent graphs, their associated potential kernels and Doob transforms, and how this relates to the spanning tree. Much of the theory we develop here is a direct analogue for recurrent graphs of the theory of Martin boundaries of transient graphs \cite{Woess,dynkin1969boundary}. This theory is interesting in its own right, and we were surprised to find how little attention has been paid to these notions outside of some key motivating examples such as $\mathbb{Z}^2$ \cite{popov2020transience,gantert2019range}.
All of the results in this section will concern deterministic infinite, connected, recurrent, locally finite graphs; applications of the theory to unimodular random rooted graphs will be given in Section~\ref{sec:main_proof}.
\subsection{Harmonic measures from infinity}
Let $G=(V,E)$ be an infinite, connected, locally finite, recurrent graph. For each $v\in V$ we write $\mathbf{P}_v$ for the law of the simple random walk on $G$ started at $v$, and for each set $A\subseteq V$ write $T_A$ and $T^+_A$ for the first visit time of the random walk to $A$ and first positive visit time of the random walk to $A$ respectively. Given a probability measure $\mu$ on $V$, we also write $\mathbf{P}_\mu$ for the law of the random walk started at a $\mu$-distributed vertex.
A \textbf{harmonic measure from infinity} $h=(h_B: B\subset V$ finite$)$ on $G$ is a collection of probability measures on $V$ indexed by the finite subsets $B$ of $V$ with the following properties: \begin{enumerate}
\item $h_B$ is supported on $\partial B$ for each $B\subset V$, where $\partial B$ is the set of elements of $B$ that are adjacent to an element of $V\setminus B$.
\item For each pair of finite sets $B \subseteq B'$, $h_B$ and $h_{B'}$ satisfy the consistency condition
\begin{equation}
\label{eq:consistency}
h_B(u) = \sum_{v\in B'} h_{B'}(v)\mathbf{P}_v(X_{T_B}=u)
\end{equation}
for every $u\in B$. \end{enumerate} We denote the space of harmonic measures from infinity by $\c{H}$, which (identifying the measures $h_B$ with their probability mass functions) is a compact convex subset of the space of functions $\{$finite subsets of $V\}\to\mathbb{R}^V$ when equipped with the product topology. As mentioned above, the space $\c{H}$ plays a role for recurrent graphs analogous to that played by the Martin boundary for transient graphs; the analogy will become clearer once we introduce potential kernels in the next subsection. We say that the harmonic measure from infinity is \textbf{uniquely defined} when $\c{H}$ is a singleton.
If $\mu_n$ is a sequence of probability measures on $V$ converging vaguely to the zero measure in the sense that $\mu_n(v)\to 0$ as $n\to\infty$ for each fixed $v\in V$ then any subsequential limit of the collections $(\mathbf{P}_{\mu_n}(X_{T_B} = \cdot )\,:\, B \subset V$ finite$)$ belongs to $\c{H}$, with these collections themselves satisfying every property of a harmonic measure from infinity other than the condition that $h_B$ is supported on $\partial B$ for every finite $B$. (Indeed, the consistency condition \eqref{eq:consistency} follows from the strong Markov property of the random walk.) In fact every harmonic measure from infinity can be written as such a limit.
\begin{lemma} \label{lem:harmonic_measures_as_limits} If $h\in \c{H}$ is a harmonic measure from infinity then there exists a sequence of finitely supported probability measures $(\mu_n)_{n\geq 1}$ on $V$ such that $\mu_n(v)\to 0$ for every $v\in V$ and \begin{equation} \label{eq: HM def}
h_B(\cdot) = \lim_{n \to \infty} \mathbf{P}_{\mu_n}(X_{T_B} = \cdot\, ) \qquad \text{ for every $B\subset V$ finite.} \end{equation} \end{lemma}
\begin{proof} Fix $h\in \c{H}$. Let $V_1 \subset V_2 \subset V_3 \cdots$ be an increasing sequence of finite subsets of $V$ with $\bigcup_i V_i = V$, and for each $n\geq 1$ let $\mu_n=h_{V_n}$. It follows from the consistency condition \eqref{eq:consistency} that \[ h_B(\cdot) = \mathbf{P}_{\mu_n}(X_{T_B} = \cdot\, ) \qquad \text{ for every $B\subset V_n$,} \] and the claim follows since every finite set is eventually contained in $V_n$. \end{proof}
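The following short Monte Carlo sketch (written in Python; it is included purely as an illustration of Lemma~\ref{lem:harmonic_measures_as_limits} and plays no role in any of the proofs) makes the phenomenon concrete on $\mathbb{Z}$: hitting distributions of walks started deep inside the two different ends of $\mathbb{Z}$ converge to two different harmonic measures from infinity, one concentrated on the rightmost point of $B$ and one on the leftmost.

\begin{verbatim}
import random

def hitting_point(start, B, max_steps=200_000):
    # Simple random walk on Z started at `start`, run until it first hits the
    # finite set B.  Returns the hit point, or None for the (rare) walks that
    # exceed max_steps; those are discarded below.
    targets = set(B)
    x = start
    for _ in range(max_steps):
        if x in targets:
            return x
        x += random.choice((-1, 1))
    return None

B = [-2, 0, 3]
for start in (15, -15):
    hits = [hitting_point(start, B) for _ in range(500)]
    hits = [h for h in hits if h is not None]
    freq = {b: hits.count(b) / len(hits) for b in sorted(B)}
    print("start", start, "->", freq)

# Every walk from +15 enters B at 3, and every walk from -15 enters B at -2:
# the sequences of vertices (n) and (-n) converge to two different harmonic
# measures from infinity, one for each end of Z.
\end{verbatim}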
Since $\c{H}$ is a weakly compact subspace of the set of functions from finite subsets of $V$ to $\mathbb{R}^V$, which is a locally convex topological vector space, it is a Choquet-simplex: Every element can be written as a convex combination of the extremal points. In particular, if $\c{H}$ has more than one point then it must have more than one extremal point.
This will be useful to us because extremal points of $\c{H}$ are always limits of harmonic measures from sequences of single vertices. Indeed, identifying each vertex $v\in V$ with the collection of harmonic measures $(\mathbf{P}_v(X_{T_B}=\cdot):B\subset V$ finite$)$ allows us to think of $V \cup \c{H}$ as a compact Polish space containing $V$ (in which $V$ might not be dense), and we say that a sequence of \emph{vertices} $(v_n)_{n\geq 0}$ converges to a point $h\in \c{H}$ if $h_B(\cdot) = \lim_{n \to \infty} \mathbf{P}_{v_n}(X_{T_B} = \cdot \,)$ for every $B\subset V$ finite.
\begin{lemma} \label{L: extremal generators}
If $h \in \c{H}$ is extremal, there exists a sequence of vertices $(v_n)_{n\geq 0}$ such that $v_n$ converges to $h$ as $n\to\infty$.
\end{lemma}
\begin{proof} Let $\c{I}$ be the set of functions $h: \{B \subset V \text{ finite}\}\to \mathbb{R}^V$ of the form
\[h_B(\cdot) = \mathbf{P}_{\mu}(X_{T_B} = \cdot ) \qquad \text{ for every $B\subset V$ finite}\]
for some finitely supported measure $\mu$ on $V$. Lemma~\ref{lem:harmonic_measures_as_limits} implies that $\overline{\c{I}}=\c{I}\cup\c{H}$ is a compact convex subset of the space of all functions $\{B \subset V \text{ finite}\}\to \mathbb{R}^V$ equipped with the product topology, which is a locally convex topological vector space. By the Krein-Milman theorem, a subset $W$ of $\c{I} \cup \c{H}$ has closure containing the set of extremal points of $\c{I} \cup \c{H}$ if and only if $\c{I} \cup \c{H}$ is contained in the closed convex hull of $W$. Thus, if we define $\mathcal{I}_\mathrm{ext}$ to be the set of functions
$h: \{B \subset V \text{ finite}\}\to \mathbb{R}^V$ of the form
\[h_B(\cdot) = \mathbf{P}_{z}(X_{T_B} = \cdot ) \qquad \text{ for every $B\subset V$ finite}\]
for some $z\in V$ then $\c{I}$ is clearly contained in the convex hull of $\mathcal{I}_\mathrm{ext}$, so that $\c{I}\cup\c{H}$ is contained in the closed convex hull of $\mathcal{I}_\mathrm{ext}$ and, by the Krein-Milman theorem, the set of extremal points of $\c{I}\cup\c{H}$ is contained in the closure of $\mathcal{I}_\mathrm{ext}$.
Now, observe that for any non-trivial convex combination of an element of $\c{I}$ and an element of $\c{H}$, there must exist a finite set of vertices $B$ and a point $z$ in the interior of $B$ (i.e., in $B$ and not adjacent to any element of $V\setminus B$) such that $h_B(z)\neq 0$; indeed, if the element of $\c{I}$ corresponds to some finitely supported measure $\mu$, then any $B$ containing the support of $\mu$ in its interior and any $z$ in the support of $\mu$ will do. Since no element of $\c{H}$ can have this property, it follows that non-trivial convex combinations of elements of $\c{I}$ and $\c{H}$ cannot belong to $\c{H}$ and hence that extremal points of $\c{H}$ are also extremal in $\c{I}\cup\c{H}$. It follows that the set of extremal points of $\c{H}$ is contained in the closure of $\c{I}_\mathrm{ext}$, which is equivalent to the claim.
\end{proof}
\begin{remark} The converse to this lemma is \emph{not} true: A limit of a sequence of Dirac measures need not be extremal. For example, if we construct a graph from $\mathbb{Z}$ by attaching a very long path between $-n$ and $n$ for each $n\geq 1$ and take $z_n$ to be a point in the middle of this path for each $n$, the sequence $(z_n)_{n\geq 1}$ will converge to a non-extremal element of $\c{H}$ that is the convex combination of the limits of $(n)_{n\geq 1}$ and $(-n)_{n\geq 1}$.
\end{remark}
\subsection{Potential kernels and Doob transforms}
The arguments in \cite{BvE21} heavily rely on a correspondence between the harmonic measure from infinity and its \textbf{potential kernel}. One important feature of the potential kernel is that, given a vertex $o\in V$ and a point $h\in\c{H}$, it provides a sensible way to ``condition the random walk to converge to $h$ before returning to $o$''. We begin by discussing how conditioning the random walk to hit a particular vertex before returning to $o$ can be described in terms of Doob transforms before developing the analogous limit theory.
\textbf{Doob transforms and non-singular conditioning.} Suppose that we are given two distinct vertices $o$ and $z$ in an infinite, connected, locally finite recurrent graph $G$. Letting $\mathbf{G}_z(x,y)$ be the expected number of times a random walk started at $x$ visits $y$ before hitting $z$, we can compute that the function \[ a(x)=\frac{\mathbf{G}_z(o,o)}{\deg(o)}-\frac{\mathbf{G}_z(x,o)}{\deg(o)} = \frac{\mathbf{P}_x(T_o>T_z)\mathbf{G}_z(o,o)}{\deg(o)} \] is harmonic at every vertex other than $o$ and $z$, and has \[ \Delta a(o) = 0-\deg(o) \mathbf{E}_o[a(X_1)] = -\mathbf{P}_o(T_o^+>T_z)\mathbf{G}_z(o,o) = -1, \] where $\Delta$ denotes the graph Laplacian $\Delta f(x)=\deg(x)f(x)-\sum_{y\sim x} f(y)=\deg(x)\mathbf{E}_x[f(X_0)-f(X_1)]$ (terms in this sum are counted with appropriate multiplicity if there is more than one edge between $x$ and $y$). Moreover, the quantity $a(x)$ is strictly positive at every vertex $x$ that is neither equal to $o$ nor disconnected from $z$ by $o$ in the sense that every path from $x$ to $z$ must pass through $o$.
Observe that the trivial identity \begin{multline} \mathbf{P}_o((X_0,\ldots,X_{n})=(x_0,\ldots,x_n) ) = \prod_{i=1}^n p(x_{i-1},x_i) \\= \frac{1}{a(x_n)} a(x_1)p(o,x_1) \prod_{i=2}^n \frac{a(x_i)}{a(x_{i-1})} p(x_{i-1},x_i)\label{eq:finite_Doob0} \end{multline} holds for every sequence of vertices $x_0,\ldots,x_n$ with $x_0=o$ and $a(x_i)>0$ for every $i>0$. Since $a(z)=G_z(o,o)=\mathbf{P}_o(T_z<T_o^+)^{-1}$ it follows that \begin{equation} \label{eq:finite_Doob} \mathbf{P}_o((X_0,\ldots,X_{n})=(x_0,\ldots,x_n) \mid T_z<T_o^+) = a(x_1)p(o,x_1) \prod_{i=2}^n \frac{a(x_i)}{a(x_{i-1})} p(x_{i-1},x_i) \end{equation} for every sequence of vertices $x_0,\ldots,x_n$ with $x_0=o$, $x_n=z$, and $x_i \notin \{o,z\}$ for every $0<i<n$ (which implies that $a(x_i)>0$ for every $1\leq i \leq n$). Now, the fact that $a$ is harmonic off of $\{o,z\}$ and has $\Delta a(o)=-1$ implies that we can define a stochastic matrix with state space $\{x\in V: x=o$ or $a(x)>0\}$ by \[ \wh{p}^a(x,y)=\begin{cases} \frac{a(y)}{a(x)}p(x,y) & x \notin \{o,z\}\\ a(y) & x = o\\ \mathds{1}(y=z) & x = z, \end{cases} \] and if we define the \textbf{Doob transformed walk} $\wh{X}^a$ to be the Markov chain with this transition matrix started from $o$ then it follows from \eqref{eq:finite_Doob} that $(\wh{X}^a_{n})_{n=0}^{T_z}$ has law equal to the conditional law of the simple random walk $(X_n)_{n=0}^{T_z}$ started at $o$ and conditioned to hit $z$ before returning to $o$. Moreover, letting $\wh{\mathbf{P}}^a_o$ denote the law of $\wh{X}^a$, it follows from the definition of $\wh{X}^a$ that \begin{multline} \wh{\mathbf{P}}^a_o\left ((\wh{X}^a_0,\ldots,\wh{X}^a_n)=(x_0,\ldots,x_n)\right) = \prod_{i=1}^n \wh{p}^a(x_{i-1},x_i)= a(x_{1})\prod_{i=2}^n \frac{a(x_i)}{a(x_{i-1})} p(x_{i-1}, x_i) \\ = a(x_{n})\prod_{i=2}^n p(x_{i-1}, x_i) = \deg(o) a(x_n) \mathbf{P}_o\left((X_0,\ldots,X_n)=(x_0,\ldots,x_n)\right) \label{eq:finite_Doob2} \end{multline} for every sequence $x_0,\ldots,x_n$ with $x_0=o$ and $x_i\notin\{o,z\}$ for every $0<i<n$.
\paragraph{Defining the potential kernel.} We now define the \textbf{potential kernel} $a^h$ associated to a point $h\in \c{H}$ via the formula
\begin{equation} \label{eq: PK def}
a^h(x, y) = h_{x, y}(x) \mathcal{R}_{\mathrm{eff}}(x \leftrightarrow y) \end{equation} where we write $h_{x,y}=h_{\{x,y\}}$, so that $a^h(x,x)=0$ for each $x\in V$. The fact that this is a sensible definition owes largely to the following lemma.
\begin{lemma} \label{lem:PK_harmonic} For each $h\in \c{H}$, the potential kernel $a^h(x, y) = h_{x, y}(x) \mathcal{R}_{\mathrm{eff}}(x \leftrightarrow y)$ satisfies \begin{equation} \label{eq: PK harmonic}
\Delta a^h(\,\cdot\,, y) = -\mathds{1}(\,\cdot=y), \end{equation} so that the potential kernel $a^h(\cdot, y)$ is harmonic away from $y$ and subharmonic at $y$. \end{lemma}
\begin{proof} Since the map $h\mapsto a^h$ is affine and the equality \eqref{eq: PK harmonic} is linear, it suffices to prove the lemma in the case that $h$ is extremal. By Lemma~\ref{L: extremal generators}, there exists a sequence of vertices $(v_n)_{n\geq 1}$ such that $v_n$ converges to $h$. For each $n \geq 1$ we define \[ a^n(x,y)=\frac{\Gr_{v_n}(y, y)}{\deg(y)} - \frac{\Gr_{v_n}(x, y)}{\deg(y)}. \]
and claim that \begin{equation} \label{eq:PK_limit}
a^h(x, y) = \lim_{n \to \infty} a^n(x,y) \end{equation} for every $x,y\in V$.
(Note that this limit formula is often taken as the \emph{definition} of the potential kernel.) We will prove \eqref{eq:PK_limit} with the aid of three standard identities for the Greens function: \begin{enumerate}
\item By the strong Markov property, $\Gr_{z}(x, y)$ is equal to $\mathbf{P}_x(T_y<T_z)\Gr_{z}(y, y)$ for every three distinct vertices $x$, $y$, and $z$.
\item By the strong Markov property, $\Gr_x(y,y)$ is equal to $\mathbf{P}_x(T_y<T_x^+)^{-1}$ for every pair of distinct vertices $x$ and $y$. It follows in particular that $\deg(y)^{-1}\Gr_x(y,y)=\mathcal{R}_{\mathrm{eff}}(x\leftrightarrow y)$ and, since the effective resistance is symmetric in $x$ and $y$, that $\deg(y)^{-1}\Gr_x(y,y) = \deg(x)^{-1}\Gr_y(x,x)$.
\item By time-reversal, $\deg(x)\Gr_{z}(x, y)$ is equal to $\deg(y)\Gr_{z}(y, x)$ for every three distinct vertices $x$, $y$, and $z$. \end{enumerate} Applying these three identities in order yields that
\begin{align*} a^n(x,y)&=\frac{\Gr_{v_n}(y, y)}{\deg(y)}\mathbf{P}_x(T_y> T_{v_n}) =\frac{\Gr_{y}(v_n, v_n)}{\deg(v_n)}\mathbf{P}_x(T_y> T_{v_n}) \\ &= \frac{\Gr_{y}(x, v_n)}{\deg(v_n)} = \frac{\Gr_{y}(v_n, x)}{\deg(x)}
\end{align*} whenever $x$, $y$, and $v_n$ are distinct. Applying the first and second identities a second time then yields that \begin{align} a^n(x,y)
= \mathbf{P}_{v_n}(T_x<T_y)\frac{\Gr_{y}(x, x)}{\deg(x)} = \mathbf{P}_{v_n}(T_x<T_y) \mathcal{R}_{\mathrm{eff}}(x\leftrightarrow y) \end{align} whenever $x$, $y$, and $v_n$ are distinct. This is easily seen to imply the claimed limit formula \eqref{eq:PK_limit}.
\end{proof}
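As a quick sanity check of these definitions (a worked example that is not needed later), take $G=\mathbb{Z}$ with $o=0$ and let $h^+\in\c{H}$ be the limit of the sequence of vertices $(n)_{n\geq 1}$. Then $h^+_{0,x}(x)=\mathds{1}(x>0)$ and $\mathcal{R}_{\mathrm{eff}}(0\leftrightarrow x)=|x|$, so that
\[
a^{h^+}(x,0)=\max\{x,0\} \qquad \text{for every $x\in \mathbb{Z}$,}
\]
which is indeed harmonic away from $0$ and satisfies $\Delta a^{h^+}(0,0)=2\cdot 0-a^{h^+}(1,0)-a^{h^+}(-1,0)=-1$. Writing $h^-$ for the limit of $(-n)_{n\geq 1}$, the symmetric mixture $\frac{1}{2}(h^++h^-)$ has potential kernel $\frac{1}{2}|x|$.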
In light of this lemma, we define $\c{P}_o$ to be the space of non-negative functions $a:V\to[0,\infty)$ with $a(o)=0$ and $\Delta a(x) =-\mathds{1}(x=o)$, so that $a^h(\,\cdot\,,o)$ belongs to $\c{P}_o$ for each $o\in V$ and $h\in \c{H}$ by Lemma~\ref{lem:PK_harmonic}. We will later show that the map $h\mapsto a^h$ is an affine isomorphism between the two convex spaces $\c{H}$ and $\c{P}_o$. We first describe how elements of $\c{P}_o$ can be used to define Doob transformed walks.
\paragraph{Doob transforms and singular conditioning.} We now define the Doob transform associated to an element of the space $\c{P}_o$. Given $a\in \c{P}_o$, we define $\wh{X}^a$ to be the Doob $a$-transform of the simple random walk $X$ on $G$, so that $\wh{X}^a$ has state space $\{x\in V: x=o$ or $a(x)>0\}$ and transition probabilities given by \[
\wh{p}^{\: a}(x, y) := \begin{cases}
\frac{a(y)}{a(x)} p(x, y) & \text{if } x \neq o \\
a(y) & \text{if } x = o, y \sim o
\end{cases} \] where $p$ is the transition kernel for the simple random walk. Similarly, given $h \in \c{H}$, we write $\wh{X}^h=\wh{X}^{a^h(\cdot,o)}$ where $a^h$ is the potential kernel associated to $h$.
Informally, we think of $\wh{X}^h$ as the walk that is ``conditioned to go to $h$ before returning to $o$''. (In particular, when the harmonic measure from infinity is unique and $\c{H}$ and $\c{P}_o$ are singleton sets, we think of the associated Doob transform as the random walk conditioned to never return to $o$.) We write $\wh{\mathbf{P}}_o^a$ or $\wh{\mathbf{P}}^h_o$ for the law of $\wh{X}^a$ or $\wh{X}^h$ as appropriate.
As before, it follows from this definition that if $a\in \c{P}_o$ and we write $X[0,m]$ for the initial segment consisting of the first $m$ steps of the random walk $X$ then \begin{multline} \label{eq:Doob_transform_whole_path} \wh{\mathbf{P}}^a_o(\wh{X}^a[0,m]=\gamma)=\prod_{i=1}^m \wh{p}^{\: a}(\gamma_{i-1}, \gamma_i) = a(\gamma_{1})\prod_{i=2}^m \frac{a(\gamma_i)}{a(\gamma_{i-1})} p(\gamma_{i-1}, \gamma_i) \\ = a(\gamma_{m})\prod_{i=2}^m p(\gamma_{i-1}, \gamma_i) =
\deg(o)a(\gamma_m) \mathbf{P}_o(X[0, m] = \gamma) \end{multline} for every finite path $\gamma=(\gamma_0,\ldots,\gamma_m)$ with $\gamma_0=o$ and $\gamma_i \neq o$ for every $i>0$. Summing over all paths $\gamma$ that begin at $o$, end at some point $x\neq o$, and do not visit $o$ or $x$ at any intermediate point yields in particular that if $h\in \c{H}$ then \begin{equation} \label{eq:hitting prop CRW} \wh{\mathbf{P}}^h_o(\wh{X} \text{ hits $x$}) = \deg(o) a^h(x,o) \mathbf{P}_o(T_x < T^+_o) = h_{o,x}(x), \end{equation} where the last equality follows from \eqref{eq: PK def} and the definition of the effective resistance.
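To make this concrete (again only as an illustration; nothing here is used in the sequel), consider $G=\mathbb{Z}$, $o=0$, and $a(x)=\max\{x,0\}$, the potential kernel of the harmonic measure from $+\infty$ computed above. The transformed walk then lives on $\{0,1,2,\ldots\}$, jumps from $0$ to $1$ with probability one, and from $x\geq 1$ moves to $x+1$ with probability $\frac{x+1}{2x}$ and to $x-1$ with probability $\frac{x-1}{2x}$; in particular it can never return to $0$, and it hits every $x>0$ with probability one, in agreement with \eqref{eq:hitting prop CRW} since $h^+_{0,x}(x)=1$ for $x>0$. The following Python sketch simulates this chain and checks both facts empirically.

\begin{verbatim}
import random

def step(x):
    # One step of the Doob a-transform of SRW on Z with a(y) = max(y, 0), o = 0.
    if x == 0:
        return 1                      # p-hat(0, y) = a(y), so the walk must move to 1
    p_up = (x + 1) / (2 * x)          # p-hat(x, x+1) = (a(x+1)/a(x)) * (1/2)
    return x + 1 if random.random() < p_up else x - 1

returns_to_root, runs_hitting_5 = 0, 0
for _ in range(10_000):
    x, hit_5 = 0, False
    for _ in range(500):
        x = step(x)
        if x == 0:
            returns_to_root += 1
            break
        if x == 5:
            hit_5 = True
    runs_hitting_5 += hit_5

print(returns_to_root)   # 0: the transformed walk never returns to the root
print(runs_hitting_5)    # essentially all 10_000 runs hit x = 5, matching h^+_{0,5}(5) = 1
\end{verbatim}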
\begin{lemma} Let $G=(V,E)$ be a recurrent graph and let $a\in \c{P}_o$. Then the associated Doob-transformed walk $\wh{X}^a$ is transient. \end{lemma}
\begin{proof} One can easily verify from the definitions that the sequence of reciprocals $(a(\wh{X}^a_n)^{-1})_{n\geq 1}$ is a non-negative martingale with respect to its natural filtration, and hence converges almost surely to some limiting random variable, which it suffices to prove is zero almost surely.
It follows from the identity \eqref{eq:Doob_transform_whole_path} that \[ \wh{\mathbf{P}}^a_o(a(\wh{X}^a_n)\leq M) = \sum_{v} \mathds{1}(a(v)\leq M) \deg(o)a(v) \mathbf{P}_o(X_n=v, T^+_o > n) \leq M\deg(o)\mathbf{P}_o(T^+_o > n), \] for every $n,M\geq 1$. Since $G$ is recurrent, the right hand side tends to zero as $n\to\infty$ for each fixed $M$. It follows that $\limsup_{n\to\infty} a(\wh{X}^a_n) = \infty$ almost surely, and hence that $\lim_{n\to\infty} a(\wh{X}^a_n)=\infty$ almost surely since the limit is well-defined almost surely. This implies that $\wh{X}^a$ is transient.
\end{proof}
\subsection{An affine isomorphism}
Let $G=(V,E)$ be recurrent, fix $o\in V$, and let $\c{P}_o$ denote the set of positive functions $a:V\to [0,\infty)$ with $a(o)=0$ that satisfy $\Delta a(\cdot) = -\mathds{1}(\,\cdot=o)$. As we have seen, for each $h\in \c{H}$ the potential kernel $a^h(\cdot,o)$ defines an element of $\c{P}_o$. Moreover, the map sending $h\mapsto a^h(\,\cdot\,,o)$ is affine in the sense that if $h=\theta h_1 + (1-\theta) h_2$ then $a^h(\,\cdot\,,o)=\theta a^{h_1}(\,\cdot\,,o)+(1-\theta)a^{h_2}(\,\cdot\,,o)$. We wish to show that this map defines an affine \emph{isomorphism} between $\c{H}$ and $\c{P}_o$ in the sense that it is bijective (in which case its inverse is automatically affine). We begin by constructing the inverse map from $\c{P}_o$ to $\c{H}$.
\begin{lemma} \label{lem:PK_surjectivity} Let $G=(V,E)$ be a infinite, connected, locally finite recurrent graph and let $o\in V$. For each $a\in \c{P}_o$ there exists a unique $h\in \c{H}$ satisfying \[h_B(u)=\wh{\mathbf{P}}^a_o(\wh{X}^a \text{ visits $B$ for the last time at $u$})\] for every finite set $B$ containing $o$. Moreover, this $h$ satisfies $a^h(x,o)=a(x)$ for every $x\in V$.
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lem:PK_surjectivity}] Fix $a\in \c{P}_o$. We define a the family of probability measures $h=(h_B:B\subset V$ finite$)$ by \[h_B(u)=\wh{\mathbf{P}}^a_o(\wh{X}^a \text{ visits $B$ for the last time at $u$})\] for every $u\in B$ if $o \in B$ and \begin{multline*}h_B(u)=\wh{\mathbf{P}}^a_o(\wh{X}^a \text{ visits $B\cup\{o\}$ for the last time at $u$})\\+ \wh{\mathbf{P}}^a_o(\wh{X}^a \text{ visits $B\cup\{o\}$ for the last time at $o$})\mathbf{P}_o(X_{T_B}=u) \end{multline*} for every $u\in B$ if $o\notin B$, so that if $o\notin B$ then \[ h_B(u)=\sum_{v\in B\cup\{o\}}h_{B\cup\{o\}}(v)\mathbf{P}_v(X_{T_B}=u) \] for every $u\in V$. We claim that this defines an element of $\c{H}$. It is clear that $h_B$ is a probability measure that is supported on $\partial B$ for each finite set $B\subset V$; we need to verify that it satisfies the consistency property \eqref{eq:consistency}. Once it is verified that $h\in \c{H}$, the fact that $a=a^h(\cdot,o)$ follows easily from the definition of $a^h$ together with the identity \eqref{eq:Doob_transform_whole_path}, which together yield that \begin{align*} a^h(v,o)&=h_{v,o}(v)\mathcal{R}_{\mathrm{eff}}(v \leftrightarrow o) = \frac{\wh{\mathbf{P}}^a_o(\wh{X}^a \text{ visits $\{o,v\}$ for the last time at $v$}) }{\deg(o)\mathbf{P}_o(T_v<T^+_o)}\\ &=\frac{\wh{\mathbf{P}}^a_o(\wh{X}^a \text{ hits $v$}) }{\deg(o)\mathbf{P}_o(T_v<T^+_o)}=\frac{\deg(o)a(v)\mathbf{P}_o(T_v<T^+_o)}{\deg(o)\mathbf{P}_o(T_v<T^+_o)}=a(v) \end{align*} for each $v\in V$.
We now prove that $h$ satisfies the consistency property \eqref{eq:consistency}. We will prove the required identity in the case $o\in B$, the remaining case $o\notin B$ following from this case and the definition. Let $B\subseteq B'$ be finite sets with $o\in B$ and let $(V_n)_{n\geq 1}$ be an exhaustion of $V$ by finite sets such that $B' \subseteq V_n$ for every $n\geq 1$. Writing $V_n^c=V\setminus V_n$ for each $n\geq 1$ and $\tau_n$ for the first time the walk visits $V_n^c$, we have that \begin{align*} h_B(u) &= \lim_{n\to\infty} \wh{\mathbf{P}}^a_o(\wh{X}[0,\tau_n] \text{ last visits $B$ at $u$})\\ &= \lim_{n\to\infty}\sum_{b \in V_n^c} \wh{\mathbf{P}}^a_o(\wh{X}[0,\tau_n] \text{ last visits $B$ at $u$, $\wh{X}_{\tau_n}=b$}) \end{align*} and hence by \eqref{eq:Doob_transform_whole_path} and time-reversal that \begin{align} h_B(u) &= \lim_{n\to\infty}\sum_{b \in V_n^c} \deg(o)a(b) \mathbf{P}_o(X[0,\tau_n] \text{ last visits $B$ at $u$, $X_{\tau_n}=b$}) \nonumber\\ &= \lim_{n\to\infty}\sum_{b \in V_n^c} \deg(b)a(b) \mathbf{P}_b(X_{T_B}=u, T_o<T_{V_n^c}^+).\label{eq:h_B_time_reverse}
\end{align} It follows from this together with the strong Markov property that \begin{align*} h_B(u) &= \lim_{n\to\infty}\sum_{v\in B'} \sum_{b \in V_n^c} \deg(b)a(b) \mathbf{P}_b(X_{T_B'}=v, X_{T_B}=u, T_o<T_{V_n^c}^+) \\ &=\lim_{n\to\infty}\sum_{v\in B'} \sum_{b \in V_n^c} \deg(b)a(b) \mathbf{P}_b(X_{T_B'}=v, T_{B'}<T_{V_n^c}^+)\mathbf{P}_v(X_{T_B}=u, T_{o}<T_{V_n^c}^+). \end{align*} Now, we have by the strong Markov property that for each $b \in V_n^c$ and $v \in B'$ \begin{align*}
\bf{P}_b(X_{T_{B'}} = v, T_{o} < T_{V_n^c}^+) = \bf{P}_b(X_{T_{B'}} = v, T_{B'} < T_{V_n^c}^+)\bf{P}_v(T_{V_n^c} > T_o). \end{align*} and by recurrence that $\lim_{n\to\infty}\mathbf{P}_v(T_{o}<T_{V_n^c}^+)=1$, so that \begin{align*} \label{eq:h_B'_time_reverse} h_B(u) =\lim_{n\to\infty}\sum_{v\in B'} \sum_{b \in V_n^c} \deg(b)a(b) \mathbf{P}_b(X_{T_B'}=v, T_{o}<T_{V_n^c}^+)\mathbf{P}_v(X_{T_B} = u). \end{align*} The claimed identity \eqref{eq:consistency} follows from this together with the identity \eqref{eq:h_B_time_reverse} applied to the larger set $B'$.\qedhere
\begin{comment} Let $\varepsilon>0$ and for each let $w\in V$ let $\mathbf{P}_{w,\varepsilon}$ denote the joint law of the simple random walk $X$ started at $w$ and an independent geometric random time $\tau$ of mean $\varepsilon^{-1}$. We will prove consistency with the help of the limit formula \begin{equation} \label{eq:geometric_time_limit} h_B(u) = \lim_{\varepsilon\downarrow 0} \varepsilon \sum_{w\in V} \deg(w)a(w) \mathbf{P}_{w,\varepsilon}(X_{T_B}=u, \tau \geq T_B), \end{equation} which we claim holds for every finite set $B$ containing $o$ and every $u\in B$. We first prove the formula and then use it to deduce consistency.
Fix $B\subset V$ finite containing $o$ and $u\in B$. Let $\wh{\mathbf{P}}^a_{o,\varepsilon}$ denote the joint law of the Doob-transformed walk $\wh{X}^a$ and an independent geometric random time $\tau$. Since $\wh{X}^a$ is transient, the probability that it visits $B$ after time $\tau$ tends to zero as $\varepsilon\to 0$, and it follows that \begin{align*} h_{B}(u)&=\wh{\mathbf{P}}^a_o(\wh{X}^a \text{ visits $B$ for the last time at $u$})\\ &=\lim_{\varepsilon\downarrow 0}\wh{\mathbf{P}}^a_{o,\varepsilon}(\wh{X}^a[0,\tau] \text{ visits $B$ for the last time at $u$})\\ &=\lim_{\varepsilon\downarrow 0}\sum_{w\in V}\wh{\mathbf{P}}^a_{o,\varepsilon}(\wh{X}^a[0,\tau] \text{ visits $B$ for the last time at $u$, $\wh{X}^a_\tau=w$}). \end{align*}
Thus, it follows from \eqref{eq:Doob_transform_whole_path}, time-reversal, and the strong Markov property that \begin{align*} h_{B}(u)&=\lim_{\varepsilon\downarrow 0}\sum_{w\in V}\deg(o)a(w)\mathbf{P}_{o,\varepsilon}(X[0,\tau] \text{ visits $B$ for the last time at $u$, $X_\tau=w$, $T_o > \tau$})\\ &=\lim_{\varepsilon\downarrow 0}\sum_{w\in V}\deg(w)a(w)\mathbf{P}_{w,\varepsilon}(X_{T_{B}}=u, \tau = T_o) \\ &=\lim_{\varepsilon\downarrow 0}\sum_{w\in V}\deg(w)a(w)\mathbf{P}_{w,\varepsilon}(X_{T_{B}}=u, T_B \leq \tau) \mathbf{P}_{u,\varepsilon}(\tau = T_o). \end{align*} Since $T_o$ is almost surely finite, $\mathbf{P}_{u,\varepsilon}(\tau = T_o)=\mathbb{E}[\varepsilon(1-\varepsilon)^{T_o}]\sim \varepsilon$ as $\varepsilon\to 0$ for each fixed $u\in V$, so that \[ h_{B}(u) = \lim_{\varepsilon\downarrow 0}\varepsilon \sum_{w\in V}\deg(w)a(w)\mathbf{P}_{w,\varepsilon}(X_{T_{B}}=u, T_B \leq \tau) \] as claimed.
We now argue that the formula \eqref{eq:geometric_time_limit} implies the consistency of $h$. Indeed, if $B \subset B'$ are two finite sets both containing $o$ then \begin{align*} h_B(u)&=\lim_{\varepsilon\downarrow 0}\varepsilon \sum_{w\in V}\deg(w)a(w)\mathbf{P}_{w,\varepsilon}(X_{T_{B}}=u, T_B \leq \tau) \\ &=\lim_{\varepsilon\downarrow 0}\varepsilon \sum_{w\in V} \sum_{v\in B'}\deg(w)a(w)\mathbf{P}_{w,\varepsilon}(X_{T_{B'}}=v, X_{T_{B}}=u, T_B \leq \tau)\\ &=\lim_{\varepsilon\downarrow 0}\varepsilon \sum_{w\in V} \sum_{v\in B'}\deg(w)a(w)\mathbf{P}_{w,\varepsilon}(X_{T_{B'}}=v, T_{B'} \leq \tau)\mathbf{P}_{v,\varepsilon}(X_{T_{B}}=u, T_{B} \leq \tau)\\ &=\lim_{\varepsilon\downarrow 0}\varepsilon \sum_{w\in V} \sum_{v\in B'}\deg(w)a(w)\mathbf{P}_{w,\varepsilon}(X_{T_{B'}}=v, T_{B'} \leq \tau)\mathbf{P}_{v}(X_{T_{B}}=u) \\ &=\sum_{v\in B'} h_{B'}(v)\mathbf{P}_v(X_{T_B}=u), \end{align*} where in the penultimate line we used that $\mathbf{P}_{v,\varepsilon}(X_{T_{B}}=u, T_B\leq \tau)\sim \mathbf{P}_{v}(X_{T_{B}}=u)$ as $\varepsilon\to 0$ since $T_B$ is almost surely finite. This establishes the desired consistency relation in the case that $B$ and $B'$ both contain $o$; the remaining cases follow easily from the definition and we omit the details.
\end{comment} \end{proof}
\begin{theorem} \label{cor:affine_isomorphism} Let $G$ be an infinite, recurrent, locally finite graph, and let $o\in V$. The map $h\mapsto a^h(\cdot,o)$ is an affine isomorphism $\c{H}\to \c{P}_o$. In particular, this map identifies extremal elements of $\c{H}$ with extremal elements of $\c{P}_o$.
\end{theorem}
\begin{proof} It remains only to prove that $h\mapsto a^h$ is injective. To prove this it suffices by definition of $a^h$ to prove that $h_B$ is determined by $(h_{x,o}(x):x\in \partial B)$ for each finite set $B\subset V$ containing the vertex $o$. Fix one such set $B$. We have by definition of $\c{H}$ that \[h_{x,o}(x)=\sum_{y\in \partial B} h_B(y) \mathbf{P}_y(T_x<T_o)=\sum_{y\in \partial B} A(x,y) h_B(y)\] for each $x\in \partial B$ where $A(x,y):=\mathbf{P}_y(T_x<T_o)$ for each $x,y\in \partial B$, so that it suffices to prove that the matrix $A$ (which is indexed by $\partial B$) is invertible. Define a matrix $Q$ indexed by $\partial B$ by \[ Q(x,y)=\mathbf{P}_y(T_{\partial B}^+<T_o, X_{T_{\partial B}^+}=x). \] Then we have by the strong Markov property that \begin{multline*} A(x,y) - \mathds{1}(x=y)\mathbf{P}_x(T_x^+\geq T_o) = \mathbf{P}_y(T_x^+<T_o) \\= \sum_{z\in \partial B} \mathbf{P}_z(T_x<T_o)Q(z,y) = \mathbf{P}_x(T_x^+\geq T_o)Q(x,y) +\sum_{z\in \partial B} \mathbf{P}_z(T_x^+<T_o)Q(z,y) \end{multline*} and hence inductively that \[ \mathbf{P}_y(T_x^+<T_o) = \mathbf{P}_x(T_x^+\geq T_o)\sum_{i=1}^n Q^i(x,y)+\sum_{z\in \partial B} \mathbf{P}_z(T_x^+<T_o)Q^n(z,y) \] for every $n\geq 1$. Since $Q$ is irreducible and substochastic, we can take the limit as $n\to\infty$ (so that $Q^n\to 0$) to obtain that \[ A(x,y) = \mathds{1}(x=y)\mathbf{P}_x(T_x^+\geq T_o) + \mathbf{P}_y(T_x^+<T_o) = \mathbf{P}_x(T_x^+\geq T_o) \sum_{i=0}^\infty Q^i(x,y) \] for every $x,y\in \partial B$. Letting $D$ denote the diagonal matrix with entries $D(x,x)=\mathbf{P}_x(T_x^+\geq T_o)>0$, this says that $A=D\sum_{i=0}^\infty Q^i=D(I-Q)^{-1}$, and hence $A$ is invertible with inverse $A^{-1}=(I-Q)D^{-1}$ as required. \end{proof}
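As a purely illustrative sanity check of the last algebraic step (not part of the proof), one can verify numerically that a matrix of the form $A=D(I-Q)^{-1}$, with $D$ diagonal with positive entries and $Q$ strictly substochastic, has inverse $(I-Q)D^{-1}$. The following Python snippet, which uses randomly generated matrices, is one way to do this.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
k = 5
Q = rng.random((k, k))
Q *= 0.9 / Q.sum(axis=1).max()              # scale so that Q is strictly substochastic
D = np.diag(rng.uniform(0.1, 1.0, size=k))  # diagonal matrix with positive entries

A = D @ np.linalg.inv(np.eye(k) - Q)        # A = D * sum_{i >= 0} Q^i
A_inv = (np.eye(k) - Q) @ np.linalg.inv(D)
print(np.allclose(A @ A_inv, np.eye(k)))    # expected: True
\end{verbatim}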
\subsection{The Liouville property for extremal Doob transforms}
In this section we prove a kind of tail-triviality property of the Doob-transformed walk corresponding to an extremal point $h\in \c{H}$. Letting $G=(V,E)$ be a graph, we recall that an event $A\subseteq V^\mathbb{N}$ is said to be \emph{invariant} if $(x_0,x_1,\ldots)\in A$ implies that $(x_1,x_2,\ldots)\in A$ for every $(x_0,x_1,\ldots)\in V^\mathbb{N}$.
\begin{theorem} \label{thm:Liouville} Let $G=(V,E)$ be an infinite, connected, recurrent, locally finite graph and let $o\in V$. If $h\in \c{H}$ is extremal then the Doob transformed random walk $\hat{X}^h$ does not have any non-trivial invariant events: If $A \subseteq V^\mathbb{N}$ is an invariant event then $\wh{\mathbf{P}}^h_o(A)\in \{0,1\}$. \end{theorem}
\begin{proof} It suffices to prove the corresponding statement for $\wh{X}^a$ when $a$ is an extremal element of $\c{P}_o$. Suppose for contradiction that there is an invariant event $A$ with $0<\wh{\mathbf{P}}^a_o(\wh{X}^a\in A)<1$. We have by L\'evy's 0-1 law that \begin{equation} \label{eq:Levy} \wh{\mathbf{P}}^a_o(\wh{X}^a\in A \mid \wh{X}^a_1,\ldots,\wh{X}^a_n)\to \mathds{1}(\wh{X}^a\in A) \text{ almost surely as $n\to\infty$}. \end{equation} Moreover, we also have by invariance that \[ \wh{\mathbf{P}}^a_x(\wh{X}^a \in A) = \sum_{y \in V} \frac{a(y)}{a(x)} p(x,y) \wh{\mathbf{P}}^a_y(\wh{X}^a \in A) \] and that \[ \wh{\mathbf{P}}^a_o(\wh{X}^a\in A) = \sum_{y \in V} a(y) \wh{\mathbf{P}}^a_y(\wh{X}^a \in A). \] Since similar identities hold when we replace $A$ by $A^c$ it follows that we can write $a$ as a non-trivial convex combination of two elements of $\c{P}_o$ \[ a(x)= \wh{\mathbf{P}}^a_o(\wh{X}^a \in A) \cdot \frac{a(x)\wh{\mathbf{P}}^a_x(\wh{X}^a \in A)}{\wh{\mathbf{P}}^a_o(\wh{X}^a \in A) } + \wh{\mathbf{P}}^a_o(\wh{X}^a \notin A) \cdot \frac{a(x)\wh{\mathbf{P}}^a_x(\wh{X}^a \notin A)}{\wh{\mathbf{P}}^a_o(\wh{X}^a \notin A) }, \] these two elements being distinct by \eqref{eq:Levy}, contradicting extremality of $a$. \end{proof}
\begin{remark} Underlying this theorem is the fact that once we fix $a\in\c{P}_o$, we can identify $\c{P}_o$ with the Martin boundary of the conditioned walk $\wh{X}^a$. Theorem~\ref{thm:Liouville} is the recurrent version of the fact that Doob transforming by an extremal element of the Martin boundary yields a process with trivial invariant $\sigma$-algebra. \end{remark}
For our purposes, the most important output of the Liouville property is the following proposition, which lets us easily tell apart the trajectories of two different Doob transformed walks $\wh{X}^h$ and $\wh{X}^{h'}$ by looking at any infinite subset of their traces (and, in particular, from their loop-erasures).
\begin{proposition} \label{prop:identifying_h_from_trace} Let $h,h'$ be distinct extremal elements of $\c{H}$ and let $\wh{X}^h$ be the Doob-transformed simple random walk corresponding to $h$. Then \[ \frac{a^{h'}(\wh{X}^h_n,o)}{a^h(\wh{X}^h_n,o)} \to 0 \] almost surely as $n\to\infty$. \end{proposition}
\begin{proof} We prove the corresponding statement in which $a,a'$ are distinct extremal elements of $\c{P}_o$. Let $\wh{X}$ and $\wh{X}'$ have laws $\wh{\mathbf{P}}_o^a$ and $\wh{\mathbf{P}}_o^{a'}$ respectively. One can easily verify from the definitions that \[
(Z_n)_{n\geq 1}=\left(\frac{a'(\wh{X}_n)}{a(\wh{X}_n)}\right)_{n\geq 1} \qquad \text{ and } \qquad (Z_n')_{n\geq 1}=\left(\frac{a(\wh{X}'_n)}{a'(\wh{X}'_n)}\right)_{n\geq 1} \] are both non-negative martingales with respect to their natural filtrations, and hence converge almost surely to some limiting random variables $Z$ and $Z'$. Since $Z$ and $Z'$ are measurable with respect to the invariant $\sigma$-algebras of $\wh{X}$ and $\wh{X}'$ respectively and $a$ and $a'$ are both extremal, there must exist constants $\alpha$ and $\alpha'$ such that $Z=\alpha$ and $Z'=\alpha'$ almost surely.
We also have that $\mathbb{E} Z_n = \mathbb{E} Z_n ' = 1$ for every $n\geq 1$ and hence that $\alpha,\alpha'\leq 1$. We wish to prove that $\alpha=0$.
It follows from \eqref{eq:Doob_transform_whole_path} that the conditional distributions of the initial segments $\wh{X}[0,m]$ and $\wh{X}'[0,m]$ are the same if we condition on $\wh{X}_m=\wh{X}'_m=v$ for any $v\in V$ and $m\geq 1$ and that \[ \frac{\wh{\mathbf{P}}_o^a(\wh{X}_m=v)}{\wh{\mathbf{P}}_o^{a'}(\wh{X}'_m=v)} = \frac{a(v)}{a'(v)} \] for every $m\geq 1$ and $v\in V$. If $\alpha>0$ then for every $\varepsilon>0$ there exists $M$ such that the distribution of $\wh{X}_m$ puts mass at least $1-\varepsilon$ on the set of vertices with $a'(v)/a(v) \geq (1-\varepsilon)\alpha $ for every $m\geq M$, and it follows that for each $m\geq M$ there is a coupling of the two walks $\wh{X}'$ and $\wh{X}$ so that their initial segments of length $m$ coincide with probability at least $(1-\varepsilon)^2\alpha$. Taking a weak limit as $m\to\infty$ and $\varepsilon \to 0$, it follows that there exists a coupling of the two walks $\wh{X}'$ and $\wh{X}$ such that the two walks coincide forever with probability at least $\alpha>0$. If we couple the walks in this way then on this event we must have that $Z'=1/Z$, which can occur with positive probability only if $\alpha'=1/\alpha$. Since $\alpha,\alpha'\leq 1$ we must have that $\alpha=\alpha'=1$ and that we can couple the two walks to be exactly the same almost surely. This is clearly only possible if $a=a'$, and since $a\neq a'$ by assumption we must have that $\alpha=0$.
\end{proof}
\subsection{Potential kernels and the uniform spanning tree}
We now use Lemma~\ref{L: local convergence} to show that the UST of a recurrent graph can always be sampled using a variant of Wilson's algorithm \cite{WilsonAlgorithm,BLPS} in which we `root at a point in $\c{H}$', where again we are thinking intuitively of $\c{H}$ as a kind of boundary at infinity of the graph. Fix $h \in \mathrm{ext}(\c{H})$ and let $\wh{X}^h$ be the conditioned walk of the previous section. Fix an enumeration $\{v_1,v_2,\ldots\}$ of $V$ with $v_1 = o$. Set $E_0 = \LE(\wh{X}^h[0, \infty))$ (which is well defined because $\wh{X}^h$ is transient) and for each $i\geq 1$ define $E_i$ given $E_{i- 1}$ recursively as follows: \begin{itemize}
\item if $v_i \in E_{i - 1}$, set $E_i = E_{i - 1}$
\item otherwise, set $E_i = E_{i -1} \cup \LE(Y[0, \tau))$ where $Y$ is the simple random walk started at $v_i$ and stopped at $\tau$, the hitting time of $E_{i - 1}$. \end{itemize} Last, define $T = \bigcup_{i = 0}^\infty E_i$. We refer to this procedure as \textbf{Wilson's algorithm rooted at $h$}. The random tree $T$ generated by Wilson's algorithm rooted at $h$ is clearly a spanning tree of $G$; the next lemma shows that it is distributed as the UST of $G$.
\begin{lemma}[Wilson meets Doob] \label{lem:Wilson} Let $G=(V,E)$ be an infinite, connected, locally finite, recurrent graph and let $h\in \mathrm{ext}(\c{H})$.
The tree $T$ generated by Wilson's algorithm rooted at $h$ is distributed as the uniform spanning tree of $G$. In particular, the law of $T$ is independent of the chosen enumeration of $V$ and the choice of $h\in \mathrm{ext}(\c{H})$. \end{lemma}
\begin{remark} It follows by taking convex combinations that the same statement also holds when $h$ is \emph{not} extremal. \end{remark}
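For concreteness, the following is a minimal Python sketch of Wilson's algorithm rooted at $h$ as described above; each branch is recorded as a list of oriented edges, in the direction crossed by the loop-erased walk. The routines \texttt{sample\_doob\_walk} (a suitably truncated trajectory of the transient walk $\wh{X}^h$) and \texttt{walk\_until\_hit} (a simple random walk run until it hits a given vertex set) are assumed to be supplied by the user and are not part of the construction above; the sketch is purely illustrative.
\begin{verbatim}
def loop_erase(path):
    """Chronological loop-erasure of a finite path given as a list of vertices."""
    erased, positions = [], {}
    for v in path:
        if v in positions:
            # returning to v closes a loop: erase everything after the first visit to v
            erased = erased[:positions[v] + 1]
            positions = {u: i for i, u in enumerate(erased)}
        else:
            positions[v] = len(erased)
            erased.append(v)
    return erased


def wilson_rooted_at_h(vertices, sample_doob_walk, walk_until_hit):
    """Sample a spanning tree via Wilson's algorithm rooted at h.

    vertices         -- enumeration v_1, v_2, ... of V with v_1 = o (finite in practice)
    sample_doob_walk -- returns a (truncated) trajectory of the Doob-transformed walk from o
    walk_until_hit   -- walk_until_hit(v, S): simple random walk from v stopped on hitting S
    """
    first_branch = loop_erase(sample_doob_walk())
    tree_vertices = set(first_branch)
    tree_edges = set(zip(first_branch[:-1], first_branch[1:]))
    for v in vertices:
        if v not in tree_vertices:
            branch = loop_erase(walk_until_hit(v, tree_vertices))
            tree_edges.update(zip(branch[:-1], branch[1:]))
            tree_vertices.update(branch)
    return tree_edges
\end{verbatim}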
We will deduce Lemma~\ref{lem:Wilson} from the following lemma, which allows us to think of the Doob-transformed walk $\wh{X}^h$ as a limit of conditioned simple random walks on $G$. For the purposes of this lemma we think of our walks as belonging to the space of sequences in $V$ equipped with the product topology.
\begin{lemma}[Local convergence] \label{L: local convergence} Let $G=(V,E)$ be an infinite, connected, locally finite, recurrent graph and suppose that $z_n$ is a sequence of vertices of $G$ such that $z_n$ converges to $h\in \c{H}$.
If $X$ denotes the random walk on $G$ started at $o$ and $\wh{X}^h$ denotes the Doob-transformation of $X$ as above, then the conditional law of $X$ given that it hits $z_n$ before first returning to $o$ converges weakly to the law of $\wh{X}^h$. \end{lemma}
\begin{proof}[Proof of Lemma~\ref{L: local convergence}]
This is a classical result concerning Doob transforms, and can also be deduced from the limit formula \eqref{eq:PK_limit}. We give a brief proof. Let $T_{z_n}$ be the first time the walk hits $z_n$, let $T_o^+$ be the first positive time the walk hits $o$, and let $\varphi = (o, \varphi_1, \ldots, \varphi_m)$ be a path of length $m$ for some $m\geq 1$ with $\varphi_i\neq o$ for every $i>0$. By the Markov property for the simple random walk,
\[
\mathbf{P}_o(X[0, m] = \varphi, T_{z_n} < T_o^+) = \mathbf{P}_o(X[0, m] = \varphi)\mathbf{P}_{\varphi_m}(T_{z_n} < T_o),
\] and it follows from \eqref{eq:Doob_transform_whole_path} that
\[
\mathbf{P}_o(X[0, m] = \varphi, T_{z_n} < T_o^+) = \frac{1}{\deg(o)a^h(\varphi_m, o)} \mathbf{P}_o(\wh{X}^h[0, m] = \varphi)\mathbf{P}_{\varphi_m}(T_{z_n} < T_o).
\]
The result follows upon multiplying both sides by the effective resistance between $o$ and $z_n$, using the representation \eqref{eq: PK def} for the potential kernel, and letting $n\to\infty$. \end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:Wilson}] The standard implementation of Wilson's algorithm rooted at $z_n$ allows us to sample the uniform spanning tree of $G$ in a manner exactly analogous to the above, except that we start with a walk run from $o$ until it first hits $z_n$. Now, it is a combinatorial fact that the \emph{loop erasure} of the walk run from $o$ until it first hits $z_n$ does not change its distribution if we condition the walk to hit $z_n$ before returning to $o$: Indeed, the loop-erasure of the entire unconditioned walk is equal to the loop-erasure of the final segment of the walk between its last visit to $o$ and its first visit to $z_n$, and this final segment is distributed as the conditioned walk. Thus, in the standard implementation of Wilson's algorithm, we do not change the distribution of the obtained tree if we condition the first walk to hit $z_n$ before returning to $o$. The claim then follows by taking a sequence of vertices $z_n$ converging to $h$, letting $n\to\infty$, and using Lemma~\ref{L: local convergence}.
\end{proof}
This leads to the following connection between the ends of the UST and the extremal points of the set of harmonic measures from infinity $\c{H}$.
\begin{proposition} \label{prop:ends_and_H} Let $G=(V,E)$ be an infinite, connected, locally finite, recurrent graph, let $T$ be the uniform spanning tree of $G$, and let $H$ be a countable subset of $\mathrm{ext}(\c{H})$. Almost surely, for each $h\in H$ there exists an infinite simple path $\Gamma=(\Gamma_1,\Gamma_2,\ldots)$ in $T$ such that \[ \frac{a^{h'}(\Gamma_i,x)}{a^{h}(\Gamma_i,x)}\to 0 \qquad \text{ as $i\to\infty$ for each $h' \in H \setminus \{h\}$.} \] In particular, $T$ almost surely has at least as many ends as there are extremal points of $\c{H}$. \end{proposition}
(In the last sentence of this proposition we are not distinguishing between different infinite cardinalities, but merely claiming that if $\c{H}$ has infinitely many extremal points then $T$ has infinitely many ends almost surely.)
\begin{proof} This is an immediate consequence of Proposition~\ref{prop:identifying_h_from_trace} and Lemma~\ref{lem:Wilson}. \end{proof}
\noindent \textbf{Orientations.} Let $G=(V,E)$ be an infinite, connected, locally finite, recurrent graph and let $h\in \mathrm{ext}(\c{H})$. When we generate the UST $T$ of $G$ using Wilson's algorithm rooted at $h$, the algorithm also provides a natural \emph{orientation} of $T$, where each edge is oriented in the direction that it is crossed by the loop-erased random walk that contributed that edge to the tree. When $T$ almost surely has the same number of ends as there are extremal points in $\c{H}$, and both numbers are finite (which will always be the case in the unimodular setting by the results of \cite{BvE21}), it follows from Proposition~\ref{prop:ends_and_H} that this orientation is a.s.\ determined by the (unoriented) tree: Almost surely, for each $h\in \mathrm{ext}(\c{H})$ and $v\in V$ there is a unique infinite ray $(\Gamma_1,\Gamma_2,\ldots)$ starting from $v$ such that \[ \frac{a^{h'}(\Gamma_i,v)}{a^{h}(\Gamma_i,v)}\to 0 \qquad \text{ as $i\to\infty$ for each $h' \in \mathrm{ext}(\c{H}) \setminus \{h\}$,} \] and if we orient the tree in the direction of this ray we must recover the same orientation as if we had generated the oriented tree using Wilson's algorithm rooted at $h$. This fact will play a key role in the proof of our main theorem.
\section{Proof of the main theorem} \label{sec:main_proof}
\subsection{Reversible and unimodular graphs} \label{subsec:definitions}
We now give a very brief introduction to unimodular random rooted graphs, referring the reader to \cite{AldousLyonsUnimod2007, CurNotes} for detailed introductions. Let us just recall that $\c{G}_{\bullet, \bullet}$ is the separable metric space of doubly rooted graphs $(G, x, y)$ (modulo graph isomorphisms), equipped with the \textit{local topology}, also known as \textit{Benjamini-Schramm topology}. Similarly defined is the space $\c{G}_{\bullet}$ of rooted graphs $(G, o)$. A \textit{mass transport} is a measurable function $f: \c{G}_{\bullet, \bullet} \to [0, \infty]$. A measure $\mathbb{P}$ on $\c{G}_{\bullet}$ is called \textit{unimodular} whenever the \emph{mass transport principle} \[ \wh{\mathbb{E}} \left[\sum_{x \in V} f(G, o, x)\right] = \wh{\mathbb{E}} \left[\sum_{x \in V} f(G, x, o)\right] \] holds for all mass transports $f$. A probability measure $\mathbb{P}$ on $\c{G}_{\bullet}$ is called \textit{reversible} if $(G, o, X_1) \overset{d}{=} (G, X_1, o)$ where $X_1$ is the first step of the simple random walk. The law $\mathbb{P}$ is called \textit{stationary} if $(G, o) \overset{d}{=} (G, X_1)$ and clearly any reversible graph is stationary. For recurrent graphs, stationarity and reversibility are equivalent \cite{BenjaminiCurien2012}.
If $\mathbb{P}$ is the law of a unimodular random graph, with finite expected degree, then biasing it by $\deg(o)$ gives a reversible random graph and whenever $\mathbb{P}$ is the law of a reversible random graph, then biasing by $\deg(o)^{-1}$ gives a unimodular random graph; see for example \cite{BenjaminiCurien2012}.
A set $A\subseteq \c{G}_\bullet$ is said to be \textbf{rerooting invariant} if $((g,v)\in A)\Rightarrow ((g,u)\in A)$ for every rooted graph $(g,v) \in \c{G}_\bullet$ and every $u$ in the vertex set of $g$. A unimodular random rooted graph $(G,o)$ is said to be \textbf{ergodic} if it has probability $0$ or $1$ of belonging to any given re-rooting invariant event in $\c{G}_\bullet$. As explained in \cite[Section 4]{AldousLyonsUnimod2007}, this is equivalent to the law of $(G,o)$ being extremal in the weakly compact convex set of unimodular probability measures on $\c{G}_\bullet$. As such, it follows by Choquet theory that every unimodular measure on $\c{G}_\bullet$ may be written as a mixture of ergodic unimodular measures. For our purposes, the upshot of this is that we may assume without loss of generality that $(G,o)$ is ergodic when proving Theorem~\ref{T:main}.
We will also rely on the following characterization of two-ended unimodular random rooted graphs due to Bowen, Kun, and Sabok \cite{bowen2021perfect}, which builds on work of Benjamini and the second author \cite{BenHutch}. Here, a graph $G$ is said to have \textbf{linear volume growth} if for each vertex $v$ of $G$ there exists a constant $C_v$ such that $|B(v,r)|\leq C_v r$ for every $r\geq 1$, where $B(v,r)$ denotes the graph distance ball of radius $r$ around $v$.
\begin{proposition}[\cite{bowen2021perfect}, Proposition 2.1] \label{prop:two_ended=linear_growth} Let $(G,o)$ be an infinite unimodular random rooted graph. Then $G$ is two-ended almost surely if and only if it has linear volume growth almost surely. \end{proposition}
To prove Theorem~\ref{T:main}, it will therefore suffice to prove that if $(G,o)$ is a recurrent unimodular random rooted graph whose UST is two-ended almost surely then $G$ has linear volume growth almost surely.
\subsection{The effective resistance is linear on the spine} \label{s: linear growth}
Let $\mathbb{P}$ be the joint law of an ergodic recurrent unimodular random rooted graph $(G,o)$ and its uniform spanning tree $T$, which we think of as a triple $(G,o,T)$. It follows by tail triviality of the UST \cite[Theorem 8.3]{BLPS} that the number of ends of $T$ is deterministic conditional on $(G,o)$, and since $(G,o)$ is ergodic that $T$ has some constant number of ends almost surely. Moreover, it follows from \cite[Theorem 6.2 and Proposition 7.1]{AldousLyonsUnimod2007} that this number of ends is either $1$ or $2$ almost surely, so that $T$ is either one-ended almost surely or two-ended almost surely.
We wish to prove that if $T$ is two-ended almost surely then $G$ is two-ended almost surely also. We will rely on the following theorem of Berestycki and the first author.
\begin{theorem}[\cite{BvE21}, Theorem 1] \label{T:harm and UST}
Let $(G, o)$ be a recurrent unimodular random rooted graph with $\mathbb{E}\deg(o)<\infty$. Almost surely, the uniform spanning tree of $G$ is one-ended if and only if the harmonic measure from infinity is uniquely defined. \end{theorem}
To avoid the unnecessary assumption that $\mathbb{E} \deg(o)<\infty$, we will use the following mild generalization of this theorem, whose proof is given in Appendix~\ref{Appendix:BvE}.
\begin{theorem}\label{T:harm and UST_no_degree_assumption}
Let $(G, o)$ be a recurrent unimodular random rooted graph. Almost surely, the uniform spanning tree of $G$ is one-ended if and only if the harmonic measure from infinity is uniquely defined. \end{theorem}
It follows from this theorem together with Proposition~\ref{prop:ends_and_H} that if $T$ is two-ended almost surely then
$|\mathrm{ext}(\c{H})|=2$ almost surely.
Suppose that $T$ is two-ended almost surely and let $\c{S}$ be the \textbf{spine} of $T$, i.e., the unique double-infinite simple path contained in $T$. We give $T$ an orientation by choosing uniformly at random one of the two ends of $\c{S}$ and directing every edge towards that end, letting the resulting oriented tree be denoted $T^\rightarrow$ with oriented spine $\c{S}^\rightarrow$. Since the law of $T^\rightarrow$ is a rerooting-equivariant function of the graph $(G,o)$, the triple $(G,T^\rightarrow,o)$ is unimodular. Since ``everything that can happen somewhere can happen at the root" \cite[Lemma 2.3]{AldousLyonsUnimod2007} we also have that the origin belongs to $\c{S}$ with positive probability
and hence that we can define a law $\mathbb{P}_{\mathcal{S}}$ on triplets $(G, T^\rightarrow, o)$ (which we can view as a rooted network) by conditioning $o$ to belong to~$\c{S}$.
The law $\mathbb{P}_{\mathcal{S}}$ has the very useful property that it is stationary under shifts along the spine, which we now define. Each vertex $v \in \c{S}$ has a unique oriented edge emanating from it in $\c{S}^\rightarrow$, and we will write $\sigma(v)$ for the vertex on the other end of this edge. The map $v\mapsto \sigma(v)$ can be thought of as a shift, following the orientation along the spine, and there is also a well-defined backwards shift $\sigma^{-1}$ mapping each $x\in \c{S}$ to the unique vertex $v\in\c{S}$ with $\sigma(v)=x$.
\begin{lemma}\label{lem:stationary}
The law $\mathbb{P}_{\mathcal{S}}$ is invariant under the shift $\sigma$.
\end{lemma}
\begin{proof}
Let $A$ be any Borel set of triples $(g,t^\rightarrow,v)$ where $(g,v)$ is a rooted graph and $t^\rightarrow$ is an oriented spanning tree of $g$, and define the mass transport
\[
f(g,t^\rightarrow, v, w) := \mathds{1}\left(\text{$t^\rightarrow$ is two-ended, $v$ is in the spine of $t^\rightarrow$, $w=\sigma(v)$, and $(g,t^\rightarrow,w)\in A$}\right).
\]
Note that for each vertex $w$ in the spine of $t^\rightarrow$ there is exactly one vertex $v$ in the spine such that $\sigma(v)=w$, and, vice versa, exactly one vertex $u$ such that $u=\sigma(w)$.
Therefore,
\[
\sum_{v \in V} f(G,T^\rightarrow, v, o) = \mathds{1}\left(\text{$T^\rightarrow$ is two-ended, $o$ is in the spine of $T^\rightarrow$, and $(G,T^\rightarrow,o)\in A$}\right)
\]
and
\[
\sum_{v \in V} f(G,T^\rightarrow, o, v) = \mathds{1}\left(\text{$T^\rightarrow$ is two-ended, $o$ is in the spine of $T^\rightarrow$, and $(G,T^\rightarrow,\sigma(o))\in A$}\right).
\]
Using the mass-transport principle we thus have that
\begin{multline*}
\mathbb{P}\left(\text{$T^\rightarrow$ is two-ended, $o$ is in the spine of $T^\rightarrow$, and $(G,T^\rightarrow,o)\in A$}\right) \\=
\mathbb{P}\left(\text{$T^\rightarrow$ is two-ended, $o$ is in the spine of $T^\rightarrow$, and $(G,T^\rightarrow,\sigma(o))\in A$}\right)
\end{multline*}
which shows the result because $\mathbb{P}(o \in \c{S}) > 0$ and $T$ is two-ended a.s. by assumption. \end{proof}
The main goal of this section is to show that along the spine of the UST, the effective resistances on the original graph must grow linearly under the assumption that the UST has two ends (and thus a well-defined spine). Heuristically, this tells us that if a graph is unimodular and the uniform spanning tree is two-ended, then the actual graph should in some sense be ``close'' to the line $\mathbb{Z}$.
\begin{proposition} \label{P:two-ends->linear}
The limit $\lim_{n\to\infty}\frac{1}{n} \mathcal{R}_{\mathrm{eff}}(o \leftrightarrow \sigma^n(o))=\lim_{n\to\infty}\frac{1}{n} \mathcal{R}_{\mathrm{eff}}(o \leftrightarrow \sigma^{-n}(o))$ exists and is positive
$\mathbb{P}_{\mathcal{S}}$-a.s. \end{proposition}
Note that the existence part of this proposition is an immediate consequence of the subadditive ergodic theorem; the content of the proposition is that the limit is positive.
As discussed above, it follows from Proposition~\ref{prop:ends_and_H} and Theorem~\ref{T:harm and UST} that, $\mathbb{P}_{\c{S}}$-almost surely, there are exactly two extremal elements of $\c{H}$, which we call ``$\ell$'' and ``$r$'', which satisfy \begin{align} \label{eq:identifying_ends} \frac{a^r(\sigma^n(o), v)}{a^\ell(\sigma^n(o), v)} \to \begin{cases} \infty & \text{ as $n\to +\infty$}\\ 0 & \text{ as $n\to-\infty$}\end{cases}
\end{align} for every $v\in V$. (In particular, the random choice of orientation of $T$ we made when defining $\mathbb{P}_{\c{S}}$ is equivalent to randomly choosing which of the two extremal elements of $\c{H}$ to call ``$r$''.)
Consider the function $V\to\mathbb{R}$ defined by \[
M_{o}(x) := a^r(x, o)-a^\ell(x, o). \]
We will show that $M_{o}(\sigma^n(o))$ grows linearly in $n$ and deduce from this that the effective resistance does too. The latter fact can be seen using \eqref{eq: PK def}, from which it follows that \[
M_{o}(x) = (r_{x, o}(x) - \ell_{x, o}(x)) \mathcal{R}_{\mathrm{eff}}(o \leftrightarrow x). \] In the remainder we will slightly abuse notation to write $M_{m}(n) := M_{\sigma^m(o)}(\sigma^n(o))$ for $n, m \in \mathbb{Z}$. The first main ingredient is that $M_o(n)$ is an \emph{additive cocycle}. \begin{lemma}\label{lem:cocycle}
$M_o(n + m) = M_o(n) + M_{n}(n + m)$ for every $n,m\in \mathbb{Z}$. \end{lemma}
\begin{proof}
This is a direct consequence of Proposition 3.5 in \cite{BvE21}, stating that
\[
a^{\#}(x, o) - a^{\#}(y, o) = a^{\#}(x, y) - \frac{\Gr_{y}(x, o)}{\deg(o)}
\]
for each $\# \in \{\ell,r\}$ and all $x, y \in V$. Indeed, it follows from this identity that
\begin{align*}
M_{o}(n + m) - M_o(n) &= a^r(\sigma^{n + m}(o),o)-a^\ell(\sigma^{n + m}(o),o) - a^r(\sigma^{n}(o),o) + a^\ell(\sigma^{n}(o),o)\\
&= \left[a^r(\sigma^{n + m}(o),\sigma^{n}(o))- \frac{\Gr_{\sigma^n(o)}(\sigma^{n+m}(o), o)}{\deg(o)}\right] \\&\hspace{4cm}- \left[a^\ell(\sigma^{n + m}(o),\sigma^{n}(o)) - \frac{\Gr_{\sigma^n(o)}(\sigma^{n+m}(o), o)}{\deg(o)}\right]\\
&= a^r(\sigma^{n + m}(o),\sigma^{n}(o)) - a^\ell(\sigma^{n + m}(o),\sigma^{n}(o)) =M_{n}(n + m)
\end{align*}
for every $n,m\in \mathbb{Z}$ as claimed.
\end{proof}
Let us also make note of the following key property of this additive cocycle.
\begin{lemma} \label{lem:eventual_positivity} $\mathbb{P}_{\c{S}}$-almost surely, $M_o(n)$ is positive for all sufficiently large positive $n$ and negative for all sufficiently large negative $n$. Moreover, \[ M_o(n) \sim a^r(\sigma^n(o), o) = r_{\sigma^n(o), o}(\sigma^n(o)) \mathcal{R}_{\mathrm{eff}}(o \leftrightarrow \sigma^n(o)) \] $\mathbb{P}_{\c{S}}$-almost surely as $n\to\infty$. \end{lemma}
\begin{proof} This follows immediately from \eqref{eq:identifying_ends} and the definition of $M_o(n)$. \end{proof}
We will deduce Proposition~\ref{P:two-ends->linear} from Lemma~\ref{lem:eventual_positivity} together with the following general fact about stationary sequences.
\begin{proposition} \label{prop:ergodic} Let $(Z_i)_{i\in \mathbb{Z}}$ be a stationary sequence of real-valued random variables and suppose that $\sum_{i=0}^n Z_{-i} >0$ for all sufficiently large $n$ almost surely. Then $\limsup_{n\to\infty} \frac{1}{n}\sum_{i=0}^n Z_i>0$ almost surely. \end{proposition}
\begin{proof} For each $n\in \mathbb{Z}$ let $R_n = \inf \{m \geq 0: \sum_{i=n}^{n+m}Z_i>0 \}$, so that $R_n=0$ whenever $Z_n>0$ and $(R_n)_{n\in \mathbb{Z}}$ is a stationary sequence of $\{0,1,\ldots\}$-valued random variables. It follows from the definitions that if $n \leq m$ then either $n+R_n < m$ or $n+R_n \geq m+R_m$, so that the intervals $[n,n+R_n]$ and $[m,m+R_m]$ are either disjoint or ordered by inclusion. On the other hand, we have by stationarity and the hypotheses of the Proposition that for each $n\in \mathbb{Z}$ there almost surely exists $N_n<\infty$ such that $\sum_{i=n-m}^{n-1} Z_{i} >0$ for every $m \geq N_n$ and hence that $R_{n-m}+(n-m) < n$ for every $m \geq N_n$, so that each $n\in \mathbb{Z}$ is contained in at most finitely many of the intervals $[m,m+R_m]$ almost surely. Using the fact that these intervals are either disjoint or ordered by inclusion, it follows that there is a unique decomposition of $\mathbb{Z}$ into maximal intervals of this form \[ \mathbb{Z} = \bigcup\Bigl\{[k,k+R_k] : k \in \mathbb{Z}, [k,k+R_k] \nsubseteq [m,m+R_m] \text{ for every $m\in \mathbb{Z} \setminus \{k\}$}\Bigr\}. \] Thus, if we define $Y_n$ by \[ Y_n = \begin{cases}\sum_{i=k}^{n} Z_i & n = k+R_k \text{ for some $k\in \mathbb{Z}$ such that $[k,k+R_k]$ is maximal} \\ 0 & \text{ otherwise} \end{cases} \] then $(Y_n)_{n\in \mathbb{Z}}$ is a stationary sequence of non-negative random variables such that $Y_n$ is positive whenever $n$ is the right endpoint of a maximal interval. Since $Y_n$ is non-negative and the set of $n$ such that $Y_n \neq 0$ is almost surely non-empty, it follows from the ergodic theorem applied to $(\min\{Y_n,1\})_{n\in\mathbb{Z}}$ that \[ \liminf_{n\to\infty} \frac{1}{n}\sum_{i=0}^n Y_i >0 \] almost surely. The claim follows since if $-m$ is the left endpoint of the maximal interval containing $0$ then \[\sum_{i=0}^n Y_i = \sum_{i=-m}^n Z_i\] for every $n$ that is the right endpoint of some maximal interval.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{P:two-ends->linear}] It follows from Lemma~\ref{lem:stationary} that $(M_n(n+1))_{n\in \mathbb{Z}}$ is a stationary sequence under $\mathbb{P}_{\c{S}}$ and from Lemma~\ref{lem:cocycle} that $M_o(n)=\sum_{i=0}^{n-1}M_i(i+1)$ and $M_o(-n)=-\sum_{i=-n}^{-1}M_i(i+1)$ for every $n\geq 0$. Thus, Lemma~\ref{lem:eventual_positivity} implies that the stationary sequence $(M_n(n+1))_{n\in \mathbb{Z}}$ satisfies the hypotheses of Proposition~\ref{prop:ergodic} and hence that \[ \limsup_{n\to\infty} \frac{M_o(n)}{n} >0 \] almost surely. On the other hand, the subadditive ergodic theorem implies that the limit $\lim_{n\to\infty}\frac{1}{n} \mathcal{R}_{\mathrm{eff}}(o \leftrightarrow \sigma^n(o))$ exists
$\mathbb{P}_{\mathcal{S}}$-a.s., and since \[ M_o(n)=\left(r_{\sigma^n(o), o}(\sigma^n(o)) -\ell_{\sigma^n(o), o}(\sigma^n(o)) \right)\mathcal{R}_{\mathrm{eff}}(o \leftrightarrow \sigma^n(o)) \leq \mathcal{R}_{\mathrm{eff}}(o \leftrightarrow \sigma^n(o)) \] we must have that \[\lim_{n\to\infty}\frac{1}{n} \mathcal{R}_{\mathrm{eff}}(o \leftrightarrow \sigma^n(o))>0\] $\mathbb{P}_{\mathcal{S}}$-a.s.\ as claimed. The fact that the negative-$n$ limit $\lim_{n\to\infty}\frac{1}{n} \mathcal{R}_{\mathrm{eff}}(o \leftrightarrow \sigma^{-n}(o))$ also exists and is equal to the positive-$n$ limit a.s.\ follows from the subadditive ergodic theorem. \end{proof}
\subsection{Completing the proof}
We now complete the proof of the main theorem.
\begin{proof}[Proof of Theorem~\ref{T:main}] It suffices by Proposition~\ref{prop:two_ended=linear_growth} to prove that if $(G,o)$ is a recurrent unimodular random rooted graph whose UST is two-ended almost surely then $G$ has linear volume growth almost surely. As before, we write $\c{S}$ for the spine of the oriented UST $T^\rightarrow$, write $\mathbb{P}_{\c{S}}$ for the conditional law of $(G,T^\rightarrow,o)$ given that $o\in \c{S}$, and write $\sigma$ for the shift along the spine as in Lemma~\ref{lem:stationary}.
For each $x\in V$ let $S(x)$ be an element of $\c{S}$ of minimal graph distance to $x$, choosing one of the finitely many possibilities uniformly and independently at random for each $x$ where this point is not unique.
Letting $S^{-1}(v)=\{x\in V:S(x)=v\}$ for each $v\in \c{S}$, we have by the mass-transport principle that
\begin{align*}
\mathbb{E}_{\c{S}}|S^{-1}(o)| = \mathbb{E}\bigl[ |S^{-1}(o)| \mid o \in \c{S}\bigr] &= \mathbb{P}(o\in \c{S})^{-1}\mathbb{E}\left[ \sum_{x\in V} \mathds{1}(o=S(x))\right] \\&= \mathbb{P}(o\in \c{S})^{-1}\mathbb{E}\left[ \sum_{x\in V} \mathds{1}(x=S(o))\right]=\mathbb{P}(o\in \c{S})^{-1}<\infty.
\end{align*}
We thus have a stationary sequence of random variables $(|S^{-1}(\sigma^i(o))|)_{i \in \mathbb{Z}}$ with uniformly finite mean, and the ergodic theorem implies that
\begin{equation}
\label{eq:ergodic_theorem_S-1}
\lim_{n\to \infty} \frac{1}{2n}\sum_{i=-n}^n |S^{-1}(\sigma^i(o))| <\infty
\end{equation}
almost surely. On the other hand, letting $B(o,r)$ be the graph distance ball of radius $r$ around $o$ for each $r\geq 1$, we have by definition of $S^{-1}$ that
\begin{equation}
\label{eq:B_containment} B(o,r) \subseteq \bigcup \left\{S^{-1}\left(\sigma^n(o)\right):n\in \mathbb{Z}, d(o,\sigma^n(o))\leq 2r\right\}
\end{equation}
for each $r\geq 1$. Proposition~\ref{P:two-ends->linear} together with the trivial inequality $\mathcal{R}_{\mathrm{eff}}(x\leftrightarrow y)\leq d(x,y)$ imply that there exists a positive constant $c>0$ such that $d(o,\sigma^n(o))\geq c|n|$ for all sufficiently large (positive or negative) $n$ almost surely, and together with \eqref{eq:ergodic_theorem_S-1} and \eqref{eq:B_containment} this implies that $\limsup_{r\to \infty} \frac{1}{r}|B(o,r)|<\infty$ almost surely. This completes the proof. \end{proof}
\appendix
\section{Uniqueness of the potential kernel implies one-endedness of the UST, without finite expected degree}
\label{Appendix:BvE}
In this appendix we prove Theorem~\ref{T:harm and UST_no_degree_assumption}, which generalizes the theorem of Berestycki and the first author concerning the equivalence of the UST being one-ended and uniqueness of the harmonic measure from infinity to the case that the unimodular random rooted graph does not necessarily have finite expected degree. A secondary purpose of this appendix is to give a brief and self-contained account of those results of \cite{BvE21} that are needed for our main results. Since recurrent graphs whose USTs are one-ended always have unique harmonic measure from infinity \cite[Theorem 14.2]{BLPS}, it suffices to prove that the converse holds under the additional assumption of unimodularity. Moreover, it suffices as usual to consider the case that $(G,o)$ is ergodic.
Suppose that $(G,o)$ is an ergodic recurrent unimodular random rooted graph for which $\c{H}$ is a singleton almost surely. We write $h$ for the unique element of $\c{H}$ and $a$ for the corresponding potential kernel. For each $c>0$ consider the event $A_c=\{\limsup_{x\to \infty}h_{x,o}(x)\geq c\}=\{$for each $\varepsilon>0$ there exist infinitely many vertices $x$ with $h_{x,o}(x)\geq c-\varepsilon\}$. As explained in detail in \cite[Lemma 5.3]{BvE21} (which concerns deterministic recurrent graphs), we have that \begin{equation} \label{eq:h_asymptotic} h_{x,o}(x)\sim h_{x,w}(x) \qquad \text{ as $x\to \infty$ for each fixed $w\in V$}, \end{equation} which
implies that $A_c$ is re-rooting invariant. Since $(G,o)$ was assumed to be ergodic we deduce the following.
\begin{lemma} Let $(G,o)$ be an ergodic unimodular random rooted graph. If $G$ is almost surely recurrent with a uniquely defined harmonic measure from infinity then the event $A_c$ has probability $0$ or $1$ for each $c\in (0,1)$. \end{lemma}
The next lemma is proven in \cite{BvE21} using an argument that relies on reversibility (and hence on the assumption $\mathbb{E} \deg(o)<\infty$). We give an alternative proof using F{\o}lner sequences that works without this assumption.
\begin{lemma} \label{lem:A_1/2} Let $(G,o)$ be an ergodic unimodular random rooted graph. If $G$ is almost surely recurrent with a uniquely defined harmonic measure from infinity then the event $A_{1/2}$ holds almost surely. \end{lemma}
\begin{proof} It suffices to prove that $A_{c}$ holds with positive probability for every $c<1/2$. Since $(G,o)$ is recurrent, it follows from the results of \cite[\S 8]{AldousLyonsUnimod2007} that $(G,o)$ is \emph{hyperfinite}, meaning that there exists a sequence of random subsets $(\omega_n)_{n\geq 1}$ of $E$ such that \begin{enumerate}
\item Every component of the subgraph spanned by $\omega_n$ is finite almost surely for each $n\geq 1$.
\item $\omega_n \subseteq \omega_{n+1}$ for each $n\geq 1$ and $\bigcup_{n\geq 1} \omega_n = E$.
\item The random rooted edge-labelled graph $(G,o,(\omega_n)_{n\geq 1})$ is unimodular. \end{enumerate} Let $n\geq 1$ and let $K_n$ be the component of $o$ in $\omega_n$. Then we have by the mass-transport principle that \[
\mathbb{E}\left[\frac{1}{|K_n|}\sum_{x\in K_n}\mathds{1}\left(h_{x,o}(x)\geq \frac{1}{2}\right)\right] =
\mathbb{E}\left[\frac{1}{|K_n|}\sum_{x\in K_n}\mathds{1}\left(h_{x,o}(o)\geq \frac{1}{2}\right)\right], \] and since the sum of the two sides is at least $1$ it follows that \[
\mathbb{E}\left[\frac{1}{|K_n|}\sum_{x\in K_n}\mathds{1}\left(h_{x,o}(x)\geq \frac{1}{2}\right)\right] \geq \frac{1}{2} \] and hence by Markov's inequality that \[
\mathbb{P}\left(\left|\{x\in K_n : h_{x,o}(x)\geq \tfrac{1}{2}\}\right| \geq \tfrac{1}{4}|K_n| \right) \geq 1- \frac{4}{3} \mathbb{E}\left[\frac{1}{|K_n|}\sum_{x\in K_n}\mathds{1}\left(h_{x,o}(x)< \frac{1}{2}\right)\right] \geq \frac{1}{3}. \]
Since $|K_n|\to \infty$ almost surely as $n\to\infty$, it follows from this and Fatou's lemma that \[
\mathbb{P}(A_{1/2})\geq \mathbb{P}\left(\left|\{x\in K_n : h_{x,o}(x)\geq \tfrac{1}{2}\}\right| \geq \tfrac{1}{4}|K_n| \text{ for infinitely many $n$}\right) \geq \frac{1}{3} \] and hence by ergodicity that $\mathbb{P}(A_{1/2})=1$ as claimed. \end{proof}
\begin{lemma} \label{lem:hit_sets_with_h_lower_bound} Let $G=(V,E)$ be an infinite, connected, locally finite recurrent graph with uniquely defined harmonic measure from infinity $h$, let $o\in V$ and let $a$ be the associated potential kernel. If $A$ is any infinite set of vertices with $\inf_{x\in A}h_{x,o}(x)>0$, the Doob-transformed walk $\wh{X}$ visits $A$ infinitely often almost surely. \end{lemma}
\begin{proof} We have by \eqref{eq:hitting prop CRW} that $\wh{\mathbf{P}}_o(\wh{X}$ hits $x)=h_{o,x}(x)$ for every $x\in V$, and it follows by Fatou's lemma that $\wh{\mathbf{P}}($hit $A$ infinitely often$)\geq \inf_{x\in A}h_{x,o}(x)>0$. On the other hand, we have by Theorem~\ref{thm:Liouville} and the assumption that $h$ is unique that $\wh{X}$ has trivial tail $\sigma$-algebra, so that $\wh{\mathbf{P}}($hit $A$ infinitely often$)=1$ as claimed. \end{proof}
\begin{proposition} Let $G=(V,E)$ be an infinite, connected, locally finite recurrent graph with uniquely defined harmonic measure from infinity $h$, let $o\in V$, let $a$ be the associated potential kernel, and suppose that $\liminf_{x\to\infty} h_{x,o}(x)>0$. If $\wh{X}$ and $\wh{Y}$ are independent copies of the Doob-transformed walk started at some vertices $x$ and $y$, then $\{\wh{X}_n:n\geq 0\}\cap \{\wh{Y}_n:n \geq 0\}$ is infinite almost surely. \end{proposition}
\begin{proof} Let $\delta>0$ be such that $A=\{x\in V: h_{x,o}(x)\geq \delta\}$ is infinite. Applying Lemma~\ref{lem:hit_sets_with_h_lower_bound} yields that $\wh{X}\cap A$ is infinite almost surely, and applying Lemma~\ref{lem:hit_sets_with_h_lower_bound} a second time yields that $\wh{Y}\cap \wh{X} \cap A$ is infinite almost surely. \end{proof}
Applying this proposition together with the results of \cite{Lyons_2003}, which imply that an independent Markov process and loop-erased Markov process intersect infinitely often almost surely whenever the corresponding two independent Markov processes do, we deduce the following immediate corollary.
\begin{corollary} \label{cor:intersection_property2} Let $G=(V,E)$ be an infinite, connected, locally finite recurrent graph with uniquely defined harmonic measure from infinity $h$, let $o\in V$, let $a$ be the associated potential kernel, and suppose that $\liminf_{x\to\infty} h_{x,o}(x)>0$. If $\wh{X}$ and $\wh{Y}$ are independent copies of the Doob-transformed walk started at some vertices $x$ and $y$, then $\{\wh{X}_n:n\geq 0\}\cap \{\mathrm{LE}(\wh{Y})_n:n \geq 0\}$ is infinite almost surely. \end{corollary}
\begin{proposition} \label{prop:tip_escape} Let $G=(V,E)$ be an infinite, connected, locally finite recurrent graph with uniquely defined harmonic measure from infinity $h$, let $o\in V$, let $a$ be the associated potential kernel, and suppose that $\liminf_{x\to\infty} h_{x,o}(x)>0$. For each $x\in V$, let $X$ be a random walk started at $x$ and let $\wh{Y}$ be a Doob-transformed walk started at $o$. Then \[ \lim_{x\to \infty}\mathbb{P}\left(\{X_n:0\leq n \leq T_o\} \cap \{\mathrm{LE}(\wh{Y})_m: m \geq 0\}=\{o\}\right)=0. \] \end{proposition}
\begin{proof} As $x\to \infty$, the law of the time-reversed final segment $(X_{T_o},X_{T_o-1},\ldots,X_{T_o-k})$ converges to that of $(\wh{X}_0,\ldots,\wh{X}_k)$ for each $k\geq 1$, and the claim follows from Corollary~\ref{cor:intersection_property2}. \end{proof}
\begin{proof}[Proof of Theorem~\ref{T:harm and UST_no_degree_assumption}] The fact that $G$ has a unique harmonic measure from infinity means that we can endow the uniform spanning tree of $G$ with an orientation in a canonical way: Suppose that we exhaust $V$ by finite sets $V=\bigcup V_n$ and let $G_n^*$ be defined by contracting $V\setminus V_n$ into a single boundary vertex $\partial_n$, so that the UST of $G$ can be expressed as the weak limit of the USTs of the graphs $G_n^*$. If for each $n\geq 1$ we orient the UST of $G_n^*$ towards the boundary vertex $\partial_n$ to obtain an oriented tree $T_n^\rightarrow$, then the uniqueness of the harmonic measure from infinity on $G$ implies that the law of $T_n^\rightarrow$ converges weakly to the law of an oriented spanning tree $T^\rightarrow$ of $G$, which can be thought of as a canonical (but potentially random) orientation of the UST of $G$. Indeed, if we fix an enumeration $v_1,v_2,\ldots$ of $V$ with $v_1=o$ we can sample $T_n^\rightarrow$ using Wilson's algorithm rooted at $\partial_n$, starting with the vertices in the order they appear in the enumeration of $V$, and orienting the edges of the tree in the direction they are crossed by the loop-erased walk that contributed them to the tree. In the infinite-volume limit (since only the part of the first walk after its final visit to $o$ contributes to its loop erasure), this corresponds to doing Wilson's algorithm where the first walk started at $o$ is Doob-transformed and the remaining walks are ordinary simple random walks.
An important consequence of this discussion is that if we sample the oriented uniform spanning tree using Wilson's algorithm rooted at infinity, where the first random walk is a Doob-transformed walk started at $o$ and the remaining walks are ordinary simple random walks, the distribution of the resulting oriented tree $T^\rightarrow$ does not depend on the choice of the root vertex $o$, since it is given by the limit of the USTs of $G_n^*$ oriented towards $\partial_n$ independently of the choice of exhaustion. Given the oriented tree $T^\rightarrow$, we say that a vertex $u$ is in the \textbf{future} of a vertex $v$ if the unique infinite oriented path emanating from $v$ passes through $u$, and say that $u$ is in the \textbf{past} of $v$ if $v$ is in the future of $u$.
Let $(\omega_n)_{n\geq1}$ be a sequence witnessing the fact that $(G,o)$ is hyperfinite as in the proof of Lemma~\ref{lem:A_1/2} and let $K_n$ be the cluster of $o$ in $\omega_n$ for each $n\geq 1$. We have by the mass-transport principle that \[
\mathbb{E}\left[\frac{1}{|K_n|}\sum_{x\in K_n}\mathds{1}\left(\text{$x$ in past of $o$}\right)\right] =
\mathbb{E}\left[\frac{1}{|K_n|}\sum_{x\in K_n}\mathds{1}\left(\text{$x$ in future of $o$}\right)\right]. \] On the other hand, letting $\c{S}$ be the set of vertices belonging to a doubly infinite path in $T$, we also have that \[
\mathbb{E}\left[\frac{1}{|K_n|}\sum_{x\in K_n}\mathds{1}\left(\text{$x$ in past or future of $o$}\right)\right] \geq \mathbb{E}\left[\frac{1}{|K_n|}\sum_{x\in K_n} \mathds{1}(\text{$o,x\in \c{S}$})\right] \] and we can use the mass-transport principle again to bound \begin{align*}
\mathbb{E}\left[\frac{1}{|K_n|}\sum_{x\in K_n} \mathds{1}(o,x\in \c{S})\right] &=\mathbb{E}\left[\frac{1}{|K_n|^2}\sum_{x,y\in K_n} \mathds{1}(o,x\in \c{S})\right]=\mathbb{E}\left[\frac{1}{|K_n|^2}\sum_{x,y\in K_n} \mathds{1}(x,y\in\c{S})\right]
\\&= \mathbb{E}\left[\left(\frac{|K_n \cap \c{S}|}{|K_n|}\right)^2\right] \geq \mathbb{E}\left[\frac{|K_n \cap \c{S}|}{|K_n|}\right]^2 = \mathbb{P}(o\in \c{S})^2. \end{align*} Putting these two estimates together, it follows that \begin{equation} \label{eq:past_and_S}
\mathbb{E}\left[\frac{1}{|K_n|}\sum_{x\in K_n}\mathds{1}\left(\text{$x$ in past of $o$}\right)\right] \geq \frac{1}{2}\mathbb{P}(o\in \c{S})^2. \end{equation} On the other hand, if we sample $T^\rightarrow$ using Wilson's algorithm rooted at infinity, starting with a Doob-transformed walk $\wh{Y}$ started at $o$ followed by an ordinary random walk $X$ started at $x$, the vertex $x$ belongs to the past of $o$ if and only if the walk $X$ first hits the loop-erasure of $\wh{Y}$ at the vertex $o$. Proposition~\ref{prop:tip_escape} implies that this probability tends to zero as $x\to\infty$, and it follows by bounded convergence that \begin{equation} \label{eq:density_zero_past}
\mathbb{E}\left[\frac{1}{|K_n|}\sum_{x\in K_n}\mathds{1}\left(\text{$x$ in past of $o$}\right)\right]\to 0 \end{equation} as $n\to\infty$. Putting together \eqref{eq:past_and_S} and \eqref{eq:density_zero_past} yields that $\mathbb{P}(o\in \c{S})=0$. Since ``everything that can happen somewhere can happen at the root" \cite[Lemma 2.3]{AldousLyonsUnimod2007}, it follows that $\c{S}=\emptyset$ almost surely and hence that $T$ is one-ended almost surely as claimed. \end{proof}
\end{document}
\begin{document}
\title{ Generalized Jacobi and Gauss-Seidel method for solving non-square linear systems
}
\author{ Manideepa Saha\thanks{Department of Mathematics, National Institute of Technology Meghalaya, Shillong 793003, India ([email protected]).}
}
\pagestyle{myheadings} \markboth{M.\ Saha
}{Generalized Jacobi and Gauss-Seidel method } \maketitle
\begin{abstract} The main goal of this paper is to generalize the Jacobi and Gauss-Seidel methods for solving non-square linear systems. Towards this goal, we present iterative procedures to obtain an approximate solution of a non-square linear system. We derive sufficient conditions for the convergence of these iterative methods. A procedure is given to show how an exact solution can be obtained from these methods. Lastly, an example is considered to compare these methods with another available method for the same problem.
\end{abstract}
\begin{keywords} Iterative method, Jacobi, Gauss-Seidel, convergence. \end{keywords} \begin{AMS}15A06,~65F15,~65F20,~65F50
\end{AMS}
\section{Introduction} \label{intro3} The idea of solving large square systems of linear equations by iterative methods is certainly not new, dating back at least to Gauss [1823]. The Jacobi and Gauss-Seidel methods are among the classical stationary iterative methods that date to the nineteenth century, but they find current application in problems where the matrix is sparse. Both of these methods are designed for finding an approximate solution to square linear systems. The purpose of this paper is to provide an iterative procedure for solving a non-square linear system. In particular, we generalize the Jacobi and Gauss-Seidel methods to non-square linear systems.
Very recently, in \cite{WheA17}, the authors considered non-square linear systems in which the number of variables exceeds the number of equations, and described a new iterative procedure along with a convergence analysis. They also discussed some of its applications. Motivated by their work, and using a similar technique, we generalize the Jacobi and Gauss-Seidel procedures to non-square linear systems with more variables than equations. More specifically, we apply the Jacobi or Gauss-Seidel iterative method to the square part of the system and apply the iterative method described in \cite{WheA17} to the non-square part of the system to obtain an approximate solution of the system. We also derive sufficient conditions for the convergence of this procedure. Finally, we provide a procedure to obtain an exact solution of the system. A numerical illustration is given for the same, and to compare the procedure with the one available in~\cite{WheA17}.
\section{Preliminaries} \label{convention} Consider the linear system $Ax=b$, where $A\in\mathbb{R}^{m,n}$ and $b\in\mathbb{R}^{m}$ with $m<n$. We now state the iterative procedure described in \cite{WheA17}. If $z^{(k)}$ is the approximate solution obtained after the $k$th iteration, then \begin{equation}\label{eqnWA1} z^{(k+1)}=z^{(k)}+s(A)d^{(k)}_{A} \end{equation}
where $s(A)$ is the $n\times m$ matrix with entries in $\{1,0,-1\}$ indicating the signs of the elements of $A^{T}$, and $d^{(k)}_{A}$ is the column $m$-vector of weighted residuals, that is, $d^{(k)}_{A}=\left[\frac{b_{1}-A_{1}z^{(k)}}{m\|A_{1}\|_{1}},\frac{b_{2}-A_{2}z^{(k)}}{m\|A_{2}\|_{1}},\ldots,\frac{b_{m}-A_{m}z^{(k)}}{m\|A_{m}\|_{1}}\right]^{T}$, where $A_{i}$ denotes the $i$th row vector of $A$ and $\|\cdot\|_{1}$ denotes the $l_{1}$-norm.
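In matrix--vector form, a single update of the iteration \eqref{eqnWA1} can be sketched in Python as follows (a minimal illustration using \texttt{numpy}; the stopping criterion and the choice of the starting vector are left to the user).
\begin{verbatim}
import numpy as np

def wa_step(A, b, z):
    """One step z -> z + s(A) d_A of the iteration of [WheA17]."""
    m = A.shape[0]
    s = np.sign(A).T                    # n x m sign matrix s(A)
    row_norms = np.abs(A).sum(axis=1)   # ||A_i||_1 for each row i
    d = (b - A @ z) / (m * row_norms)   # vector of weighted residuals
    return z + s @ d
\end{verbatim}
Here \texttt{np.sign} returns $0$ for zero entries, matching the convention that $s(A)$ has entries in $\{1,0,-1\}$.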
\section{Generalized Jacobi and Gauss-Seidel Method} In this section we describe iterative procedures based on the iterative procedure (\ref{eqnWA1}) and on the Jacobi or Gauss-Seidel procedure for square matrices, and call them the generalized Jacobi and generalized Gauss-Seidel methods, respectively.
\subsection{Steps for generalized Jacobi and generalized Gauss-Seidel Method}
Our notation will be similar to that used in the previous section. As $m<n$, we can partition $A$ as $A=[B~ \tilde{B}]$ with $B\in\mathbb{R}^{m,m}$, and $\tilde{B}\in\mathbb{R}^{m,n-m}$. Analogously, partition the initial approximation $x^{(0)}=\left[\begin{array}{l}
x^{(0)}_{1}\\x^{(0)}_{2}
\end{array}\right]$, conformably with $A$. Let $\epsilon$ be a given threshold. Followings are the steps of the procedures.
If $\|Ax^{(0)}-b\|<\epsilon$, then the initial guess $x^{(0)}$ is already an approximate solution of the system $Ax=b$. Otherwise, the following steps are executed:
\begin{description}
\item[Step 1 :] Set iteration count $k\leftarrow 0$. Calculate $\tilde{b}^{(k)}=b-Bx^{(k)}_{1}$. Apply the iterative procedure proposed in~\cite{WheA17} to the system $\tilde{B}y=\tilde{b}^{(k)}$ with initial guess $x^{(k)}_{2}$ to get $x^{(k+1)}_{2}$, that is,
\[x^{(k+1)}_{2}\leftarrow x^{(k)}_{2}+s(\tilde{B}) d^{(k)}\]
where $s(\tilde{B})$ is an $(n-m)\times m$ matrix whose elements represent the signs of the elements of $\tilde{B}^T$, and $d^{(k)}$ is an $m$-vector whose $i$th entry is $d^{(k)}_{i}=\frac{\tilde{b}^{(k)}_{i}-\tilde{B}_{i}x^{(k)}_{2}}{m\|\tilde{B}_{i}\|_{1}}$.
\item[Step 2 :] Set $\hat{b}^{(k)}=b-\tilde{B}x^{(k+1)}_{2}$. Apply the Jacobi or the Gauss-Seidel method to the square linear system $Bz=\hat{b}^{(k)}$ with initial guess $x^{(k)}_{1}$ to get $x^{(k+1)}_{1}$, that is,
\[x^{(k+1)}_{1}\leftarrow D^{-1}(-Rx^{(k)}_{1}+\hat{b}^{(k)})~~(\text{in the case of the Jacobi method})\]
where $D=\diag(B)$ and $R=B-D$,
\[\text{or, }x^{(k+1)}_{1}\leftarrow L^{-1}(-Rx^{(k)}_{1}+\hat{b}^{(k)})~~(\text{in the case of the Gauss-Seidel method})\]
where $L=\tril(B)$ is the lower triangular part of $B$ and $R=B-L$.
\item[Step 3 :] Set $x^{(k+1)}=\left[\begin{array}{l}
x^{(k+1)}_{1}\\x^{(k+1)}_{2}
\end{array}\right]$, where $x^{(k+1)}_{2}$ is obtained from Step 1 and $x^{(k+1)}_{1}$ is obtained from Step 2.
\item[Step 4 :] Repeat the above steps (increasing $k$ by one each time) until $\|Ax^{(k+1)}-b\|<\epsilon$.
\end{description}
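The steps above can be summarised in the following Python sketch (again using \texttt{numpy}). The function \texttt{wa\_step} is the single update of the method of \cite{WheA17} sketched in Section~\ref{convention}; the threshold \texttt{eps}, the maximum number of iterations, and any reordering of the columns of $A$ are left to the user, so this is a minimal illustration rather than a full implementation.
\begin{verbatim}
import numpy as np

def generalized_iteration(A, b, x0, eps=1e-8, max_iter=1000, method="jacobi"):
    """Generalized Jacobi / Gauss-Seidel iteration for Ax = b, A of size m x n, m < n."""
    m, n = A.shape
    B, B_tilde = A[:, :m], A[:, m:]          # partition A = [B  B_tilde]
    x1, x2 = x0[:m].copy(), x0[m:].copy()
    M = np.diag(np.diag(B)) if method == "jacobi" else np.tril(B)
    R = B - M
    for _ in range(max_iter):
        if np.linalg.norm(A @ np.concatenate([x1, x2]) - b) < eps:
            break
        # Step 1: update x2 with the iteration of [WheA17] applied to B_tilde y = b - B x1
        x2 = wa_step(B_tilde, b - B @ x1, x2)
        # Step 2: one Jacobi / Gauss-Seidel sweep on B z = b - B_tilde x2
        b_hat = b - B_tilde @ x2
        x1 = np.linalg.solve(M, -R @ x1 + b_hat)
    return np.concatenate([x1, x2])
\end{verbatim}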
\subsection{Convergence analysis} Assume that the system $Ax=b$ has a solution and let $x^{(k)}=\left[\begin{array}{l}
x^{(k)}_{1}\\x^{(k)}_{2}
\end{array}\right]$ be the approximate solution obtained after the $k$th iteration step, where $x^{(k)}_{1}\in\mathbb{R}^{m}$ and $x^{(k)}_{2}\in\mathbb{R}^{n-m}$.
From Step 1 of the iterative procedure we get \begin{equation}\label{eqn0} x^{(k+1)}_{2}= x^{(k)}_{2}+s(\tilde{B}) d^{(k)} \end{equation}
If we use the Jacobi method in Step 2, then \begin{equation}\label{eqn1}x^{(k+1)}_{1}=D^{-1}\left(-Rx^{(k)}_{1}+\hat{b}^{(k)}\right)\end{equation}
Note that $d^{(k)}_{i}=\frac{\tilde{b}^{(k)}_{i}-\tilde{B}_{i}x^{(k)}_{2}}{m\|\tilde{B}_{i}\|_{1}}=\frac{b_{i}-B_{i}x^{(k)}_{1}-\tilde{B}_{i}x^{(k)}_{2}}{m\|\tilde{B}_{i}\|_{1}}=\frac{b_{i}-A_{i}x^{(k)}}{m\|\tilde{B}_{i}\|_{1}}$.
If we take $N(\tilde{B})=\diag(\|\tilde{B}_{1}\|_1, \|\tilde{B}_{2}\|_1,\ldots, \|\tilde{B}_{m}\|_1)$, then $d^{(k)}=\frac{1}{m}N(\tilde{B})^{-1}(b-Ax^{(k)})$, provided $N(\tilde{B})$ is nonsingular. Hence equation (\ref{eqn1}) can be further reduced to \begin{flalign}\label{eqn2} x^{(k+1)}_{1}=x^{(k)}_{1}-D^{-1}\left(I-\frac{1}{m}\tilde{B}*s(\tilde{B})*N(\tilde{B})^{-1}\right)\left(Ax^{(k)}-b\right) \end{flalign} where $*$ denotes matrix multiplication. As $Ax^{(k+1)}=Bx^{(k+1)}_{1}+\tilde{B}x^{(k+1)}_{2}$, equations (\ref{eqn0}) and (\ref{eqn2}) imply that
\begin{flalign} \label{eqn3}
Ax^{(k+1)}-b&= (I-BD^{-1})\left(I-\frac{1}{m}\tilde{B}*s(\tilde{B})*N(\tilde{B})^{-1}\right)\left(Ax^{(k)}-b\right)
\end{flalign}
This shows that if $\|I-BD^{-1}\|<1$ and $\|mI-\tilde{B}*s(\tilde{B})*N(\tilde{B})^{-1}\|<m$, then $x=\lim\limits_{k}x^{(k)}$, if it exists, is a solution of the system $Ax=b$.
Assume that $\|I-BD^{-1}\|<1$ and $\|mI-\tilde{B}*s(\tilde{B})*N(\tilde{B})^{-1}\|<m$. We now prove the existence of $x$. From (\ref{eqn0}) and (\ref{eqn2}) we get {\small\begin{flalign}\label{eqn4}
\|x^{(k+1)}-x^{(k)}\|_1 &= \|x^{(k+1)}_{1}-x^{(k)}_{1}\|_1+\|x^{(k+1)}_{2}-x^{(k)}_{2}\|_1 \nonumber\\
&=\|D^{-1}\left(I-\frac{1}{m}\tilde{B}*s(\tilde{B})*N(\tilde{B})^{-1}\right)(b-Ax^{(k)})\|_1+\|\frac{1}{m}s(\tilde{B})*N(\tilde{B})^{-1}(b-Ax^{(k)})\|_1\nonumber\\
&\leq\left[\|D^{-1}\left(I-\frac{1}{m}\tilde{B}*s(\tilde{B})*N(\tilde{B})^{-1}\right)\|_1+\frac{1}{m}\|s(\tilde{B})*N(\tilde{B})^{-1}\|_1\right]\|Ax^{(k)}-b\|_1
\end{flalign}}
From equations (\ref{eqn3}) and (\ref{eqn4}), it can be easily verified that $\{x^{(k)}\}$ is a Cauchy sequence, and hence $x=\lim\limits_{k}x^{(k)}$ exists.
From the above discussion we can conclude that our proposed generalization of the Jacobi method converges if $\|I-BD^{-1}\|<1$ and $\|mI-\tilde{B}*s(\tilde{B})*N(\tilde{B})^{-1}\|<m$ for some matrix norm $\|\cdot\|$.
Similarly, it can be proved that if we apply the Gauss-Seidel method in Step 2, then the generalized Gauss-Seidel method converges if $\|I-BL^{-1}\|<1$ and $\|mI-\tilde{B}*s(\tilde{B})*N(\tilde{B})^{-1}\|<m$.
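These sufficient conditions are easy to check numerically. A small helper along the following lines (using \texttt{numpy}, with the same notation as above) could be used for the Jacobi variant; the Gauss-Seidel variant is obtained by replacing $D$ with $L=\tril(B)$.
\begin{verbatim}
import numpy as np

def convergence_bounds(B, B_tilde, p=1):
    """Return (||I - B D^{-1}||, ||m I - B_tilde s(B_tilde) N(B_tilde)^{-1}||)."""
    m = B.shape[0]
    D = np.diag(np.diag(B))
    S = np.sign(B_tilde).T                               # s(B_tilde)
    N_inv = np.diag(1.0 / np.abs(B_tilde).sum(axis=1))   # N(B_tilde)^{-1}
    first = np.linalg.norm(np.eye(m) - B @ np.linalg.inv(D), ord=p)
    second = np.linalg.norm(m * np.eye(m) - B_tilde @ S @ N_inv, ord=p)
    return first, second
\end{verbatim}
The generalized Jacobi method is then guaranteed to converge when the first value is smaller than $1$ and the second is smaller than $m$.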
\section{Application and Comparison} We first describe how an exact solution of the system $Ax=b$ can be obtained.
Let $[\bar{A}~\bar{b}]$ be the reduced row echelon form of $[A~b]$, and let $\rank(A)=m$. We now apply our procedure to the system $\bar{A}x=\bar{b}$.
As $\rank(A)=m$, the square part of $\bar{A}$ is $B=I$. Then $x^{(k+1)}_{2}=x^{(k)}_{2}+s(\tilde{B})d^{(k)}$, and $x^{(k+1)}_{1}=\hat{b}^{(k)}$, because in both procedures $R=0$. Thus $Ix^{(k+1)}_{1}=\bar{b}-\tilde{B}x^{(k+1)}_{2}$, and hence $\bar{A}x^{(k+1)}=\bar{b}$. Hence after each iteration we obtain an exact solution of the system $\bar{A}x=\bar{b}$, and therefore of $Ax=b$.
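In code, one pass of the procedure on the reduced row echelon system can be sketched as follows (using \texttt{sympy} for the row reduction and the function \texttt{wa\_step} from the earlier sketch; it is assumed, as in the example below, that the pivots of $\bar{A}$ occur in the first $m$ columns).
\begin{verbatim}
import numpy as np
from sympy import Matrix

def exact_solution_from_rref(A, b, x0):
    """One pass of the generalized method on rref([A b]) = [I  B_tilde | b_bar]."""
    m, n = A.shape
    R, _ = Matrix(np.column_stack([A, b])).rref()
    R = np.array(R, dtype=float)
    B_tilde, b_bar = R[:, m:n], R[:, n]
    x2 = wa_step(B_tilde, b_bar - x0[:m], x0[m:])   # Step 1 (here B = I)
    x1 = b_bar - B_tilde @ x2                       # Step 2 yields an exact solution
    return np.concatenate([x1, x2])
\end{verbatim}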
We now compare our method with the procedure described in \cite{WheA17} by means of a numerical illustration. Consider the system $Ax=b$, with \[[A~b]=\left[\begin{array}{rrrrrrrrrr}
2 & 4& -3 & 1 & 0 &5 &-7 &8 &| &38\\
3 &2 &10 &-4 &-1 &-6 &4 &1 &| &20\\
9 &7 &3 &2 &0 &0 &-4 &2 &| &39\\
6 &4 &0 &-1 &-1 &3 &10 &5 &| &-16\\
5 &2 &-3 &-7 &-5 &4 &8 &-8 &|&-30 \end{array}\right]\] so that \[[\bar{A}~\bar{b}]=\rref([A~b])=\left[\begin{array}{rrrrrrrrrr}
1 & 0& 0 & 0 & 0 &0.2 &5.7 &0.6 &| &-20.5\\
0 &1 &0 &0 &0 &0.6 &-4.2 &2.0 &| &16.2\\
0 &0 &1 &0 &0 &0 &-4 &2 &| &9.1\\
0 &0 &0 &1 &0 &3 &10 &5 &| &41.2\\
0 &0 &0 &0 &1 &4 &8 &-8 &|&-83.1 \end{array}\right]\] Our iterative procedure (the generalized Jacobi method) applied to the system $\bar{A}x=\bar{b}$ with initial approximation $x^{0}=[2,~0,~-1,~2,~0,~0,~-3,~1]^{T}$ resulted in the non-basic approximate solution $x=[2.9734,~2.2736,~-4.0025,~-2.9777,~-1.9033,~-1.5452,~-3.9452,~-0.7452]^{T}$. Applying the method in \cite{WheA17} to the system $\bar{A}x=\bar{b}$ with the same initial guess $x^{0}$ gives the approximate solution $y=[2,~0,~-1,~-2,~-1.3015,~-1.3015,~-4.3015,~-0.3015]^{T}$.
The residual obtained for the method in \cite{WheA17} is $\|b-Ay\|_{1}=11.7462$, whereas our method results in an exact non-basic solution $x$.
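For reference, the reduced system and the residuals used in this comparison can be reproduced along the following lines (an illustrative sketch only; the generalized Jacobi sweeps themselves are as outlined in the previous section, and the entries of $[\bar{A}~\bar{b}]$ displayed above may be rounded).
\begin{verbatim}
import numpy as np
from sympy import Matrix

Ab = Matrix([[2, 4, -3,  1,  0,  5, -7,  8,  38],
             [3, 2, 10, -4, -1, -6,  4,  1,  20],
             [9, 7,  3,  2,  0,  0, -4,  2,  39],
             [6, 4,  0, -1, -1,  3, 10,  5, -16],
             [5, 2, -3, -7, -5,  4,  8, -8, -30]])
R, pivots = Ab.rref()                         # exact reduced row echelon form
rref = np.array(R.tolist(), dtype=float)
Abar, bbar = rref[:, :-1], rref[:, -1]

x0 = np.array([2, 0, -1, 2, 0, 0, -3, 1], dtype=float)   # initial guess x^0
# ... apply the generalized Jacobi sweeps to (Abar, bbar) starting from x0 ...
print(np.abs(bbar - Abar @ x0).sum())         # 1-norm residual of an iterate
\end{verbatim}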
\section{Conclusion} We have described a possible generalization of the Jacobi and Gauss-Seidel methods to non-square linear systems. We also discussed the convergence of the proposed procedures. It is shown that an exact solution of a system $Ax=b$ can be obtained if we apply our methods to its reduced row echelon system, and a comparison has been made with the method discussed in \cite{WheA17}.
We now conclude this paper by listing some ideas that we did not pursue, which however may lead to further progress.
We know that both the Jacobi and Gauss-Seidel methods converge for diagonally dominant coefficient matrices. One may check whether the proposed methods converge for row diagonally dominant coefficient matrices.
As the SOR (successive over-relaxation) method is also one of the most common stationary iterative methods for solving square linear systems, one may use a similar technique to generalize the SOR method and check its convergence criteria.
\end{document}
Ezio Di Costanzo 1, Marta Menci 2, Eleonora Messina 3, Roberto Natalini 1 and Antonia Vecchio 4
1. Istituto per le Applicazioni del Calcolo "M. Picone", Consiglio Nazionale delle Ricerche, Via dei Taurini 19, 00185 Rome, Italy
2. Università Campus Bio-Medico di Roma, Via Álvaro del Portillo, 00128 Rome, Italy
3. Dipartimento di Matematica e Applicazioni "R. Caccioppoli", Università degli Studi di Napoli "Federico II", Via Cintia, 80126 Naples, Italy
4. Istituto per le Applicazioni del Calcolo "M. Picone", Consiglio Nazionale delle Ricerche, Via Pietro Castellino 111, 80131 Naples, Italy
Received: August 2018; Revised: March 2019; Published: September 2019
In this paper we propose and study a hybrid discrete–continuous mathematical model of collective motion under alignment and chemotaxis effects. Starting from paper [23], in which the Cucker-Smale model [22] was coupled with other cell mechanisms to describe cell migration and self-organization in the zebrafish lateral line primordium, we introduce a simplified model in which the coupling between an alignment and a chemotaxis mechanism acts on a system of interacting particles. In particular we rely on a hybrid description in which the agents are discrete entities, while the chemoattractant is considered as a continuous signal. The proposed model is then studied both from an analytical and a numerical point of view. From the analytical point of view we prove existence and uniqueness of the solution, globally in time. Then, the asymptotic behaviour of a linearised version of the system is investigated. Through a suitable Lyapunov functional we show that for t → +∞, the migrating aggregate exponentially converges to a state in which all the particles have the same position with zero velocity. Finally, we present a comparison between the analytical findings and some numerical results concerning the behaviour of the full nonlinear system.
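For readers who want a concrete feel for this kind of hybrid discrete–continuous coupling, a minimal sketch is given below. It only combines a standard Cucker–Smale alignment term with a drift along the gradient of a sampled chemoattractant signal; the precise interaction kernels, the chemoattractant dynamics and the parameters of the model studied in the paper are those specified in the paper itself, not the illustrative ones used here.

```python
import numpy as np

def hybrid_step(X, V, grad_f, dt, K=1.0, beta=0.5, chi=1.0):
    """One explicit Euler step for N particles with positions X (N, 2) and
    velocities V (N, 2). The alignment term uses a standard Cucker-Smale
    communication kernel; chemotaxis is modelled here as a drift along
    grad_f(X), the gradient of a continuous chemoattractant signal sampled
    at the particle positions. Parameter names are illustrative only."""
    dX = X[:, None, :] - X[None, :, :]                   # pairwise differences
    w = K / (1.0 + np.sum(dX**2, axis=2))**beta          # communication weights
    align = (w[:, :, None] * (V[None, :, :] - V[:, None, :])).sum(axis=1) / len(X)
    accel = align + chi * grad_f(X)                      # alignment + chemotaxis
    return X + dt * V, V + dt * accel
```

In a full simulation, grad_f would come from solving the continuous signal equation on a grid (for instance with finite differences, as in the numerical section) and sampling its gradient at the particle positions.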
Keywords: Differential equations, existence and uniqueness of solution, asymptotic stability, Lyapunov function, collective motion, Cucker-Smale model, flocking behaviour, chemotaxis, self-organization, finite differences.
Mathematics Subject Classification: 45J05, 92B05, 37N25, 92C17.
Citation: Ezio Di Costanzo, Marta Menci, Eleonora Messina, Roberto Natalini, Antonia Vecchio. A hybrid model of collective motion of discrete particles under alignment and continuum chemotaxis. Discrete & Continuous Dynamical Systems - B, 2020, 25 (1) : 443-472. doi: 10.3934/dcdsb.2019189
G. Albi and L. Pareschi, Modeling self-organized systems interacting with few individuals: From microscopic to macroscopic dynamics, Appl Math Lett, 26 (2013), 397-401. doi: 10.1016/j.aml.2012.10.011. Google Scholar
I. Aoki, A simulation study on the schooling mechanism in fish, Bullettin Of The Japanese Society Scientific Fischeries, 48 (1982), 1081-1088. doi: 10.2331/suisan.48.1081. Google Scholar
Y. Arboleda-Estudillo, M. Krieg, J. Stühmer, N. A. Licata, D. J. Muller and C.-P. Heisenberg, Movement Directionality in Collective Migration of Germ Layer Progenitors, Curr Biology, 20 (2010), 161-169. doi: 10.1016/j.cub.2009.11.036. Google Scholar
M. Ballerini, N. Cabibbo, R. Candelier, A. Cavagna, E. Cisbani, I. Giardina, V. Lecomte, A. Orlandi, G. Parisi, A. Procaccini, M. Viale and V. Zdravkovic, Interaction ruling animal collective behavior depends on topological rather than metric distance: Evidence from a field study, P Natl Acad Sci USA, 105 (2008), 1232-1237. doi: 10.1073/pnas.0711437105. Google Scholar
J. M. Belmonte, G. L. Thomas, L. G. Brunnet, R. M. de Almeida and H. Chaté, Self-propelled particle model for cell-sorting phenomena, Phys Rev Lett, 100 (2008), 248702. doi: 10.1103/PhysRevLett.100.248702. Google Scholar
[6] H. Brunner, Collocation Methods for Volterra Integral and Related Functional Differential Equations, Cambridge University Press, Cambridge, UK, 2004. doi: 10.1017/CBO9780511543234. Google Scholar
L. Bruno, A. Tosin, P. Tricerri and F. Venuti, Non-local first-order modelling of crowd dynamics: A multidimensional framework with applications, Applied Mathematical Modelling, 35 (2011), 426-445. doi: 10.1016/j.apm.2010.07.007. Google Scholar
T. A. Burton, Volterra Integral and Differential Equations. Second Edition, Springer, 2005. Google Scholar
J. A. Carrillo, M. Fornasier, J. Rosado and G. Toscani, Asymptotic Flocking Dynamics for the Kinetic Cucker-Smale Model, SIAM J Math Anal, 42 (2010), 218-236. doi: 10.1137/090757290. Google Scholar
T. Colin, M.-C. Durrieu, J. Joie, Y. Lei, Y. Mammeri, C. Poignard and O. Saut, Modeling of the migration of endothelial cells on bioactive micropatterned polymers, Math. BioSci. and Eng., 10 (2013), 997-1015. doi: 10.3934/mbe.2013.10.997. Google Scholar
A. Colombi, M. Scianna and A. Tosin, Differentiated cell behavior: A multiscale approach using measure theory, J Math Biol, 71 (2015), 1049-1079. doi: 10.1007/s00285-014-0846-z. Google Scholar
I. D. Couzin, J. Krause, N. R. Franks and S. A. Levin, Effective leadership and decision-making in animal groups on the move, Nature, 433 (2005), 513-516. doi: 10.1038/nature03236. Google Scholar
I. D. Couzin, J. Krause, R. James, G. D. Ruxton and N. R. Franks, Collective memory and spatial sorting in animal groups, J Theor Biol, 218 (2002), 1-11. doi: 10.1006/jtbi.2002.3065. Google Scholar
E. Cristiani, P. Frasca and B. Piccoli, Effects of anisotropic interactions on the structure of animal groups, J. Math. Biol., 62 (2011), 569-588. doi: 10.1007/s00285-010-0347-7. Google Scholar
E. Cristiani, B. Piccoli and A. Tosin, Modeling self-organization in pedestrians and animal groups from macroscopic and microscopic viewpoints, in Mathematical Modeling of Collective Behavior in Socio-economic and Life-sciences (eds. G. Naldi, L. Pareschi and G. Toscani), Modeling and Simulation in Science, Engineering, and Technology, Birkhäuser Boston, 2010,337–364. doi: 10.1007/978-0-8176-4946-3_13. Google Scholar
E. Cristiani, B. Piccoli and A. Tosin, Multiscale Modeling of Pedestrian Dynamics, vol. 12 of MS & A: Modeling, Simulation and Applications, Springer International Publishing, 2014. doi: 10.1007/978-3-319-06620-2. Google Scholar
E. Cristiani, F. S. Priuli and A. Tosin, Modeling rationality to control self-organization of crowds: an environmental approach, SIAM J Appl Math, 75 (2015), 605-629. doi: 10.1137/140962413. Google Scholar
F. Cucker and J.-G. Dong, Avoiding collisions in flocks, IEEE Transactions on Automatic Control, 55 (2010), 1238-1243. doi: 10.1109/TAC.2010.2042355. Google Scholar
F. Cucker and J.-G. Dong, A General Collision-Avoiding Flocking Framework, IEEE Transactions on Automatic Control, 56 (2011), 1124-1129. doi: 10.1109/TAC.2011.2107113. Google Scholar
F. Cucker and C. Huepe, Flocking with informed agents, MathematicS In Action, 1 (2008), 1-25. doi: 10.5802/msia.1. Google Scholar
F. Cucker and E. Mordecki, Flocking in noisy environments, J. Math. Pures Appl., 89 (2008), 278-296. doi: 10.1016/j.matpur.2007.12.002. Google Scholar
F. Cucker and S. Smale, Emergent Behavior in Flocks, Ieee T Automat Contr, 52 (2007), 852-862. doi: 10.1109/TAC.2007.895842. Google Scholar
E. Di Costanzo, R. Natalini and L. Preziosi, A hybrid mathematical model for self-organizing cell migration in the zebrafish lateral line, J of Math Biol, 71 (2015), 171-214. doi: 10.1007/s00285-014-0812-9. Google Scholar
E. Di Costanzo, A. Giacomello, E. Messina, R. Natalini, G. Pontrelli, F. Rossi, R. Smits and M. Twarogowska, A discrete in continuous mathematical model of cardiac progenitor cells formation and growth as spheroid clusters (cardiospheres), Mathematical Medicine and Biology: A Journal of the IMA, 35 (2018), 121-144. doi: 10.1093/imammb/dqw022. Google Scholar
E. Di Costanzo, R. Natalini and L. Preziosi, A hybrid model of cell migration in zebrafish embryogenesis, in ITM Web of Conferences, EDP Sciences, 5 (2015), 00013. doi: 10.1051/itmconf/20150500013. Google Scholar
M. R. D'Orsogna, Y. L. Chuang, A. L. Bertozzi and L. S. Chayes, Self-propelled particles with soft-core interactions: Patterns, stability, and collapse, Phys Rev Lett, 96 (2016), 104302. doi: 10.1103/PhysRevLett.96.104302. Google Scholar
[27] M. Eisenbach and J. W. Lengeler, Chemotaxis, Imperial College Press, 2004. Google Scholar
J. J. Faria, J. R. G. Dyer, C. R. Tosh and J. Krause, Leadership and social information use in human crowds, Animal Behaviour, 79 (2010), 895-901. doi: 10.1016/j.anbehav.2009.12.039. Google Scholar
F. E. Fish, Kinematics of ducklings swimming in formation: consequences of position, Journal of Experimental Zoology, 273 (1995), 1-11. doi: 10.1002/jez.1402730102. Google Scholar
A. Friedman, Partial Differential Equations of Parabolic Type, Prentice-Hall, Inc., Englewood Cliffs, N.J. 1964. Google Scholar
G. Grégoire and H. Chaté, Onset of collective and cohesive motion, Phys. Rev. Lett., 92. Google Scholar
G. Grégoire, H. Chaté and Y. Tu, Moving and staying together without a leader, Physica D, 181 (2013), 157-170. doi: 10.1016/S0167-2789(03)00102-7. Google Scholar
S. Y. Ha, K. Lee and D. Levy, Emergence of time-asymptotic flocking in a stochastic Cucker-Smale system, Comm. Math. Sci., 7 (2009), 453-469. doi: 10.4310/CMS.2009.v7.n2.a9. Google Scholar
S. Y. Ha and D. Levy, Particle, kinetic and fluid models for phototaxis, Discrete and Continuous Dynamical Systems - Series B, 12 (2009), 77-108. doi: 10.3934/dcdsb.2009.12.77. Google Scholar
S.-Y. Ha and J.-G. Liu, A simple proof of the Cucker-Smale flocking dynamics and mean-field limit, Commun Math Sci, 7 (2009), 297-325. doi: 10.4310/CMS.2009.v7.n2.a2. Google Scholar
H. Hatzikirou and A. Deutsch, Collective guidance of collective cell migration, Curr. Top. Dev. Biol., 81 (2007), 401-434. Google Scholar
D. Helbing, F. Schweitzer, J. Keltsch and P. Molnár, Active walker model for the formation of human and animal trail systems, Physical Review, 56 (1997), 2527-2539. doi: 10.1103/PhysRevE.56.2527. Google Scholar
C. K. Hemelrijk and H. Hildenbrandt, Self-organized shape and frontal density of fish schools, Ethology, 114 (2008), 245-254. doi: 10.1111/j.1439-0310.2007.01459.x. Google Scholar
W. Hundsdorfer and J. G. Verwer, Numerical Solution of Time-Dependent Advection-Diffusion-Reaction Equations, Computational Mathematics, Springer, 2003. doi: 10.1007/978-3-662-09017-6. Google Scholar
A. Huth and C. Wissel, The simulation of the movement of fish schools, J Theor Biol, 156 (1992), 365-385. doi: 10.1016/S0022-5193(05)80681-2. Google Scholar
C. C. Ioannou, C. R. Tosh, L. Neville and J. Krause, The confusion effect. from neural networks to reduced predation risk, Behavioral Ecology, 19 (2008), 126-130. doi: 10.1093/beheco/arm109. Google Scholar
A. Jadbabaie, J. Lin and A. Morse, Coordination of groups of mobile autonomous agents using nearest neighbor rules, IEEE Trans Autom Control, 48 (2003), 988-1001. doi: 10.1109/TAC.2003.812781. Google Scholar
J. Joie, Y. Lei, M.-C. Durrieu, T. Colin, C. Poignard and O. Saut, Migration and orientation of endothelial cells on micropatterned polymers: A simple model based on classical mechanics, Discrete Contin. Dyn. Syst. Ser. B, 20 (2015), 1059-1076. doi: 10.3934/dcdsb.2015.20.1059. Google Scholar
H. K. Khalil, Nonlinear Systems. Third Edition, Prentice Hall, 2002. Google Scholar
V. Lakshmikantham and M. R. M. Rama, Theory of Integro-Differential Equations, vol. 1 of Stability and Control: Theory, Methods and Applications, Gordon and Breach Science Publishers, 1995. Google Scholar
C. Lubich, On the stability of linear multistep methods for Volterra convolution equations, IMA J. Numer. Anal., 3 (1983), 439-465. doi: 10.1093/imanum/3.4.439. Google Scholar
E. Méhes and T. Vicsek, Collective motion of cells: from experiments to models, Integr Biol, 6 (2014), 831-854. Google Scholar
M. Menci and M. Papi, Global solutions for a path-dependent hybrid system of differential equations under parabolic signal, Nonlinear Analysis, 184 (2019), 172-192. doi: 10.1016/j.na.2019.01.034. Google Scholar
M. Moussaïd, D. Helbing and G. Theraulaz, How simple rules determine pedestrian behavior and crowd disasters, Proceeding of the National Academy of Sciences of the United States of America, 108 (2011), 6884-6888. Google Scholar
J. D. Murray, Mathematical Biology II: Spatial Models and Biomedical Applications. Third edition, Springer, 2003. Google Scholar
G. Naldi, L. Pareschi and G. Toscani (eds.), Mathematical Modeling of Collective Behavior in Socio-Economic and Life-Sciences, Modeling and Simulation in Science, Engineering, and Technology, Birkhäuser Boston, 2010. doi: 10.1007/978-0-8176-4946-3. Google Scholar
M. Onitsuka, Uniform asymptotic stability for damped linear oscillators with variable parameters, Applied Mathematics and Computation, 218 (2011), 1436-1442. doi: 10.1016/j.amc.2011.06.025. Google Scholar
[53] L. Pareschi and G. Toscani, Interacting Multiagent Systems: Kinetic Equations and Monte Carlo Methods, Oxford University Press, 2014. Google Scholar
B. Perthame, Transport Equations in Biology, Birkhäuser, 2007. Google Scholar
B. Piccoli and A. Tosin, Pedestrian flows in bounded domains with obstacles, Contin. Mech. Thermodyn., 21 (2009), 85-107. doi: 10.1007/s00161-009-0100-x. Google Scholar
T. Pitcher, A. Magurran and I. Winfield, Fish in larger shoals find food faster, Behav Ecol and Sociobiology, 10 (1982), 149-151. doi: 10.1007/BF00300175. Google Scholar
N. Sepúlveda, L. Petitjean, O. Cochet, E. Grasland-Mongrain, P. Silberzan and V. Hakim, Collective cell motion in an epithelial sheet can be quantitatively described by a stochastic interacting particle model, PLOS Computational Biology, 9 (2013), e1002944, 12 pp. doi: 10.1371/journal.pcbi.1002944. Google Scholar
D. Strömbom, Collective motion from local attraction, J Theor Biol, 283 (2011), 145-151. doi: 10.1016/j.jtbi.2011.05.019. Google Scholar
B. Szabò, G. J. Szöllösi, B. Gönci, Z. Jurànyi, D. Selmeczi and T. Vicsek, Phase transition in the collective migration of tissue cells: Experiment and model, Phys Rev E, 74. Google Scholar
J. Tsitsiklis, Problems in Decentralized Decision Making and Computation. Ph.D. Dissertation, Dept. of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, 1984. Google Scholar
T. Vicsek, A. Cziròk, E. Ben-Jacob, I. Cohen and O. Shochet, Novel Type of Phase Transition in a System of Self-Driven Particles, Phys Rev Lett, 75 (1995), 1226-1229. doi: 10.1103/PhysRevLett.75.1226. Google Scholar
T. Vicsek and A. Zafeiris, Collective motion, Physics Reports, 517 (2012), 71-140. doi: 10.1016/j.physrep.2012.03.004. Google Scholar
A.-M. Wazwaz, Linear and Nonlinear Integral Equations. Methods and Applications, Springer, 2011. doi: 10.1007/978-3-642-21449-3. Google Scholar
Figure 1. $ R_A: $ vanishing region for $ \tilde a $ and $ \tilde b $ defined in (80), bounded by the curves $ \tilde y = \sqrt{\tilde x^2+2\frac{\alpha}{p}}, $ $ \tilde y = \frac{\alpha}{2p\tilde x} $ and $ \tilde y = \tilde x $
Figure 2. Numerical test. Simulation with parameters $ \sigma = 0.5 $, $ \beta = 5 $, $ \gamma = 2\times 10^2 $, $ D = 2\times 10^2 $, $ \xi = 0.5 $, $ V_{0, \max} = 3 $, and $ \mathbf{X}_{0} $ randomly taken in the red square shown in the top panel (Section 6.2)
Figure 3. Numerical test. Functions $ Fl_{X}(t) $, $ Fl_{V}(t) $ and $ \left\|\mathbf{V}_{\text{CM}}(t)\right\| $ versus time (x-axis shows only a part of the time domain), as defined in Section 6.2
Evaluate $\displaystyle \int{\dfrac{2x+3}{x^2+4x+5}\,}dx$
Indefinite integrals
Algebraic functions
The linear expression $2x+3$ is divided by the quadratic expression $x^2+4x+5$, which expresses the quantity as a rational function.
$\displaystyle \int{\dfrac{2x+3}{x^2+4x+5}\,}dx$
The indefinite integral of this rational function has to be calculated with respect to $x$ in this problem. Now, let's learn how to find the integral of the given rational function.
Prepare the function for integration
The numerator $2x+3$ is a linear expression in one variable and the denominator $x^2+4x+5$ is a quadratic expression. The numerator should be adjusted to match the derivative of the denominator in order to integrate this type of rational function.
The derivative of the quadratic expression is $2x+4$, but the numerator $2x+3$ is slightly different from this derivative. Hence, add $1$ to the numerator and subtract the same quantity so that the value of the expression is unchanged.
$=\,\,\,$ $\displaystyle \int{\dfrac{2x+3+1-1}{x^2+4x+5}\,}dx$
$=\,\,\,$ $\displaystyle \int{\dfrac{2x+4-1}{x^2+4x+5}\,}dx$
The rational function can be split as the difference of two rational functions as follows.
$=\,\,\,$ $\displaystyle \int{\bigg(\dfrac{2x+4}{x^2+4x+5}-\dfrac{1}{x^2+4x+5}\bigg)\,}dx$
According to the difference rule of integration, the integral of difference of functions is equal to the difference of their integrals.
$=\,\,\,$ $\displaystyle \int{\dfrac{2x+4}{x^2+4x+5}\,}dx$ $-$ $\displaystyle \int{\dfrac{1}{x^2+4x+5}\,}dx$
Calculate the integral of first term
In this step, let's focus on finding the integral of the rational function in the first term of the expression.
$\displaystyle \int{\dfrac{2x+4}{x^2+4x+5}\,}dx$ $-$ $\displaystyle \int{\dfrac{1}{x^2+4x+5}\,}dx$
$=\,\,\,$ $\displaystyle \int{\dfrac{(2x+4) \times dx}{x^2+4x+5}\,}$ $-$ $\displaystyle \int{\dfrac{1}{x^2+4x+5}\,}dx$
Suppose $u \,=\, x^2+4x+5$ and differentiate both sides with respect to $x$.
$\implies$ $\dfrac{d}{dx}{(u)}$ $\,=\,$ $\dfrac{d}{dx}{(x^2+4x+5)}$
As per the addition rule of derivatives, the derivative of sum of the terms is equal to sum of their derivatives.
$\implies$ $\dfrac{du}{dx}$ $\,=\,$ $\dfrac{d}{dx}{(x^2)}$ $+$ $\dfrac{d}{dx}{(4x)}$ $+$ $\dfrac{d}{dx}{(5)}$
Find the derivative of $x$ squared as per the power rule of derivatives, and also find the derivative of the number $5$ as per the derivative rule of a constant.
$\implies$ $\dfrac{du}{dx}$ $\,=\,$ $2x^{2-1}$ $+$ $\dfrac{d}{dx}{(4 \times x)}$ $+$ $0$
Use the constant multiple rule of derivatives to separate the constant factor $4$ from the differentiation.
$\implies$ $\dfrac{du}{dx}$ $\,=\,$ $2x^{1}$ $+$ $4 \times \dfrac{d}{dx}{(x)}$
Finally, find the derivative of $x$ with respect to $x$ as per the derivative rule of a variable.
$\implies$ $\dfrac{du}{dx}$ $\,=\,$ $2x$ $+$ $4 \times 1$
$\implies$ $\dfrac{du}{dx}$ $\,=\,$ $2x+4$
$\,\,\,\therefore\,\,\,\,\,\,$ $du$ $\,=\,$ $(2x+4) \times dx$
We have assumed that $u \,=\, x^2+4x+5$ and also derived that $du$ $\,=\,$ $(2x+4) \times dx$. So, substitute them into the first term to express it in terms of $u$.
$\implies$ $\displaystyle \int{\dfrac{(2x+4) \times dx}{x^2+4x+5}\,}$ $-$ $\displaystyle \int{\dfrac{1}{x^2+4x+5}\,}dx$ $\,=\,$ $\displaystyle \int{\dfrac{du}{u}\,}$ $-$ $\displaystyle \int{\dfrac{1}{x^2+4x+5}\,}dx$
$=\,\,\,$ $\displaystyle \int{\dfrac{1 \times du}{u}\,}$ $-$ $\displaystyle \int{\dfrac{1}{x^2+4x+5}\,}dx$
$=\,\,\,$ $\displaystyle \int{\dfrac{1}{u}\, \times du}$ $-$ $\displaystyle \int{\dfrac{1}{x^2+4x+5}\,}dx$
$=\,\,\,$ $\displaystyle \int{\dfrac{1}{u}\, du}$ $-$ $\displaystyle \int{\dfrac{1}{x^2+4x+5}\,}dx$
Use the reciprocal rule of integrals to find the integral of the multiplicative inverse of $u$ with respect to $u$.
$=\,\,\,$ $\log_{e}{|u|}+c_1$ $-$ $\displaystyle \int{\dfrac{1}{x^2+4x+5}\,}dx$
Now, substitute the value of $u$ to get the first term in terms of $x$.
$=\,\,\,$ $\log_{e}{\big|x^2+4x+5\big|}+c_1$ $-$ $\displaystyle \int{\dfrac{1}{x^2+4x+5}\,}dx$
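As a quick check of this step, the first term can be integrated symbolically, for instance with SymPy (a small sketch; SymPy will typically omit the absolute value because $x^2+4x+5 > 0$ for every real value of $x$):

```python
from sympy import symbols, integrate, log, diff, simplify

x = symbols('x')
first_term = integrate((2*x + 4)/(x**2 + 4*x + 5), x)
print(first_term)    # expected: log(x**2 + 4*x + 5), matching the result above
print(simplify(diff(log(x**2 + 4*x + 5), x) - (2*x + 4)/(x**2 + 4*x + 5)))  # 0
```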
Find the integral by completing the square
It is time to calculate the indefinite integral of the remaining rational function. This rational function is the reciprocal of a quadratic expression, and it can be evaluated by the method of completing the square.
$=\,\,\,$ $\log_{e}{\big|x^2+4x+5\big|}+c_1$ $-$ $\displaystyle \int{\dfrac{1}{x^2+2 \times 2 \times x+5}\,}dx$
$=\,\,\,$ $\log_{e}{\big|x^2+4x+5\big|}+c_1$ $-$ $\displaystyle \int{\dfrac{1}{x^2+2 \times 2 \times x+4+1}\,}dx$
$=\,\,\,$ $\log_{e}{\big|x^2+4x+5\big|}+c_1$ $-$ $\displaystyle \int{\dfrac{1}{x^2+2 \times 2 \times x+2^2+1}\,}dx$
$=\,\,\,$ $\log_{e}{\big|x^2+4x+5\big|}+c_1$ $-$ $\displaystyle \int{\dfrac{1}{(x+2)^2+1}\,}dx$
$=\,\,\,$ $\log_{e}{\big|x^2+4x+5\big|}+c_1$ $-$ $\displaystyle \int{\dfrac{1}{1+(x+2)^2}\,}dx$
Suppose $y \,=\, x+2$. Now, differentiate the equation with respect to $x$.
$\implies$ $\dfrac{d}{dx}{(y)}$ $\,=\,$ $\dfrac{d}{dx}{(x+2)}$
$\implies$ $\dfrac{dy}{dx}$ $\,=\,$ $\dfrac{d}{dx}{(x)}$ $+$ $\dfrac{d}{dx}{(2)}$
$\implies$ $\dfrac{dy}{dx}$ $\,=\,$ $1$ $+$ $0$
$\implies$ $\dfrac{dy}{dx}$ $\,=\,$ $1$
$\implies$ $dy$ $\,=\,$ $1 \times dx$
$\,\,\,\therefore\,\,\,\,\,\,$ $dx$ $\,=\,$ $dy$
In this step, we took $y \,=\, x+2$ and derived that $dx \,=\, dy$. Now, transform the integral function in terms of $y$ by substituting these values.
$=\,\,\,$ $\log_{e}{\big|x^2+4x+5\big|}+c_1$ $-$ $\displaystyle \int{\dfrac{1}{1+y^2}\,}dy$
According to the integral rule for the reciprocal of one plus a square, the integral of $\dfrac{1}{1+y^2}$ with respect to $y$ is equal to the inverse tangent of $y$.
$=\,\,\,$ $\log_{e}{\big|x^2+4x+5\big|}+c_1$ $-$ $\big(\arctan{(y)}+c_2\big)$
It can also be written in the following form using inverse trigonometric notation.
$=\,\,\,$ $\log_{e}{\big|x^2+4x+5\big|}+c_1$ $-$ $\big(\tan^{-1}{(y)}+c_2\big)$
$=\,\,\,$ $\log_{e}{\big|x^2+4x+5\big|}+c_1$ $-$ $\tan^{-1}{(y)}$ $-$ $c_2$
$=\,\,\,$ $\log_{e}{\big|x^2+4x+5\big|}$ $-$ $\tan^{-1}{(y)}$ $+$ $c_1$ $-$ $c_2$
Now, substitute the value of $y$ to get the integral of the given rational function in terms of $x$.
$=\,\,\,$ $\log_{e}{\big|x^2+4x+5\big|}$ $-$ $\tan^{-1}{(x+2)}$ $+$ $c_1$ $-$ $c_2$
The difference of the constants can be easily denoted by a constant $c$.
$=\,\,\,$ $\log_{e}{\big|x^2+4x+5\big|}$ $-$ $\tan^{-1}{(x+2)}$ $+$ $c$
It is also written in another form in integral calculus as follows.
$=\,\,\,$ $\log_{e}{\big|x^2+4x+5\big|}$ $-$ $\arctan{(x+2)}$ $+$ $c$
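The complete antiderivative can be verified in the same way: differentiating the claimed result should return the original integrand. A small SymPy sketch (the absolute value is dropped because $x^2+4x+5$ is positive for every real $x$):

```python
from sympy import symbols, diff, simplify, integrate, log, atan

x = symbols('x')
claimed = log(x**2 + 4*x + 5) - atan(x + 2)       # the constant c is omitted
print(simplify(diff(claimed, x) - (2*x + 3)/(x**2 + 4*x + 5)))   # prints 0
print(integrate((2*x + 3)/(x**2 + 4*x + 5), x))   # SymPy's own antiderivative
```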
TDOA Hyperbola Equation
Time Difference of Arrival (TDoA)

Time-difference-of-arrival (TDOA) localization estimates the position of an emitter from the differences in the arrival times of its signal at spatially separated, time-synchronized receivers. For a pair of receivers the measurement is TDOA = t1 - t2, and multiplying by the propagation speed gives the range difference R1 - R2. Unlike time-of-arrival (TOA) methods, no knowledge of the emission time is required, and only the receivers need share a common clock, so the emitter (for example a tag among anchor nodes) can remain unsynchronized. Knowing the time difference of arrival between the emitter and two sensors confines the emitter to the points of a hyperbola whose foci are the two receivers (a sheet of a hyperboloid in three dimensions), which is why TDOA localization is also called hyperbolic positioning. Determining the location of the emitter therefore involves solving at least two simultaneous second-order equations for the intersection of two such hyperbolas, for example the curves on which T2 - T1 = constant #1 and T3 - T2 = constant #2. TDOA ranging combined with signal formats such as chirp spread spectrum (CSS) is an effective location-based-services technique with respect to positioning accuracy, cost, and power consumption, which also makes it attractive for deployments such as ZigBee networks that use cheap single omnidirectional antennas and therefore cannot rely on angle-of-arrival measurements.

The hyperbola described by a TDOA measurement

Place the two receivers of a pair on the x-axis at (-c, 0) and (c, 0), so that the baseline has length 2c, and let the magnitude of the measured range difference be 2a = v*|dt|, where v is the propagation speed and dt the TDOA. The locus of points whose distances to the two foci differ by the constant 2a is the hyperbola

x^2/a^2 - y^2/b^2 = 1,  with  c^2 = a^2 + b^2.

The transverse axis has length 2a, one vertex is at (a, 0) and the other at (-a, 0), and the distance between the foci is 2c. The equation is similar to that of the ellipse, x^2/a^2 + y^2/b^2 = 1, except for a "-" instead of a "+". The sign of the measured time difference selects the branch of the hyperbola on which the emitter must lie, and the asymptotes y = (b/a)x and y = -(b/a)x give the bearing of a far-away transmitter, i.e. the far-field angle of arrival. To plot one branch from the standard form, compute y = b*sqrt(x^2/a^2 - 1) for x >= a and plot it together with its negative (in MATLAB, with a 'hold on' between the two curves); a short Python sketch of the same recipe is given below.
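A minimal Python sketch of that plotting recipe (the baseline and range-difference values are illustrative, and numpy/matplotlib are assumed):

```python
import numpy as np
import matplotlib.pyplot as plt

half_baseline = 1000.0              # c: half the distance between the receivers, m
range_diff = 600.0                  # v*|dt|: measured range difference, m
a = range_diff / 2.0
b = np.sqrt(half_baseline**2 - a**2)   # real as long as |range_diff| < baseline

x = np.linspace(a, 4 * a, 400)      # one branch of x^2/a^2 - y^2/b^2 = 1
y = b * np.sqrt((x / a)**2 - 1.0)

plt.plot(x, y, 'b', x, -y, 'b')     # upper and lower halves of one branch
plt.plot([-half_baseline, half_baseline], [0, 0], 'r^')   # the two receivers (foci)
plt.axis('equal')
plt.show()
```

The sign of the measured TDOA determines which of the two branches is the correct one; mirroring x picks the other branch.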
Positioning by intersecting hyperbolas

A single receiver pair only narrows the emitter down to one curve, so positioning needs several pairs. With three receivers (the classical TDOA configuration: three receivers and one emitter) two independent TDOAs can be formed; each TDOA measurement determines a hyperbola between a pair of stations, and the position of the mobile station (MS) is estimated from the intersection of these hyperbolas. In three dimensions the constant-TDOA surfaces are hyperboloids and at least four receivers are needed, as in the common task of turning the arrival-time differences at four ultrasound or radio nodes into a 3-D coordinate (x, y, z). The technique is known as multilateration, and is sometimes described as "hyperbolic time-difference-of-arrival (TDOA) trilateration"; GPS and the other global navigation satellite systems solve a closely related problem, with the receiver monitoring multiple satellites and solving equations for its precise position and its deviation from true time. Compared with angle-of-arrival (AOA) positioning, in which a line is drawn from the MS towards each base station and the position is calculated from the intersection of at least two lines, TDOA needs no directional antennas and is less computationally intensive; compared with TOA, it does not require knowledge of the transmit time. As a sense of scale, a TDOA of about 6.7 microseconds corresponds to a path difference of roughly 2000 m ("the transmitter is 2000 m closer to RX1 than to RX2"); the possible transmitter positions then form one hyperbola, and a third receiver is required to resolve the remaining ambiguity.

In cellular networks the measurements are typically made at the MS receiver, which measures the TDOA between the signal from BS 1 and that from any other BS. In LTE OTDOA positioning, for example, the E-SMLC estimates the UE's position (x, y) by solving two simultaneous hyperbola equations built from two RSTD values, r(1,0) and r(2,0), and the known 2-D positions of three eNBs: the serving eNB 0 and two neighbouring eNBs 1 and 2. The RSTD measurement is defined as the relative timing difference between a subframe received from the neighbouring cell j and the corresponding subframe from the reference cell.

The TDOA equation system

For receiver i at (xi, yi, zi) and an emitter at the unknown point (x, y, z) transmitting at the unknown time t, the arrival time ti satisfies

c(ti - t) = sqrt((xi - x)^2 + (yi - y)^2 + (zi - z)^2),

where c is the propagation speed; the 2-dimensional special case is obtained by setting the sensor and emitter altitudes equal to zero in the equations. Differencing these range equations over receiver pairs eliminates the unknown emission time t but leaves a set of nonlinear (hyperbolic) equations, whose common solution is the set of possible emitter locations. One standard manipulation makes the system linear: squaring the range of receiver i and subtracting the squared range of a reference receiver j gives, after simplification,

(xi - xj)x + (yi - yj)y + (zi - zj)z + dij*rj = (mi^2 - mj^2 - dij^2)/2,

where dij = c(ti - tj) is the measured range difference, rj is the (unknown) distance from the emitter to the reference receiver, and mi^2 = xi^2 + yi^2 + zi^2. The same equation can be written for the other receiver pairs, producing a system that is linear in the unknowns (x, y, z, rj); with exact (noise-free) TDOAs the resulting estimate is exact. A small least-squares sketch of this step follows.
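A minimal Python sketch of that linear least-squares step (receiver 0 plays the role of the reference j; the receiver layout and source position are made-up test values):

```python
import numpy as np

def tdoa_linear_ls(receivers, tdoas, c=3e8):
    """Solve the linearised TDOA equations in the unknowns (x, y, z, r0).

    receivers : (N, 3) array of receiver positions; receiver 0 is the reference.
    tdoas     : (N-1,) array of measured t_i - t_0 for i = 1..N-1.
    Returns the estimated source position. In 3-D there are four unknowns,
    so N >= 5 receivers are needed for a determined least-squares problem."""
    s = np.asarray(receivers, dtype=float)
    d = c * np.asarray(tdoas, dtype=float)          # range differences r_i - r_0
    m2 = np.sum(s**2, axis=1)                       # squared norms m_i^2
    A = np.column_stack([s[1:] - s[0], d])          # rows: [s_i - s_0, d_i0]
    q = 0.5 * (m2[1:] - m2[0] - d**2)
    sol, *_ = np.linalg.lstsq(A, q, rcond=None)     # [x, y, z, r0]
    return sol[:3]

# synthetic, noise-free test
rx = np.array([[0, 0, 0], [4000, 0, 0], [0, 4000, 0],
               [4000, 4000, 0], [2000, 2000, 3000]], dtype=float)
src = np.array([1200.0, 2500.0, 400.0])
toa = np.linalg.norm(rx - src, axis=1) / 3e8
print(tdoa_linear_ls(rx, toa[1:] - toa[0]))         # should reproduce src
```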
Time difference of arrival (TDOA), also known as multilateration, is a well-established technique for geolocating RF emitters and other signal sources. The TDOA is the difference in the time of arrival (TOA) of the same signal at two spatially separated receivers. Unlike TOA positioning, it does not require knowledge of the transmit time at the emitter; time synchronization is needed only among the receivers. The classical configuration uses three receivers and one emitter.

The TDOA of a signal between two receivers, multiplied by the propagation speed, gives the difference in distance from the emitter to those receivers. For receivers $i$ and $j$ at known positions $(x_i, y_i)$ and $(x_j, y_j)$ and an emitter at the unknown position $(x, y)$,

$$\Delta d_{ij} = c\,\Delta t_{ij} = d_i - d_j, \qquad d_i = \sqrt{(x_i - x)^2 + (y_i - y)^2}.$$

The locus of points with a constant difference of distances to two fixed points is one branch of a hyperbola whose foci are the two receivers; in three dimensions it is one sheet of a hyperboloid. The sign of the measured time difference selects the branch on which the emitter must lie, and a TDOA of zero degenerates the hyperbola into the perpendicular bisector of the baseline between the two receivers. Each exact TDOA measurement therefore constrains the emitter to such a curve, and the emitter position is estimated as the intersection of the hyperbolas produced by independent measurements: in the plane, two TDOA measurements (requiring at least three receivers) generally determine the location. A four-sensor system produces twelve usable isochrones, from which four triplets are derived; the apexes (intersections) of these should coincide, and in practice their spread reflects the measurement errors.
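As a concrete illustration of this measurement model, here is a minimal Python sketch; the receiver layout, emitter position and acoustic propagation speed are invented for the example and are not taken from any of the systems discussed.

import numpy as np

c = 343.0  # propagation speed in m/s (acoustic example; use ~3e8 m/s for RF)

# Known receiver positions (m) and the emitter position we pretend not to know.
receivers = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
emitter = np.array([30.0, 70.0])

# True ranges from the emitter to every receiver.
ranges = np.linalg.norm(receivers - emitter, axis=1)

# TDOAs referenced to receiver 0: delta_t[i] = t_i - t_0 = (d_i - d_0) / c.
# Each value with i >= 1 constrains the emitter to one branch of a hyperbola
# whose foci are receivers[0] and receivers[i].
tdoa = (ranges - ranges[0]) / c
print(np.round(tdoa, 6))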
Formally, a hyperbola is the set of points in a plane whose distances to two fixed points (the foci) differ by a constant; it consists of two symmetric branches, each with a vertex at its innermost point, and every hyperbola has two asymptotes. With the foci on the $x$-axis and the centre at the origin, the standard equation is

$$\frac{x^2}{a^2} - \frac{y^2}{b^2} = 1, \qquad c^2 = a^2 + b^2,$$

where $2a$ is the constant distance difference, $2c$ is the distance between the foci, and the asymptotes are $y = \pm (b/a)x$. If instead the $y^2$ term is the positive one (a vertical transverse axis, $y^2/a^2 - x^2/b^2 = 1$), the branches open up and down rather than left and right, and for a centre at $(h, k)$ the same forms apply with $x$ and $y$ replaced by $x - h$ and $y - k$. For an emitter far from the receiver pair, the relevant branch is well approximated by its asymptote, so a single TDOA then essentially yields the angle of arrival of the signal.

In TDOA positioning, each measured time difference is mapped onto such a hyperbola (or hyperboloid) with the corresponding pair of receivers as foci. In cellular systems, for example, the difference in TOA measured between two base stations, or the reference signal time difference (RSTD) reported for a pair of eNBs, defines a hyperbola with those stations as foci, and the mobile station is located at the intersection of the hyperbolas; this is why TDOA localization is often called hyperbolic positioning.
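To make the mapping from a time difference to hyperbola parameters explicit, here is a small worked example with invented numbers (not taken from any of the cited systems). Two receivers sit at $(\pm 100, 0)$ m, so the baseline is $D = 200$ m, and an acoustic source ($c \approx 343$ m/s) arrives $\Delta t = 0.2$ s earlier at one receiver than at the other:

$$a = \frac{c\,\Delta t}{2} \approx 34.3\ \mathrm{m}, \qquad f = \frac{D}{2} = 100\ \mathrm{m}, \qquad b^2 = f^2 - a^2 \approx 8.82 \times 10^3\ \mathrm{m^2},$$

so the source lies on the branch of $x^2/a^2 - y^2/b^2 = 1$ that wraps around the receiver which heard the signal first. Here $f$ denotes the focal distance, to avoid a clash with the propagation speed $c$.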
The general equations hold in three dimensions; the two-dimensional special case is obtained by setting the sensor and emitter altitudes to zero. With $N$ receivers, the arrival times satisfy

$$c\,(t_i - t) = \sqrt{(x_i - x)^2 + (y_i - y)^2}, \qquad i = 1, \dots, N,$$

where $t$ is the unknown emission time; differencing pairs of these equations eliminates $t$ and leaves $N - 1$ independent nonlinear hyperbolic equations in the emitter coordinates. Because each TDOA measurement defines a hyperbola, the relationship between the measurements and the position is nonlinear, and with measurement noise and non-line-of-sight (NLOS) errors the over-determined system has no exact, unique solution. Several families of algorithms have been proposed: iterative schemes such as Taylor-series and hyperbolic least-squares (HLS) methods, non-iterative maximum-likelihood estimators, efficient closed-form solutions based on two-step weighted least-squares minimisation, quadratic-constraint total least squares (QC-TLS), and neural formulations such as the Lagrange programming neural network (LPNN). The TDOA values themselves are usually estimated by cross-correlating the signals received at a pair of sensors; Stein showed that, under the standard signal model, the maximum-likelihood estimate of the TDOA is the differential delay that maximises this cross-correlation.

Hyperbolic positioning has a long history: TDOA navigation systems such as DECCA became popular during the Second World War, well before closed-form solutions to the location equations were known. The technique remains in use today, for example in uplink TDOA (U-TDOA) location of mobile stations in cellular networks, and in acoustic source localisation and target tracking, where range differences obtained from TDOA are accumulated over time with a Bayesian filter.
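The following Python sketch puts the pieces together and recovers the emitter position from noisy TDOAs by generic nonlinear least squares. It is only an illustration of the idea, not an implementation of the two-step weighted least-squares, QC-TLS or LPNN estimators mentioned above, and the geometry, noise level and starting guess are arbitrary assumptions.

import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
c = 343.0  # propagation speed (m/s), acoustic example

receivers = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
emitter = np.array([30.0, 70.0])  # ground truth, used only to simulate measurements

ranges = np.linalg.norm(receivers - emitter, axis=1)
tdoa = (ranges - ranges[0]) / c            # TDOAs referenced to receiver 0
tdoa[1:] += rng.normal(0.0, 1e-5, size=3)  # add a little timing noise

def residuals(p):
    # One residual per hyperbolic equation: (d_i - d_0) - c * TDOA_i0.
    d = np.linalg.norm(receivers - p, axis=1)
    return (d[1:] - d[0]) - c * tdoa[1:]

solution = least_squares(residuals, x0=np.array([50.0, 50.0]))
print("estimated emitter position:", np.round(solution.x, 2))  # close to (30, 70)

With the small timing noise assumed here the estimate lands very close to the true position; larger timing errors or a poorly conditioned receiver geometry degrade it quickly.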
1. Perturbations
As already mentioned, the Jahn-Teller effect has its roots in group theory. The essence of the argument is that the energy of the compound is stabilised upon distortion to a lower-symmetry point group. This distortion may be considered to be a normal mode of vibration, with the corresponding vibrational coordinate $q$ labelling the "extent of distortion". There is one condition on the vibrational mode: it cannot transform as the totally symmetric irreducible representation of the molecular point group, as such a vibrational mode cannot bring about any distortion in the molecular geometry. $\require{begingroup} \begingroup \newcommand{\En}[1]{E_n^{(#1)}} \newcommand{\ket}[1]{| #1 \rangle} \newcommand{\n}[1]{n^{(#1)}} \newcommand{\md}[0]{\mathrm{d}} \newcommand{\odiff}[2]{\frac{\md #1}{\md #2}}$
In the undistorted geometry (i.e. $q = 0$), the electronic Hamiltonian is denoted $H_0$. The corresponding unperturbed electronic wavefunction is $\ket{\n{0}}$, and the electronic energy is $\En{0}$. We therefore have
$$H_0 \ket{\n{0}} = \En{0}\ket{\n{0}} \tag{1}$$
Here, the Hamiltonian, wavefunction, and energy are all functions of $q$. We can expand them as Taylor series about $q = 0$:
$$\begin{align} H &= H_0 + q \left(\odiff{H}{q}\right) + \frac{q^2}{2}\left(\frac{\md^2 H}{\md q^2}\right) + \cdots \tag{2} \\ \ket{n} &= \ket{\n{0}} + q\ket{\n{1}} + \frac{q^2}{2}\ket{\n{2}} + \cdots \tag{3} \\ E_n &= \En{0} + q\En{1} + \frac{q^2}{2}\En{2} + \cdots \tag{4} \end{align}$$
In the new geometry (i.e. $q \neq 0$), the Schrodinger equation must still be obeyed and therefore
$$H\ket{n} = E_n \ket{n} \tag{5}$$
By substituting equations $(2)$ through $(4)$ into equation $(5)$ and comparing coefficients of $q$, one reaches the results:
$$\begin{align} \En{1} &= \left< \n{0} \middle| \odiff{H}{q} \middle| \n{0} \right> \tag{6} \\ \En{2} &= \left< \n{0} \middle| \frac{\md^2 H}{\md q^2} \middle| \n{0} \right> + 2\sum_{m \neq n}\frac{\left|\left<m^{(0)} \middle|(\md H/\md q)\middle|\n{0} \right>\right|^2}{\En{0} - E_m^{(0)}} \tag{7} \end{align}$$
The derivation of equations $(6)$ and $(7)$ will not be discussed further here.1
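As a quick orientation only (a sketch of the first-order step; see the references in note 1 for the full treatment, and assume $\ket{\n{0}}$ is normalised): collecting the terms linear in $q$ in equation $(5)$ gives

$$H_0 \ket{\n{1}} + \odiff{H}{q}\ket{\n{0}} = \En{0}\ket{\n{1}} + \En{1}\ket{\n{0}},$$

and taking the inner product with $\ket{\n{0}}$ makes the $\ket{\n{1}}$ terms cancel, because $\left< \n{0} \middle| H_0 \middle| \n{1} \right> = \En{0}\left< \n{0} \middle| \n{1} \right>$; what is left is exactly equation $(6)$.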
Distortions that arise from a negative first-order term $q\En{1}$ (possible whenever $\En{1} \neq 0$) are called first-order Jahn-Teller distortions, and distortions that arise from a negative value of $\En{2}$ are called second-order Jahn-Teller distortions.
2. The first-order Jahn-Teller effect
Recall that
$$E_n = \En{0} + q\En{1} + \cdots \tag{8}$$
Therefore, if $\En{1} > 0$, then stabilisation may be attained with a negative value of $q$; if $\En{1} < 0$, then stabilisation may be attained with a positive value of $q$. These simply represent distortions in opposite directions along a vibrational coordinate. A well-known example is the distortion of octahedral $\ce{Cu^2+}$: there are two possible choices, one involving axial compression, and one involving axial elongation. These two distortions arise from movement along the same vibrational coordinate, except that one has $q > 0$ and the other has $q < 0$.
Therefore, in order for there to be a first-order Jahn-Teller distortion, we require that $\En{1} \neq 0$. Within group theory, the condition for the integral to be nonzero is that the integrand must contain a component that transforms as the totally symmetric irreducible representation (TSIR). In other words
$$\Gamma_{\text{TSIR}} \in \Gamma_n \otimes \Gamma_{(\md H/\md q)} \otimes \Gamma_n$$
We can simplify this slightly by noting that the Hamiltonian, $H$, itself transforms as the TSIR. Therefore, $\md H/\md q$ transforms as $\Gamma_q$, and the requirement is that
$$\Gamma_{\text{TSIR}} \in \Gamma_n \otimes \Gamma_q \otimes \Gamma_n$$
In all point groups, for any non-degenerate irrep $\Gamma_n$, $\Gamma_n \otimes \Gamma_n = \Gamma_{\text{TSIR}}$. Therefore, if $\Gamma_n$ is non-degenerate, then
$$\Gamma_n \otimes \Gamma_q \otimes \Gamma_n = \Gamma_q \neq \Gamma_{\text{TSIR}}$$
and the molecule is stable against a first-order Jahn-Teller distortion. Therefore, all closed-shell molecules ($\Gamma_n = \Gamma_{\text{TSIR}}$) do not undergo first-order Jahn-Teller distortions.
However, what happens if $\Gamma_n$ is degenerate? Now, the product $\Gamma_n \otimes \Gamma_n$ contains other irreps apart from the TSIR.2 If the molecule possesses a vibrational mode that transforms as one of these irreps, then the direct product $\Gamma_n \otimes \Gamma_q \otimes \Gamma_n$ will contain the TSIR.
In a rather inelegant article,3 Hermann Jahn and Edward Teller did the maths for every important point group and found that:
stability and degeneracy are not possible simultaneously unless the molecule is a linear one...
In other words, if a non-linear molecule has a degenerate ground state, then it is susceptible towards a (first-order) Jahn-Teller distortion.
Take, for example, octahedral $\ce{Cu^2+}$. This has a $\mathrm{^2E_g}$ term symbol (see this question) - which is doubly degenerate. The symmetric direct product $\mathrm{E_g \otimes E_g = A_{1g} + E_g}$. Therefore, if we have a vibrational mode of symmetry $\mathrm{E_g}$ (and we do), then distortion along this vibrational coordinate will occur to give a more stable compound.
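If you want to verify that symmetric direct product yourself, here is a quick check (a sketch using the standard $O_\mathrm{h}$ character table and the symmetric-square formula $\chi_{[\Gamma^2]}(R) = \frac{1}{2}\left[\chi(R)^2 + \chi(R^2)\right]$). For $\mathrm{E_g}$ one has, for instance,

$$\chi_{[\mathrm{E_g^2}]}(E) = \tfrac{1}{2}(2^2 + 2) = 3, \qquad \chi_{[\mathrm{E_g^2}]}(C_3) = \tfrac{1}{2}\left[(-1)^2 + (-1)\right] = 0, \qquad \chi_{[\mathrm{E_g^2}]}(C_4) = \tfrac{1}{2}(0^2 + 2) = 1,$$

using $\chi(C_4^2) = \chi(C_2') = 2$, and the full set of symmetric-square characters reduces to $\mathrm{A_{1g} + E_g}$; the antisymmetric part, $\mathrm{A_{2g}}$, is discarded as in note 2.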
What does an $\mathrm{e_g}$ vibrational mode look like? Here is a diagram:4
It's an axial elongation, just as expected! However, there is a slight catch: the vibrational mode is doubly degenerate (the other $\mathrm{e_g}$ mode is not shown), and any linear combination of these two degenerate vibrational modes also transforms as $\mathrm{e_g}$. Therefore, the exact form of the distortion can be any linear combination of these two degenerate modes.
On top of that, there's also no indication of how much distortion there is. That depends on (amongst other things) the value of $\En{1}$, and all we have said is that it is nonzero - we have not said how large it is.
This is what is meant by "impossible to predict the extent or the exact form of the distortion".
3. The second-order Jahn-Teller effect
Pearson has written an article on second-order Jahn-Teller effects.5 To be continued another time
(1) For more details, look up perturbation theory in your quantum mechanics book of choice. In such treatments, the perturbation is usually formulated slightly differently: e.g. $H$ is taken as $H_0 + \lambda V$, and the eigenstates and eigenvalues are expanded as a power series instead of a Taylor series. Notwithstanding that, the principles remain the same.
(2) There is a subtlety in that the symmetric direct product must be taken. For example, in the $D_\mathrm{\infty h}$ point group, we have $\Pi \otimes \Pi = \Sigma^+ + [\Sigma^-] + \Delta$. The antisymmetric direct product $\Sigma^-$ has to be discarded.
(3) Jahn, H. A.; Teller, E. Stability of Polyatomic Molecules in Degenerate Electronic States. I. Orbital Degeneracy. Proc. R. Soc. A 1937, 161 (905), 220–235. DOI: 10.1098/rspa.1937.0142.
(4) Albright, T. A.; Burdett, J. K.; Whangbo, M.-H. Orbital Interactions in Chemistry, 2nd ed.; Wiley: Hoboken, NJ, 2013.
(5) Pearson, R. G. The second-order Jahn-Teller effect. J. Mol. Struct.: THEOCHEM 1983, 103, 25–34. DOI: 10.1016/0166-1280(83)85006-4. | CommonCrawl |
Nonlinear excitation of geodesic acoustic mode by reversed shear Alfvén eigenmodes in non-uniform plasmas
Machine Learning for Plasma Physics and Fusion Energy
Y. Wang, N. Chen, T. Wang, S. Wei, Z. Qiu
Journal: Journal of Plasma Physics / Volume 88 / Issue 6 / December 2022
Published online by Cambridge University Press: 17 November 2022, 895880601
Effects of plasma non-uniformities and kinetic dispersiveness on the spontaneous excitation of geodesic acoustic mode (GAM) by reversed shear Alfvén eigenmode (RSAE) are investigated numerically. It is found that, due to the turning points induced by the shear Alfvén continuum structure, the nonlinear excitation of GAM is a quasiexponentially growing absolute instability. As the radial dependence of GAM frequency and pump RSAE mode structure are accounted for, the radially inward propagating GAM is preferentially excited, leading to core localized thermal plasma heating by GAM collisionless damping. Our work, thus, suggests that GAM excitation plays a crucial role in not only RSAE nonlinear saturation, but also anomalous fuel ion heating in future reactors.
Polymer-coated urea application can increase both grain yield and nitrogen use efficiency in japonica-indica hybrid rice
R. Xu, S. Chen, C.M. Xu, Y.H. Liu, X.F. Zhang, D. Y. Wang, G. Chu
Journal: The Journal of Agricultural Science , First View
Published online by Cambridge University Press: 17 November 2022, pp. 1-9
We investigated whether the one-time application of polymer-coated urea (PCU) before transplanting could simultaneously improve the grain yield and nitrogen use efficiency (NUE) of japonica-indica hybrid rice (JIHR) through a field experiment. The local high-yield JIHR cultivar Chunyou-927 was field grown during the rice-growing seasons in 2019 and 2020. The experiment consisted of three treatments: no nitrogen application (0N), application of conventional urea (CU), and the one-time application of PCU. Grain yield was 1.0–1.3 t/ha higher, and agronomic NUE (kg grain yield increase per kg N applied) was 5.2–5.9 kg/kg higher, respectively, under the PCU treatment compared with the CU treatment across the two study years. When compared with the CU treatment, the PCU treatment could (1) improve root morphological trait, (2) reduce redundant vegetative growth during the early growth period, (3) increase matter production during the mid and late growth period, and (4) increase plant activity during the grain-filling period. Overall, our findings indicate that one-time PCU application before transplanting of the JIHR cultivar holds great promise for increasing grain yield and NUE.
Impact of the COVID-19 pandemic on maternal mental health during pregnancy: The CONCEPTION study – Phase I
A. Berard, A. Lacasse, Y.-H. Gomez, J. Gorgui, S. Côté, S. King, V. Tchuente, F. Muanda, Y. Lumu, I. Boucoiran, A.-M. Nuyt, C. Quach, E. Ferreira, P. Kaul, B. Winquist, K. O'Donnell, S. Eltonsy, D. Château, J.-P. Zhao, G. Hanley, T. Oberlander, B. Kassai, S. Mainbourg, S. Bernatsky, É. Vinet, A. Brodeur-Doucet, J. Demers, P. Richebé, V. Zaphiratos, C. Wang, X. Wang
Journal: European Psychiatry / Volume 65 / Issue S1 / June 2022
Published online by Cambridge University Press: 01 September 2022, pp. S209-S210
Mental health regional differences during pregnancy through the COVID-19 pandemic is understudied.
We aimed to quantify the impact of the COVID-19 pandemic on maternal mental health during pregnancy.
A cohort study with a web-based recruitment strategy and electronic data collection was initiated in 06/2020. Although Canadian women, >18 years were primarily targeted, pregnant women worldwide were eligible. The current analysis includes data on women enrolled 06/2020-11/2020. Self-reported data included mental health measures (Edinburgh Perinatal Depression Scale (EPDS), Generalized Anxiety Disorders (GAD-7)), stress. We compared maternal mental health stratifying on country/continents of residence, and identified determinants of mental health using multivariable regression models.
Of 2,109 pregnant women recruited, 1,932 were from Canada, 48 from the United States (US), 73 from Europe, 35 from Africa, and 21 from Asia/Oceania. Mean depressive symptom scores were lower in Canada (EPDS 8.2, SD 5.2) than in the US (EPDS 10.5, SD 4.8) and Europe (EPDS 10.4, SD 6.5) (p<0.05), regardless of infection status. Maternal anxiety, stress, and pandemic-related decreases in income and access to health care were associated with increased maternal depression. The prevalence of severe anxiety was similar across countries/continents. Maternal depression, stress, and earlier recruitment during the pandemic (June/July) were associated with increased maternal anxiety.
In this first international study on the impact of the COVID-19 pandemic, CONCEPTION has shown significant country/continent-specific variations in depressive symptoms during pregnancy, whereas severe anxiety was similar regardless of place of residence. Strategies are needed to reduce COVID-19's mental health burden in pregnancy.
No significant relationships.
Prediction and copy number variation identification of ZNF146 gene related to growth traits in Chinese cattle
X. T. Ding, X. Liu, X. M. Li, Y. F. Wen, J. W. Xu, W. J. Liu, Z. M. Li, Z. J. Zhang, Y. N. Chai, H. L. Wang, B. W. Cheng, S. H. Liu, B. Hou, Y. J. Huang, J. G. Li, L. J. Li, G. J. Yang, Z. F. Qi, F. Y. Chen, Q. T. Shi, E. Y. Wang, C. Z. Lei, H. Chen, B. R. Ru, Y. Z. Huang
Journal: The Journal of Agricultural Science / Volume 160 / Issue 5 / October 2022
Growing demographic pressure brings a tremendous demand for beef, and improving the growth and development of Chinese cattle is key to meeting it. To find molecular markers associated with growth and development in Chinese cattle, sequencing was used to determine the position of copy number variations (CNVs), bioinformatics analysis was used to predict the function of the ZNF146 gene, real-time fluorescent quantitative polymerase chain reaction (qPCR) was used for CNV genotyping, and one-way analysis of variance was used for association analysis. Earlier sequencing results in the laboratory showed that a CNV exists at Chr 18: 47225201-47229600 (version 5.0.1) of the ZNF146 gene, and bioinformatic prediction indicated that ZNF146 is expressed in liver, skeletal muscle and breast cells and is amplified or overexpressed in pancreatic cancer, where it promotes tumour development. It is therefore predicted that the ZNF146 gene affects the proliferation of muscle cells and, in turn, the growth and development of cattle. CNV genotyping of the ZNF146 gene by qPCR identified three types (deletion, normal and duplication). Association analysis showed that ZNF146-CNV was significantly correlated with rump length in Qinchuan cattle, hucklebone width in Jiaxian red cattle and heart girth in Yunling cattle. Taken together, ZNF146-CNV had a significant effect on growth traits and provides a candidate molecular marker for the growth and development of Chinese cattle.
Acceleration of 60 MeV proton beams in the commissioning experiment of the SULF-10 PW laser
On the Cover of HPL
A. X. Li, C. Y. Qin, H. Zhang, S. Li, L. L. Fan, Q. S. Wang, T. J. Xu, N. W. Wang, L. H. Yu, Y. Xu, Y. Q. Liu, C. Wang, X. L. Wang, Z. X. Zhang, X. Y. Liu, P. L. Bai, Z. B. Gan, X. B. Zhang, X. B. Wang, C. Fan, Y. J. Sun, Y. H. Tang, B. Yao, X. Y. Liang, Y. X. Leng, B. F. Shen, L. L. Ji, R. X. Li, Z. Z. Xu
Journal: High Power Laser Science and Engineering / Volume 10 / 2022
Published online by Cambridge University Press: 03 August 2022, e26
We report the experimental results of the commissioning phase of the 10 PW laser beamline of the Shanghai Superintense Ultrafast Laser Facility (SULF). The peak power reaches 2.4 PW on target without the final amplification stage during the experiment. The laser energy of 72 ± 9 J is directed to a focal spot of approximately 6 μm diameter (full width at half maximum) with a 30 fs pulse duration, yielding a focused peak intensity of around 2.0 × 10²¹ W/cm². The first laser-proton acceleration experiment is performed using plain copper and plastic targets. High-energy proton beams with maximum cut-off energy up to 62.5 MeV are achieved using copper foils at the optimum target thickness of 4 μm via target normal sheath acceleration. For plastic targets tens of nanometers thick, the proton cut-off energy is approximately 20 MeV, and the beams show ring-like or filamented density distributions. These experimental results reflect the capabilities of the SULF-10 PW beamline, for example, both ultrahigh intensity and relatively good beam contrast. Further optimization of these key parameters is underway, where peak laser intensities of 10²²–10²³ W/cm² are anticipated to support various experiments on extreme field physics.
Anterior cingulate glutamate levels associate with functional activation and connectivity during sensory integration in schizophrenia: a multimodal 1H-MRS and fMRI study
Xin-lu Cai, Cheng-cheng Pu, Shu-zhe Zhou, Yi Wang, Jia Huang, Simon S. Y. Lui, Arne Møller, Eric F. C. Cheung, Kristoffer H. Madsen, Rong Xue, Xin Yu, Raymond C. K. Chan
Journal: Psychological Medicine , First View
Published online by Cambridge University Press: 06 July 2022, pp. 1-11
Glutamatergic dysfunction has been implicated in sensory integration deficits in schizophrenia, yet how glutamatergic function contributes to behavioural impairments and neural activities of sensory integration remains unknown.
Fifty schizophrenia patients and 43 healthy controls completed behavioural assessments for sensory integration and underwent magnetic resonance spectroscopy (MRS) for measuring the anterior cingulate cortex (ACC) glutamate levels. The correlation between glutamate levels and behavioural sensory integration deficits was examined in each group. A subsample of 20 pairs of patients and controls further completed an audiovisual sensory integration functional magnetic resonance imaging (fMRI) task. Blood Oxygenation Level Dependent (BOLD) activation and task-dependent functional connectivity (FC) were assessed based on fMRI data. Full factorial analyses were performed to examine the Group-by-Glutamate Level interaction effects on fMRI measurements (group differences in correlation between glutamate levels and fMRI measurements) and the correlation between glutamate levels and fMRI measurements within each group.
We found that schizophrenia patients exhibited impaired sensory integration which was positively correlated with ACC glutamate levels. Multimodal analyses showed significantly Group-by-Glutamate Level interaction effects on BOLD activation as well as task-dependent FC in a 'cortico-subcortical-cortical' network (including medial frontal gyrus, precuneus, ACC, middle cingulate gyrus, thalamus and caudate) with positive correlations in patients and negative in controls.
Our findings indicate that ACC glutamate influences neural activities in a large-scale network during sensory integration, but the effects have opposite directionality between schizophrenia patients and healthy people. This implicates the crucial role of glutamatergic system in sensory integration processing in schizophrenia.
Cooperative guidance for active defence based on line-of-sight constraint under a low-speed ratio
S. Liu, Y. Wang, Y. Li, B. Yan, T. Zhang
Journal: The Aeronautical Journal , First View
Published online by Cambridge University Press: 08 June 2022, pp. 1-19
In this study, an active defence cooperative guidance (ADCG) law that enables cheap and low-speed airborne defence missiles with low manoeuvrability to accurately intercept fast and expensive attack missiles with high manoeuvrability was designed to enhance the capability of aircraft for active defence. This guidance law relies on the line-of-sight (LOS) guidance method, and it realises active defence by adjusting the geometric LOS relationship involving an attack missile, a defence missile and an aircraft. We use a nonlinear integral sliding surface and an improved second-order sliding mode reaching law to design the guidance law. This can not only reduce the chattering phenomenon in the guidance command, but it can also ensure that the system can reach the sliding surface from any initial position in a finite time. Simulations were carried out to verify the proposed law using four cases: different manoeuvring modes of the aircraft, different speed ratios of the attack and defence missiles, different reaching laws applied to the ADCG law and a robustness analysis. The results show that the proposed guidance law can enable a defence missile to intercept an attack missile by simultaneously using information about the relative motions of the attack missile and the aircraft. It is also highly robust in the presence of errors and noise.
Opposition control of turbulent spots
Y.X. Wang, K.-S. Choi, M. Gaster, C. Atkin, V. Borodulin, Y. Kachanov
Journal: Journal of Fluid Mechanics / Volume 943 / 25 July 2022
Published online by Cambridge University Press: 06 June 2022, A3
Opposition control of artificially initiated turbulent spots in a laminar boundary layer was carried out in a low-turbulence wind tunnel with the aim to delay transition to turbulence by modifying the turbulent structure within the turbulent spots. The timing and duration of control, which was carried out using wall-normal jets from a spanwise slot, were pre-determined based on the baseline measurements of the transitional boundary layer. The results indicated that the high-speed region of the turbulent spots was cancelled by opposition control, which was replaced by a carpet of low-speed fluid. The application of the variable-interval time-averaging technique on the velocity fluctuation signals demonstrated a reduction in both the burst duration and intensity within the turbulent spots, but the burst frequency was increased.
Effects of nitrogen application rates on the spatio-temporal variation of leaf SPAD readings on the maize canopy
Y. Y. Li, B. Ming, P. P. Fan, Y. Liu, K. R. Wang, P. Hou, S. K. Li, R. Z. Xie
Journal: The Journal of Agricultural Science / Volume 160 / Issue 1-2 / February 2022
Published online by Cambridge University Press: 16 March 2022, pp. 32-44
The spatio-temporal variation of leaf chlorophyll content is an important crop phenotypic trait that is of great significance for evaluating crop productivity. This study used a soil-plant analysis development (SPAD) chlorophyll meter for non-destructive monitoring of leaf chlorophyll dynamics to characterize the patterns of spatio-temporal variation in the nutritional status of maize (Zea mays L.) leaves under three nitrogen treatments in two cultivars. The results showed that nitrogen levels could affect the maximum leaf SPAD reading (SPADmax) and the duration of high SPAD reading. A rational model was used to measure the changes in SPAD readings over time in single leaves. This model was suitable for predicting the dynamics of the nutrient status for each leaf position under different nitrogen treatments, and model parameter values were position dependent. SPADmax at each leaf decreased with the reduction of nitrogen supply. Leaves at different positions in both cultivars responded differently to higher nitrogen rates. Lower leaves (8th–10th positions) were more sensitive than the other leaves in response to nitrogen. Monitoring the SPAD reading dynamic of lower leaves could accurately characterize and assess the nitrogen supply in plants. The lower leaves in nitrogen-deficient plants had a shorter duration of high SPAD readings compared to nitrogen-sufficient plants; this physiological mechanism should be studied further. In summary, the spatio-temporal variation of plant nitrogen status in maize was analysed to determine critical leaf positions for potentially assisting in the identification of appropriate agronomic management practices, such as the adjustment of nitrogen rates in late fertilization.
Clinical analysis of relapsing polychondritis with airway involvement
S-Y Zhai, R-Y Guo, C Zhang, C-M Zhang, H-Y Yin, B-Q Wang, S-X Wen
Journal: The Journal of Laryngology & Otology / Volume 137 / Issue 1 / January 2023
Published online by Cambridge University Press: 02 February 2022, pp. 96-100
To identify the clinical characteristics, treatment, and prognosis of relapsing polychondritis patients with airway involvement.
Twenty-eight patients with relapsing polychondritis, hospitalised in the First Hospital of Shanxi Medical University between April 2011 and April 2021, were retrospectively analysed.
Fifty per cent of relapsing polychondritis patients with airway involvement had a lower risk of ear and ocular involvement. Relapsing polychondritis patients with airway involvement had a longer time-to-diagnosis (p < 0.001), a poorer outcome following glucocorticoid combined with immunosuppressant treatment (p = 0.004), and a higher recurrence rate than those without airway involvement (p = 0.004). The rates of positive findings on chest computed tomography and bronchoscopy in relapsing polychondritis patients with airway involvement were 88.9 per cent and 85.7 per cent, respectively. Laryngoscopy analysis showed that 66.7 per cent of relapsing polychondritis patients had varying degrees of mucosal lesions.
For relapsing polychondritis patients with airway involvement, drug treatment should be combined with local airway management.
Molecular signalling involved in upper airway remodelling is enhanced in patients with obstructive sleep apnoea
C-C Lin, Y-P Wang, C-H Chiu, Y-K Sun, M-W Lin, I-S Tzeng
Journal: The Journal of Laryngology & Otology / Volume 136 / Issue 11 / November 2022
Published online by Cambridge University Press: 10 January 2022, pp. 1096-1104
This study aimed to elucidate whether molecular signalling involved in upper airway remodelling is enhanced in patients with obstructive sleep apnoea.
Twenty patients with mild obstructive sleep apnoea (control group) and 40 patients with moderate to severe obstructive sleep apnoea (obstructive sleep apnoea group) who desired uvulopalatopharyngoplasty were recruited for the study. After uvulopalatopharyngoplasty, surgical specimens of the uvula were subjected to haematoxylin and eosin, Masson's trichrome and immunohistochemical staining. Western blot and reverse transcriptase-polymerase chain reaction were used to evaluate the protein and messenger RNA expressions.
The obstructive sleep apnoea group showed more severe inflammation, increased collagen deposition and higher immunohistochemical staining intensity for TGF-β and MMP-9, as well as higher protein and messenger RNA expression of MMP-9, VEGF, TGF-β, p38 MAPK, SMAD 2/3, AKT and JNK in the uvula, than the control group.
Patients with obstructive sleep apnoea demonstrated more severe inflammation, increased airway remodelling, and increased protein and messenger RNA expression of pro-inflammatory and pro-fibrotic cytokines in the uvula than control participants.
The variability of maize kernel drying: sowing date, harvest scenario and year
Z. F. Huang, L. Y. Hou, J. Xue, K. R. Wang, R. Z. Xie, P. Hou, B. Ming, S. K. Li
Journal: The Journal of Agricultural Science / Volume 159 / Issue 7-8 / September 2021
Published online by Cambridge University Press: 21 December 2021, pp. 535-543
The extent of the reduction of maize (Zea mays L.) kernel moisture content through drying is closely related to field temperature (or accumulated temperature; AT) following maturation. In 2017 and 2018, we selected eight maize hybrids that are widely planted in Northeastern China to construct kernel drying prediction models for each hybrid based on kernel drying dynamics. In the traditional harvest scenario using the optimal sowing date (OSD), maize kernels underwent drying from 4th September to 5th October, with variation coefficients of 1.0–1.9. However, with a latest sowing date (LSD), drying occurred from 14th September to 31st October, with variation coefficients of 1.3–3.0. In the changed harvest scenario, the drying time of maize sown on the OSD condition was from 12th September to 9th November with variation coefficients of 1.3–3.0, while maize sown on the LSD had drying dates of 26th September to 28th October with variation coefficients of 1.5–3.6. In the future harvest scenario, the Fengken 139 (FK139) and Jingnongke 728 (JNK728) hybrids finished drying on 20th October and 8th November, respectively, when sown on the OSD and had variation coefficients of 2.7–2.8. Therefore, the maize kernel drying time was gradually delayed and was associated with an increased demand for AT ⩾ 0°C late in the growing season. Furthermore, we observed variation among different growing seasons likely due to differences in weather patterns, and that sowing dates impact variations in drying times to a greater extent than harvest scenarios.
Simultaneous bilateral transcutaneous bone conduction device implantation: sound localisation and speech perception in children with bilateral conductive hearing loss
P Chen, L Yang, J Yang, D Wang, Y Li, C Zhao, Y Liu, M Gao, J Zhu, S Li, S Zhao
Journal: The Journal of Laryngology & Otology / Volume 136 / Issue 10 / October 2022
Published online by Cambridge University Press: 13 October 2021, pp. 939-946
Print publication: October 2022
This study investigated the audiometric and sound localisation results in patients with conductive hearing loss after bilateral Bonebridge implantation.
Eight patients with congenital microtia and atresia supplied with bilateral Bonebridge devices were enrolled in this study. Hearing tests and sound localisation were tested under unaided, unilateral and bilateral aided conditions.
Mean functional gain was higher with a bilateral fitting than with a unilateral fitting, especially at 1.0–4.0 kHz (p < 0.05, both). Speech reception thresholds in noise were 2.3 dB (signal-to-noise ratio) better with a bilateral fitting than with a unilateral fitting (p < 0.05). Bilateral fitting gave better sound localisation than unilateral fitting (p < 0.001). Four participants who attended follow-up showed improved sound localisation ability after one year.
Patients demonstrated better hearing threshold, speech reception thresholds in noise and directional hearing with bilateral Bonebridge devices than with a unilateral Bonebridge device. Sound localisation ability with bilateral Bonebridge devices can be improved through long-term training.
Music as social bonding: A cross-cultural perspective
Ivan Yifan Zou, William S.-Y. Wang
Journal: Behavioral and Brain Sciences / Volume 44 / 2021
Published online by Cambridge University Press: 30 September 2021, e95
We extend Savage et al.'s music and social bonding hypothesis by examining it in the context of Chinese music. First, top-down functions such as music as political instrument should receive more attention. Second, solo performance can serve as important cues for social identity. Third, a right match between the tones in lyrics and music contributes also to social bonding.
Cardiac responses in paediatric Pompe disease in the ADVANCE patient cohort
Barry J. Byrne, Steven D. Colan, Priya S. Kishnani, Meredith C. Foster, Susan E. Sparks, James B. Gibson, Kristina An Haack, David W. Stockton, Loren D. M. Peña, Si Houn Hahn, Judith Johnson, Pranoot X. Tanpaiboon, Nancy D. Leslie, David Kronn, Richard E. Hillman, Raymond Y. Wang
Journal: Cardiology in the Young / Volume 32 / Issue 3 / March 2022
Pompe disease results from lysosomal acid α-glucosidase deficiency, which leads to cardiomyopathy in all infantile-onset and occasional late-onset patients. Cardiac assessment is important for its diagnosis and management. This article presents unpublished cardiac findings, concomitant medications, and cardiac efficacy and safety outcomes from the ADVANCE study; trajectories of patients with abnormal left ventricular mass z score at enrolment; and post hoc analyses of on-treatment left ventricular mass and systolic blood pressure z scores by disease phenotype, GAA genotype, and "fraction of life" (defined as the fraction of life on pre-study 160 L production-scale alglucosidase alfa). ADVANCE evaluated 52 weeks' treatment with 4000 L production-scale alglucosidase alfa in ≥1-year-old United States of America patients with Pompe disease previously receiving 160 L production-scale alglucosidase alfa. M-mode echocardiography and 12-lead electrocardiography were performed at enrolment and Week 52. Sixty-seven patients had complete left ventricular mass z scores, decreasing at Week 52 (infantile-onset patients, change −0.8 ± 1.83; 95% confidence interval −1.3 to −0.2; all patients, change −0.5 ± 1.71; 95% confidence interval −1.0 to −0.1). Patients with "fraction of life" <0.79 had left ventricular mass z score decreasing (enrolment: +0.1 ± 3.0; Week 52: −1.1 ± 2.0); those with "fraction of life" ≥0.79 remained stable (enrolment: −0.9 ± 1.5; Week 52: −0.9 ± 1.4). Systolic blood pressure z scores were stable from enrolment to Week 52, and no cohort developed systemic hypertension. Eight patients had Wolff–Parkinson–White syndrome. Cardiac hypertrophy and dysrhythmia in ADVANCE patients at or before enrolment were typical of Pompe disease. Four-thousand L alglucosidase alfa therapy maintained fractional shortening, left ventricular posterior and septal end-diastolic thicknesses, and improved left ventricular mass z score.
Trial registry: ClinicalTrials.gov Identifier: NCT01526785 https://clinicaltrials.gov/ct2/show/NCT01526785.
Social Media Statement: Post hoc analyses of the ADVANCE study cohort of 113 children support ongoing cardiac monitoring and concomitant management of children with Pompe disease on long-term alglucosidase alfa to functionally improve cardiomyopathy and/or dysrhythmia.
Direct acceleration of an annular attosecond electron slice driven by near-infrared Laguerre–Gaussian laser
C. Jiang, W. P. Wang, S. Weber, H. Dong, Y. X. Leng, R. X. Li, Z. Z. Xu
Journal: High Power Laser Science and Engineering / Volume 9 / 2021
Published online by Cambridge University Press: 26 May 2021, e44
A new near-infrared direct acceleration mechanism driven by a Laguerre–Gaussian laser is proposed to stably accelerate and concentrate an electron slice in both the longitudinal and transverse directions in vacuum. Three-dimensional simulations show that a 2-μm circularly polarized ${\mathrm{LG}}_p^l$ (p = 0, l = 1, σz = −1) laser can directly manipulate attosecond electron slices in additional dimensions (angular directions) and give them annular structures and angular momenta. These annular vortex attosecond electron slices are expected to have some novel applications such as in the collimation of antiprotons in conventional linear accelerators, edge-enhancement electron imaging, structured X-ray generation, and analysis and manipulation of nanomaterials.
Early development of artificially initiated turbulent spots
Y. X. Wang, K.-S. Choi, M. Gaster, C. Atkin, V. Borodulin, Y. Kachanov
Journal: Journal of Fluid Mechanics / Volume 916 / 10 June 2021
Published online by Cambridge University Press: 06 April 2021, A1
An experimental investigation was carried out in a low-turbulence wind tunnel to study the early development of artificially initiated turbulent spots in a laminar boundary layer over a flat plate. The reproducibility of the experiments allowed us to observe fine structural details that have not been observed previously. Initial velocity disturbances quickly developed into hairpin-like structures that multiplied downstream, which increased the width, length and height of the incipient turbulent spots. Only those disturbances that were greater than a threshold value developed into turbulent spots while the others decayed. The rate of development was also affected by the duration of the initial disturbances. We found that the behaviour of turbulence generation within a turbulent spot is similar to the burst events in the turbulent boundary layer, where ejection events are followed by sweep events.
Multi-agent cooperative multi-model adaptive guidance law
S.B. Wang, S.C. Wang, Z.G. Liu, S. Zhang, Y. Guo
Journal: The Aeronautical Journal / Volume 125 / Issue 1288 / June 2021
Published online by Cambridge University Press: 04 March 2021, pp. 1103-1129
A multi-agent engagement scenario is considered in which a high-value aircraft launches two defenders to intercept two homing missiles aimed at the aircraft. Under the assumption that all aircraft have first-order linear dynamic characteristics, a combined multiple-model adaptive estimation (MMAE) and a two-way cooperative optimal guidance law are proposed for the target–defenders team. Considering the full cooperation of the target and the two defenders, the two-way cooperative strategies provide the analytical expressions for their optimal control input, enabling the target–defenders team to intercept the missiles with minimal control effort. To successfully intercept the missiles, MMAE is used to identify the guidance laws adopted by the missiles and estimate their states. The simulation results show that a target cooperating with the defenders by performing lure manoeuvres against the missiles can improve the defenders' guidance performance as well as reduce the control effort the defenders need to intercept the missiles.
The impact of COVID-19 on subthreshold depressive symptoms: a longitudinal study
Y. H. Liao, B. F. Fan, H. M. Zhang, L. Guo, Y. Lee, W. X. Wang, W. Y. Li, M. Q. Gong, L. M. W. Lui, L. J. Li, C. Y. Lu, R. S. McIntyre
Journal: Epidemiology and Psychiatric Sciences / Volume 30 / 2021
Published online by Cambridge University Press: 15 February 2021, e20
The coronavirus disease 2019 (COVID-19) pandemic represents an unprecedented threat to mental health. Herein, we assessed the impact of COVID-19 on subthreshold depressive symptoms and identified potential mitigating factors.
Participants were from Depression Cohort in China (ChiCTR registry number 1900022145). Adults (n = 1722) with subthreshold depressive symptoms were enrolled between March and October 2019 in a 6-month, community-based interventional study that aimed to prevent clinical depression using psychoeducation. A total of 1506 participants completed the study in Shenzhen, China: 726 participants, who completed the study between March 2019 and January 2020 (i.e. before COVID-19), comprised the 'wave 1' group; 780 participants, who were enrolled before COVID-19 and completed the 6-month endpoint assessment during COVID-19, comprised 'wave 2'. Symptoms of depression, anxiety and insomnia were assessed at baseline and endpoint (i.e. 6-month follow-up) using the Patient Health Questionnaire-9 (PHQ-9), Generalised Anxiety Disorder-7 (GAD-7) and Insomnia Severity Index (ISI), respectively. Measures of resilience and regular exercise were assessed at baseline. We compared the mental health outcomes between wave 1 and wave 2 groups. We additionally investigated how mental health outcomes changed across disparate stages of the COVID-19 pandemic in China, i.e. peak (7–13 February), post-peak (14–27 February), remission plateau (28 February−present).
COVID-19 increased the risk for three mental outcomes: (1) depression (odds ratio [OR] = 1.30, 95% confidence interval [CI]: 1.04–1.62); (2) anxiety (OR = 1.47, 95% CI: 1.16–1.88) and (3) insomnia (OR = 1.37, 95% CI: 1.07–1.77). The highest proportion of probable depression and anxiety was observed post-peak, with 52.9% and 41.4%, respectively. Greater baseline resilience scores had a protective effect on the three main outcomes (depression: OR = 0.26, 95% CI: 0.19–0.37; anxiety: OR = 1.22, 95% CI: 0.14–0.33 and insomnia: OR = 0.18, 95% CI: 0.11–0.28). Furthermore, regular physical activity mitigated the risk for depression (OR = 0.79, 95% CI: 0.79–0.99).
The COVID-19 pandemic exerted a highly significant and negative impact on symptoms of depression, anxiety and insomnia. Mental health outcomes fluctuated as a function of the duration of the pandemic and were alleviated to some extent with the observed decline in community-based transmission. Augmenting resiliency and regular exercise provide an opportunity to mitigate the risk for mental health symptoms during this severe public health crisis.
Alcohol, coffee and tea intake and the risk of cognitive deficits: a dose–response meta-analysis
L. S. Ran, W. H. Liu, Y. Y. Fang, S. B. Xu, J. Li, X. Luo, D. J. Pan, M. H. Wang, W. Wang
Lifestyle interventions are an important and viable approach for preventing cognitive deficits. However, the results of studies on alcohol, coffee and tea consumption in relation to cognitive decline have been divergent, likely due to confounds from dose–response effects. This meta-analysis aimed to find the dose–response relationship between alcohol, coffee or tea consumption and cognitive deficits.
Prospective cohort studies or nested case-control studies in a cohort investigating the risk factors of cognitive deficits were searched in PubMed, Embase, the Cochrane and Web of Science up to 4th June 2020. Two authors searched the databases and extracted the data independently. We also assessed the quality of the studies with the Newcastle-Ottawa scale. Stata 15.0 software was used to perform model estimation and plot the linear or nonlinear dose–response relationship graphs.
The search identified 29 prospective studies from America, Japan, China and some European countries. The dose–response relationships showed that compared to non-drinkers, low consumption (<11 g/day) of alcohol could reduce the risk of cognitive deficits or only dementias, but there was no significant effect of heavier drinking (>11 g/day). Low consumption of coffee reduced the risk of any cognitive deficit (<2.8 cups/day) or dementia (<2.3 cups/day). Green tea consumption was a significant protective factor for cognitive health (relative risk, 0.94; 95% confidence interval, 0.92–0.97), with each daily cup of tea bringing a 6% reduction in the risk of cognitive deficits.
Light consumption of alcohol (<11 g/day) and coffee (<2.8 cups/day) was associated with a reduced risk of cognitive deficits. The cognitive benefits of green tea consumption increased with daily consumption.
A stick 5 cm long, a stick 9 cm long, and a third stick $n$ cm long form a triangle. What is the sum of all possible whole number values of $n$?
Using the Triangle Inequality, we see that $n > 4$ and $n < 14,$ so $n$ can be any integer from $5$ to $13,$ inclusive. The sum can be calculated in several ways, but regardless, $5 + 6 + 7 + 8 + 9 + 10 + 11 + 12 + 13 = \boxed{81}.$
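A quick brute-force check of this count (illustrative only; the helper below is not part of the original solution):

```python
# Enumerate whole-number third sides that satisfy the strict triangle inequality.
def can_form_triangle(a, b, c):
    return a + b > c and a + c > b and b + c > a

valid = [n for n in range(1, 30) if can_form_triangle(5, 9, n)]
print(valid)       # [5, 6, 7, 8, 9, 10, 11, 12, 13]
print(sum(valid))  # 81
```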
How could time only have started with the Big Bang?
I understand that before the Big Bang, time (as well as dimensions) didn't exist. But how could this be? If there was no time then nothing could change, and so time itself couldn't come into existence.
big-bang-theory time
Stormtrooper
Nobody has a clue. "If there was no time then nothing could change, and so time itself couldn't come into existence." That's a great thought - but it could go either way. You could say "time didn't exist before time existed, so it could have started 'any time' or 'every time'" :O
– Fattie
Time is a relative concept. There might be time before the big bang. But, we set time = 0 at the Big Bang. Also, note that the existence of the Big Bang is suggested by theories and observations. However much we love the Big Bang (pun intended), it may or may not be correct.
– Kornpob Bhirombhakdi
Probably better for Physics. I've heard it said several times that time began with the big bang. A more detailed explanation of that statement might be helpful, if it can be put in layman's terms.
I like to think of "what happened before the big bang?" as analogous to "what is to the North of the North pole?".
– Steve Linton
Kind of involves semantics of "time".
– Inertial Ignorance
This is both a physical and philosophical question because it depends what you mean by "time".
From a physical perspective, time doesn't really exist. We know that things change, and we measure a rate of change in relation to other things that also change (like clocks), but how we define time as an absolute concept is not something we have a firm answer to. It may not exist at all. To understand the details of this you need to learn a whole lot of quantum mechanics, string theory and theoretical physics in general, which will take about a decade, two degrees and a doctorate. Alternatively, I've been reading a great book called "The Order of Time" by Carlo Rovelli. He is a theoretical physicist who has worked for decades to understand this stuff and then written all about it with hardly any maths and some drawings of Smurfs. I would highly recommend it.
Also, from a more philosophical perspective, if the Big Bang started with a singularity in which time and space and all matter were compressed into one single point with no dimensions, then that point contained no information about what was before it. If we can never know what was before the singularity (or even if there was anything before), then there is very little point in discussing it, so we can choose to set that as the zero point of time and call it the beginning because anything that happened before that is irrelevant. Like starting a stopwatch to time a race when the cars set off - there was time before, but that's not important for the race so we call the start of the race a time of zero.
FJC
Here's another way to look at it: winding back the age of the Universe towards $t=0$ is a little bit like the reverse of watching (from a distance) an object fall into a black hole.
In the case of a black hole, we can (theoretically) see an object accelerating towards the event horizon. From our distant frame of reference, as it gets closer, relativistic effects result in time appearing to slow down. The closer it gets to the event horizon, the faster it falls but the slower this appears to us, until the object appears to "freeze" right on the event horizon.
Except, our models suggest that's not what actually happens. Instead, it takes an infinite amount of time for the object to arrive at the event horizon, and since we can't ever be at $t=\infty$ (other than in our mathematical modelling), in practical terms we can never see the object actually reach the event horizon.
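To put a rough formula behind that picture (standard Schwarzschild coordinates, radial infall — an illustration only, nothing above depends on it): a clock held at radius $r$ runs slow relative to the distant observer's time $t$ by $d\tau=\sqrt{1-r_s/r}\,dt$, and near the horizon the coordinate description of the fall satisfies approximately
$$\frac{dr}{dt}\approx -c\left(1-\frac{r_s}{r}\right) \quad\Longrightarrow\quad t \sim -\frac{r_s}{c}\ln\!\left(\frac{r-r_s}{r_s}\right)\to\infty \ \text{ as } r\to r_s.$$
The faller's own proper time to cross the horizon is finite; it is only the distant bookkeeping coordinate $t$ that diverges.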
That's just an example of how time can be considered as asymptotic: as distance approaches 0, time approaches infinity. There's also an asymptotic function in the other time direction: in the Big Bang for example, as time approaches 0, density approaches infinity. To put it somewhat simplistically, some models of physics treat one or both of these situations as mathematical singularities.
Since our cosmological models don't quite work as we wind back the Universe to the earliest moments of the Big Bang – there's no current theory that describes what was happening before $t_P$ or Planck time, at $10^{-43}$ seconds – we simply cannot say what state the Universe was in at $t=0$.
One possibility is that $t=0$ only exists mathematically, just like $t=\infty$ for an object falling into a black hole, and that in our mundane real Universe, it simply isn't possible to wind back time to an actual singularity. But there are other models that treat singularities as real, and this whole area is a very active and highly contested field of research. I'm simply offering one way to conceptualise a possible Universe with no "start" time.
Chappo Hasn't Forgotten Monica
So according to general relativity, there are events in space-time. There are many different ways of putting coordinates on these events so that each one can be said to occur at a particular (x,y,z,t) set of coordinates and the laws of physics all work correctly when you put in those xs, ys, zs and ts, even though many intuitive questions like "did this happen before or after that" may have different answers in different sets of coordinates. All of these coordinate frames, though, have something in common, namely that there is a minimum of t. For example, in the framework you get by assuming that the Earth is more or less at rest, no event gets a t value less than about -13.8 billion years.
So there was no before the big bang. What is wrong (at least according to GR) is your assumption that there was a "before". It's a little like asking what lies to the North of the North pole. There is just no more "North". Any latitude and longitude-like scheme for putting coordinates on the Earth will have a maximum latitude and there are no points North of that.
Steve Linton
Classical Mechanics (Goldstein)
Classical Mechanics is a textbook about that subject written by Herbert Goldstein, a professor at Columbia University. Intended for advanced undergraduate and beginning graduate students, it has been one of the standard references in its subject around the world since its first publication in 1950.[1][2]
Classical Mechanics
Front cover of the third edition
Author: Herbert Goldstein
Country: United States of America
Language: English
Subject: Classical mechanics
Genre: Non-fiction
Publisher: Addison-Wesley
Publication date: 1951, 1980, 2002
Media type: Print
Pages: 638
ISBN: 978-0-201-65702-9
Overview
In the second edition, Goldstein corrected all the errors that had been pointed out, added a new chapter on perturbation theory, a new section on Bertrand's theorem, and another on Noether's theorem. Other arguments and proofs were simplified and supplemented.[3]
Before the death of its primary author in 2005, a new (third) edition of the book was released, with the collaboration of Charles P. Poole and John L. Safko from the University of South Carolina.[4] In the third edition, the book discusses at length various mathematically sophisticated reformations of Newtonian mechanics, namely analytical mechanics, as applied to particles, rigid bodies and continua. In addition, it covers in some detail classical electromagnetism, special relativity, and field theory, both classical and relativistic. There is an appendix on group theory. New to the third edition include a chapter on nonlinear dynamics and chaos, a section on the exact solutions to the three-body problem obtained by Euler and Lagrange, a discussion of the damped driven pendulum that explains the Josephson junctions. This is counterbalanced by the reduction of several existing chapters motivated by the desire to prevent this edition from exceeding the previous one in length. For example, the discussions of Hermitian and unitary matrices were omitted because they are more relevant to quantum mechanics rather than classical mechanics, while those of Routh's procedure and time-independent perturbation theory were reduced.[5]
Table of Contents (3rd Edition)
• Preface
• Chapter 1: Survey of Elementary Principles
• Chapter 2: Variational Principles and Lagrange's Equations
• Chapter 3: The Central Force Problem
• Chapter 4: The Kinematics of Rigid Body Motion
• Chapter 5: The Rigid Body Equations of Motion
• Chapter 6: Oscillations
• Chapter 7: The Classical Mechanics of the Special Theory of Relativity
• Chapter 8: The Hamilton Equations of Motion
• Chapter 9: Canonical Transformations
• Chapter 10: Hamilton–Jacobi Theory and Action-Angle Coordinates
• Chapter 11: Classical Chaos
• Chapter 12: Canonical Perturbation Theory
• Chapter 13: Introduction to the Lagrangian and Hamiltonian Formulations for Continuous Systems and Fields
• Appendix A: Euler Angles in Alternate Conventions and Cayley–Klein Parameters
• Appendix B: Groups and Algebras
• Appendix C: Solutions to Select Exercises
• Select Bibliography
• Author Index
• Subject Index
Editions
1. Goldstein, Herbert (1950). Classical Mechanics (1st ed.). Addison-Wesley.
2. Goldstein, Herbert (1951). Classical Mechanics (1st ed.). Addison-Wesley. ASIN B000OL8LOM.
3. Goldstein, Herbert (1980). Classical Mechanics (2nd ed.). Addison-Wesley. ISBN 978-0-201-02918-5.
4. Goldstein, Herbert; Poole, C. P.; Safko, J. L. (2001). Classical Mechanics (3rd ed.). Addison-Wesley. ISBN 978-0-201-65702-9.
Reception
First edition
S.L. Quimby of Columbia University noted that the first half of the first edition of the book is dedicated to the development of Lagrangian mechanics with the treatment of velocity-dependent potentials, which are important in electromagnetism, and the use of the Cayley-Klein parameters and matrix algebra for rigid-body dynamics. This is followed by a comprehensive and clear discussion of Hamiltonian mechanics. End-of-chapter references improve the value of the book. Quimby pointed out that although this book is suitable for students preparing for quantum mechanics, it is not helpful for those interested in analytical mechanics because its treatment omits too much. Quimby praised the quality of printing and binding which make the book attractive.[6]
In the Journal of the Franklin Institute, Rupen Eskergian noted that the first edition of Classical Mechanics offers a mature take on the subject using vector and tensor notations and with a welcome emphasis on variational methods. This book begins with a review of elementary concepts, then introduces the principle of virtual work, constraints, generalized coordinates, and Lagrangian mechanics. Scattering is treated in the same chapter as central forces and the two-body problem. Unlike most other books on mechanics, this one elaborates upon the virial theorem. The discussion of canonical and contact transformations, the Hamilton-Jacobi theory, and action-angle coordinates is followed by a presentation of geometric optics and wave mechanics. Eskergian believed this book serves as a bridge to modern physics.[7]
Writing for The Mathematical Gazette on the first edition, L. Rosenhead congratulated Goldstein for a lucid account of classical mechanics leading to modern theoretical physics, which he believed would stand the test of time alongside acknowledged classics such as E.T. Whittaker's Analytical Dynamics and Arnold Sommerfeld's Lectures on Theoretical Physics. This book is self-contained and is suitable for students who have completed courses in mathematics and physics of the first two years of university. End-of-chapter references with comments and some example problems enhance the book. Rosenhead also liked the diagrams, index, and printing.[8]
Second edition
Concerning the second printing of the first edition, Vic Twersky of the Mathematical Research Group at New York University considered the book to be of pedagogical merit because it explains things in a clear and simple manner, and its humor is not forced. Published in the 1950s, this book replaced the outdated and fragmented treatises and supplements typically assigned to beginning graduate students as a modern text on classical mechanics with exercises and examples demonstrating the link between this and other branches of physics, including acoustics, electrodynamics, thermodynamics, geometric optics, and quantum mechanics. It also has a chapter on the mechanics of fields and continua. At the end of each chapter, there is a list of references with the author's candid reviews of each. Twersky said that Goldstein's Classical Mechanics is more suitable for physicists compared to the much older treatise Analytical Dynamics by E.T. Whittaker, which he deemed more appropriate for mathematicians.[1]
E. W. Banhagel, an instructor from Detroit, Michigan, observed that despite requiring no more than multivariable and vector calculus, the first edition of Classical Mechanics successfully introduces some sophisticated new ideas in physics to students. Mathematical tools are introduced as needed. He believed that the annotated references at the end of each chapter are of great value.[9]
Third edition
Stephen R. Addison from the University of Central Arkansas commented that while the first edition of Classical Mechanics was essentially a treatise with exercises, the third has become less scholarly and more of a textbook. This book is most useful for students who are interested in learning the necessary material in preparation for quantum mechanics. The presentation of most materials in the third edition remain unchanged compared to that of the second, though many of the old references and footnotes were removed. Sections on the relations between the action-angle coordinates and the Hamilton-Jacobi equation with the old quantum theory, wave mechanics, and geometric optics were removed. Chapter 7, which deals with special relativity, has been heavily revised and could prove to be more useful to students who want to study general relativity than its equivalent in previous editions. Chapter 11 provides a clear, if somewhat dated, survey of classical chaos. Appendix B could help advanced students refresh their memories but may be too short to learn from. In all, Addison believed that this book remains a classic text on the eighteenth- and nineteenth-century approaches to theoretical mechanics; those interested in a more modern approach – expressed in the language of differential geometry and Lie groups – should refer to Mathematical Methods of Classical Mechanics by Vladimir Arnold.[4]
Martin Tiersten from the City University of New York pointed out a serious error in the book that persisted in all three editions and even got promoted to the front cover of the book. The offending closed orbit, depicted in a diagram on page 80 (as Figure 3.7), is impossible for an attractive central force because the path cannot be concave away from the center of force. A similarly erroneous diagram appears on page 91 (as Figure 3.13). Tiersten suggested that the reason why this error remained unnoticed for so long is that advanced mechanics texts typically do not use vectors in their treatment of central-force problems, in particular the tangential and normal components of the acceleration vector. He wrote, "Because an attractive force is always directed in toward the center of force, the direction toward the center of curvature at the turning points must be toward the center of force." In response, Poole and Safko acknowledged the error and stated they were working on a list of errata.
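One standard way to see the constraint Tiersten describes (a textbook decomposition, not taken from the reviews above) is to write the acceleration in tangential–normal form,
$$\mathbf{a}=\frac{dv}{dt}\,\hat{\mathbf{T}}+\frac{v^{2}}{\rho_c}\,\hat{\mathbf{N}},$$
where $\hat{\mathbf{N}}$ points toward the center of curvature and $\rho_c$ is the radius of curvature. At an apsis (turning point) of a central-force orbit the velocity is perpendicular to the radius vector, so the acceleration — which for an attractive force points at the force center — is purely normal there. Hence $\hat{\mathbf{N}}$, and with it the center of curvature, must point toward the center of force, which is exactly the condition the criticized figures violate.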
See also
• Newtonian mechanics
• Classical Mechanics (Kibble and Berkshire)
• Course of Theoretical Physics (Landau and Lifshitz)
• List of textbooks on classical and quantum mechanics
• Introduction to Electrodynamics (Griffiths)
• Classical Electrodynamics (Jackson)
References
1. Goldstein, Herbert; Twersky, Vic (September 1952). "Classical Mechanics". Physics Today. 5 (9): 19–20. Bibcode:1952PhT.....5i..19G. doi:10.1063/1.3067728.
2. Tiersten, Martin (February 2003). "Errors in Goldstein's Classical Mechanics". American Journal of Physics. American Association of Physics Teachers. 71 (2): 103. Bibcode:2003AmJPh..71..103T. doi:10.1119/1.1533731. ISSN 0002-9505.
3. Goldstein, Herbert (1980). "Preface to the Second Edition". Classical Mechanics. Addison-Wesley. ISBN 0-201-02918-9.
4. Addison, Stephen R. (July 2002). "Classical Mechanics, 3rd ed". American Journal of Physics. 70 (7): 782–3. Bibcode:2002AmJPh..70..782G. doi:10.1119/1.1484149. ISSN 0002-9505.
5. Goldstein, Herbert; Safko, John; Poole, Charles (2002). "Preface to the Third Edition". Classical Mechanics. Addison-Wesley. ISBN 978-0-201-65702-9.
6. Quimby, S.L. (July 21, 1950). "Classical Mechanics by Herbert Goldstein". Book Reviews. Science. American Association for the Advancement of Science (AAAS). 112 (2899): 95. doi:10.1126/science.112.2899.95.a. JSTOR 1678638.
7. Eskergian, Rupen (September 1950). "Classical Mechanics, by Herbert Goldstein". Journal of the Franklin Institute. 250 (3): 273. doi:10.1016/0016-0032(50)90712-5.
8. Rosenhead, L. (February 1951). "Classical Mechanics by Herbert Goldstein". Review. The Mathematical Gazette. The Mathematical Association. 35 (311): 66–7. doi:10.2307/3610571. JSTOR 3610571.
9. Banhagel, E. W. (October 1952). "Classical Mechanics by Herbert Goldstein". Review. The Mathematics Teacher. National Council of Teachers of Mathematics. 45 (6): 485. JSTOR 27954117.
External links
• Errata, corrections, and comments on the third edition. John L. Safko and Charles P. Poole. University of South Carolina.
Aperiodic finite state automaton
An aperiodic finite-state automaton (also called a counter-free automaton) is a finite-state automaton whose transition monoid is aperiodic.
Properties
A regular language is star-free if and only if it is accepted by an automaton with a finite and aperiodic transition monoid. This result of algebraic automata theory is due to Marcel-Paul Schützenberger.[1] In particular, the minimum automaton of a star-free language is always counter-free (however, a star-free language may also be recognized by other automata which are not aperiodic).
A counter-free language is a regular language for which there is an integer n such that for all words x, y, z and integers m ≥ n we have xymz in L if and only if xynz in L. Another way to state Schützenberger's theorem is that star-free languages and counter-free languages are the same thing.
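As an illustration of the monoid criterion (a self-contained sketch; the function names and the two toy automata below are made up for this example and are not taken from the cited sources), one can generate the transition monoid of a DFA and check that every transformation $t$ satisfies $t^{n}=t^{n+1}$ for some $n$:

```python
def transition_monoid(num_states, alphabet, delta):
    """Generate the transition monoid of a DFA.

    Each element is a tuple t with t[s] = state reached from state s by some word;
    delta maps (state, letter) -> state.
    """
    letters = {a: tuple(delta[(s, a)] for s in range(num_states)) for a in alphabet}
    identity = tuple(range(num_states))
    monoid, frontier = {identity}, [identity]
    while frontier:
        t = frontier.pop()
        for a in alphabet:
            # transformation of the word w·a, where t is the transformation of w
            ta = tuple(letters[a][t[s]] for s in range(num_states))
            if ta not in monoid:
                monoid.add(ta)
                frontier.append(ta)
    return monoid

def is_aperiodic(monoid):
    """A finite monoid is aperiodic iff each element t has t^n = t^(n+1) for some n."""
    compose = lambda s, t: tuple(t[s[i]] for i in range(len(s)))
    for t in monoid:
        seen, power = {t}, t
        while True:
            nxt = compose(power, t)
            if nxt == power:      # t^n = t^(n+1): this element is fine
                break
            if nxt in seen:       # powers of t cycle with period >= 2: a "counter"
                return False
            seen.add(nxt)
            power = nxt
    return True

# Words over {a, b} with no two consecutive a's: star-free, hence counter-free.
no_aa = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 2, (1, 'b'): 0, (2, 'a'): 2, (2, 'b'): 2}
# Words with an even number of a's: the letter 'a' acts as a swap, a genuine counter.
parity = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 0, (1, 'b'): 1}

print(is_aperiodic(transition_monoid(3, 'ab', no_aa)))   # True
print(is_aperiodic(transition_monoid(2, 'ab', parity)))  # False
```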
An aperiodic automaton satisfies the Černý conjecture.[2]
References
1. Schützenberger, Marcel-Paul (1965). "On Finite Monoids Having Only Trivial Subgroups" (PDF). Information and Control. 8 (2): 190–194. doi:10.1016/s0019-9958(65)90108-7.
2. Trahtman, Avraham N. (2007). "The Černý conjecture for aperiodic automata". Discrete Math. Theor. Comput. Sci. 9 (2): 3–10. ISSN 1365-8050. Zbl 1152.68461. Archived from the original on 2015-09-23. Retrieved 2014-04-05.
• McNaughton, Robert; Papert, Seymour (1971). Counter-free Automata. Research Monograph. Vol. 65. With an appendix by William Henneman. MIT Press. ISBN 0-262-13076-9. Zbl 0232.94024.
• Sonal Pratik Patel (2010). An Examination of Counter-Free Automata (PDF) (Masters Thesis). San Diego State University. — An intensive examination of McNaughton, Papert (1971).
• Thomas Colcombet (2011). "Green's Relations and their Use in Automata Theory". In Dediu, Adrian-Horia; Inenaga, Shunsuke; Martín-Vide, Carlos (eds.). Proc. Language and Automata Theory and Applications (LATA) (PDF). LNCS. Vol. 6638. Springer. pp. 1–21. ISBN 978-3-642-21253-6. — Uses Green's relations to prove Schützenberger's and other theorems.
Automata theory: formal languages and formal grammars
Chomsky hierarchy | Grammars | Languages | Abstract machines
• Type-0 | Unrestricted | Recursively enumerable | Turing machine
• — | (no common name) | Decidable | Decider
• Type-1 | Context-sensitive | Context-sensitive | Linear-bounded
• — | Positive range concatenation | Positive range concatenation* | PTIME Turing Machine
• — | Indexed | Indexed* | Nested stack
• — | — | — | Thread automaton
• — | Linear context-free rewriting systems | Linear context-free rewriting language | restricted Tree stack automaton
• — | Tree-adjoining | Tree-adjoining | Embedded pushdown
• Type-2 | Context-free | Context-free | Nondeterministic pushdown
• — | Deterministic context-free | Deterministic context-free | Deterministic pushdown
• — | Visibly pushdown | Visibly pushdown | Visibly pushdown
• Type-3 | Regular | Regular | Finite
• — | — | Star-free | Counter-free (with aperiodic finite monoid)
• — | Non-recursive | Finite | Acyclic finite
Each category of languages, except those marked by a *, is a proper subset of the category directly above it. Any language in each category is generated by a grammar and by an automaton in the category in the same line.
Uncountably Infinite Nuts
Oops, missed one
An Excuse to Learn German
Many people know there are more real numbers than counting numbers, and some don't. Some even find the results contentious or unsettling. Here, rather than getting into the social ramifications of this concept, I'd like to run through the operative part of the most well known proof. I will talk about the diagonal argument.
The diagonal argument, as far as I know, was first introduced by Georg Cantor in his 1891 article "Über eine elementare Frage der Mannigfaltigkeitslehre". I wasn't able to find a pdf copy of the original paper, but Jan scared up this excerpt from a 1932 book about Cantor written by Ernst Zermelo...I think I can trust the source. The 1891 proof wasn't actually Cantor's first proof that real numbers are more numerous than counting numbers, but it is prettier than his 1874 effort.
The construction of the diagonal argument is pretty simple. Consider a gambler, recently relieved of his mortal obligations. When he arrives in hell, the devil delivers his twisted punishment. The ex-man is to flip a coin for the rest of time. In mathematical terms, the results of this eternal obsession would look like: $$\text{game}_1=\text{toss}_{1,1},\text{toss}_{1,2},\ldots,\text{toss}_{1,n},\ldots$$
It so happens that the entry queue to the wrong side of the Styx is rather long, and Satan has to streamline his process. So, every wayward gambler, for all time, gets the same compulsory trigger thumb (hey, it's original to them). $$ \begin{array}{c} \text{game}_1=\text{toss}_{1,1},\text{toss}_{1,2},\ldots\text{toss}_{1,n},\ldots\ \\ \text{game}_2=\text{toss}_{2,1},\text{toss}_{2,2},\ldots\text{toss}_{2,n},\ldots\ \\ \vdots \\ \text{game}_m=\text{toss}_{m,1},\text{toss}_{m,2},\ldots\text{toss}_{m,n},\ldots\ \\ \vdots \end{array} $$
Now, it would seem perfectly reasonable to think that infinitely many sinners would produce every possible permutation of coin tosses. However, imagine a game where the first toss went the opposite way from the first toss of the first gambler. Then the second toss went differently from the second toss of the second gambler. And the third toss went differently from the third toss of the third gambler, and so on. The game would be different from every game in the list. Meaning, even this endless list of games won't account for all possible results.
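In symbols, writing the rogue game's tosses as $\text{toss}_{\text{new},n}$, the construction is just
$$\text{toss}_{\text{new},n}=\begin{cases}\text{H} & \text{if } \text{toss}_{n,n}=\text{T},\\ \text{T} & \text{if } \text{toss}_{n,n}=\text{H},\end{cases}$$
so the new game disagrees with $\text{game}_m$ at its $m$-th toss for every $m$, and therefore appears nowhere in the list.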
My favorite part of this proof is the reveal. Up to this point, there has been no need to explicitly invoke real numbers or counting numbers. The argument doesn't need them. However, as anyone who has tried to write the decimal form of $^1/_3$ will testify, the digits of a real number can go on forever. So, it's not too hard to see the sequence of tosses as the fractional digits of a binary number. What's cooler is that by starting with one game, and then defining another, and another, we use precisely the same construction method as the counting numbers; they're built into the problem. The journey from assumptions to theorem isn't a laborious slog; it's over before you realize you've started. Needless to say, I like it.
\begin{document}
\title[Periodic 2CH system]{Periodic conservative solutions for the two-component Camassa--Holm system}
\author[K. Grunert]{Katrin Grunert} \address[K. Grunert]{\newline
Department of Mathematical Sciences\\ Norwegian University of Science and Technology\\ NO-7491 Trondheim\\ Norway} \email{\mailto{[email protected]}} \urladdr{\url{http://www.math.ntnu.no/~katring/}}
\author[H. Holden]{Helge Holden} \address[H. Holden]{\newline
Department of Mathematical Sciences\\
Norwegian University of Science and Technology\\
NO-7491 Trondheim\\ Norway\newline {\rm and} \newline Centre of
Mathematics for Applications\\ University of Oslo\\
NO-0316 Oslo\\ Norway} \email{\mailto{[email protected]}} \urladdr{\url{http://www.math.ntnu.no/~holden/}}
\author[X. Raynaud]{Xavier Raynaud} \address[X. Raynaud]{\newline
Centre of Mathematics for Applications\\
University of Oslo\\ NO-0316 Oslo\\ Norway} \email{\mailto{[email protected]}} \urladdr{\url{http://folk.uio.no/xavierra/}}
\date{\today} \thanks{Research supported in part by the Research Council of Norway, and the Austrian Science Fund (FWF) under Grant No.~J3147.} \thanks{In: {\em Spectral Analysis, Differential Equations and Mathematical Physics}, Proc. Symp. Pure Math., Amer. Math. Soc. (to appear)} \subjclass[2010]{Primary: 35Q53, 35B35; Secondary: 35B20} \keywords{Two-component Camassa--Holm system, periodic and conservative solutions}
\dedicatory{Dedicated with admiration to Fritz Gesztesy on the occasion of his sixtieth anniversary}
\begin{abstract}
We construct a global continuous semigroup of weak periodic conservative solutions to the two-component Camassa--Holm system, $u_t-u_{txx}+\kappa u_x+3uu_x-2u_xu_{xx}-uu_{xxx}+\eta\rho\rho_x=0$ and $\rho_t+(u\rho)_x=0$, for initial data $(u,\rho)|_{t=0}$ in $H^1_{\rm per}\times L^2_{\rm per}$. It is necessary to augment the system with an associated energy to identify the conservative solution. We study the stability of these periodic solutions by constructing a Lipschitz metric. Moreover, it is proved that if the density $\rho$ is bounded away from zero, the solution is smooth. Furthermore, it is shown that given a sequence $\rho_0^n$ of initial values for the densities that tend to zero, then the associated solutions $u^n$ will approach the global conservative weak solution of the Camassa--Holm equation. Finally it is established how the characteristics govern the smoothness of the solution. \end{abstract} \maketitle
\section{Introduction} In this paper we analyze periodic and conservative weak global solutions of the two-component Camassa--Holm (2CH) system which reads (with $\kappa\in\mathbb{R}$ and $\eta\in(0,\infty)$) \begin{subequations}\label{eq:chkappa00}
\begin{align}
u_t-u_{txx}+\kappa u_x+3uu_x-2u_xu_{xx}-uu_{xxx}+\eta\rho\rho_x&=0,\\
\rho_t+(u\rho)_x&=0.
\end{align} \end{subequations} The special case when $\rho$ vanishes identically reduces the system to the celebrated and well-studied Camassa--Holm (CH) equation, first studied in the seminal paper \cite{CH:93}. The present system was first introduced by Olver and Rosenau in \cite[Eq.~(43)]{OlverRosenau}, and derived in the context of water waves in \cite{MR2474608}, showing $\eta$ positive and $\rho$ nonnegative to be the physically relevant case. Conservative solutions on the full line for the 2CH system have been studied, see, e.g., \cite{GHR2}. However, periodic and conservative solutions for the 2CH system have not been analyzed so far, and this paper aims to fill that gap. It offers some technical challenges that will be described below.
The 2CH system can suitably be rewritten as \begin{subequations}
\label{eq:rewchsys10}
\begin{align}
\label{eq:rewchsys11}
u_t+uu_x+P_x&=0,\\
\label{eq:rewchsys12}
\rho_t+(u\rho)_x&=0,
\end{align} \end{subequations} where $P$ is implicitly defined by \begin{equation}
\label{eq:rewchsys13}
P-P_{xx}=u^2+\kappa u+\frac12u_x^2+\eta\frac12\rho^2. \end{equation}
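(For orientation: on the full line, \eqref{eq:rewchsys13} can be inverted explicitly as the convolution $P=\frac12 e^{-\abs{\,\cdot\,}}*\bigl(u^2+\kappa u+\frac12u_x^2+\frac\eta2\rho^2\bigr)$, since $\frac12e^{-\abs{x}}$ is the Green's function of $1-\partial_x^2$; in the periodic case the kernel is the corresponding $\cosh$-type kernel, cf.\ the Lagrangian expression \eqref{eq:Psimp1} below. This convolution structure is what makes $P$, and hence the evolution, nonlocal.)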
The reason for the intense study of the CH equation is its surprisingly rich structure. In the context of the present paper, the focus is on the wellposedness of global weak solutions of the Cauchy problem. There is an intrinsic dichotomy in the solution that appears after wave breaking, namely between solutions characterized either by conservation or dissipation of the associated energy. The two classes of solutions are for obvious reasons denoted conservative and dissipative, respectively. The fundamental nature of the problem can be understood by the following pregnant example, for simplicity presented here on the full line, rather than the periodic case. The CH equation with $\kappa=0$ has as special solutions so-called multipeakons given by \begin{equation*}
u(t,x)=\sum_{i=1}^n p_i(t)e^{-\abs{x-q_i(t)}}, \end{equation*} where the $(p_i(t), q_i(t))$ satisfy the explicit system of ordinary differential equations \begin{equation*}
\dot q_i=\sum_{j=1}^{n} p_je^{-\abs{q_i-q_j}},\quad
\dot p_i=\sum_{j=1}^{n} p_ip_j\sgn(q_i-q_j)e^{-\abs{q_i-q_j}}. \end{equation*} In the special case of $n=2$ and $p_1=-p_2$ and $q_1=-q_2<0$ at $t=0$, the solution consists of two ``peaks'', denoted peakons, that approach each other. At time $t=0$ the two peakons annihilate each other, an example of wave breaking, and the solution satisfies $u=u_x=0$ pointwise at that time. For positive time two possibilities exist; one is to let the solution remain equal to zero (the dissipative solution), the other is to let the two peakons reemerge (the conservative solution). A more careful analysis reveals that the $H^1(\mathbb{R})$ norm of $u$ remains finite, while $u_x$ becomes singular, at $t=0$, and there is an accumulation of energy in the form of a Dirac delta-function at the point of annihilation. The consequences for the wellposedness of the Cauchy problem are severe. The continuation of the solution past wave breaking has been studied, see \cite{BreCons:07, BreCons:09,HolRay:07,HolRay:09}. The method to handle the dichotomy is by reformulating the equation in Lagrangian variables, and analyzing carefully the behavior in those variables. We will detail this construction later in the introduction.
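To make the annihilation mechanism in this example explicit, note that with the antisymmetric ansatz $p_1=-p_2=p$ and $q_1=-q_2=-q$, $p,q>0$, the peakon equations reduce to
\begin{equation*}
\dot q=-p\,(1-e^{-2q}),\qquad \dot p=p^2e^{-2q},
\end{equation*}
and a direct computation shows that $p^2(1-e^{-2q})$ is constant in time, while one checks that $\norm{u(t,\,\cdot\,)}_{H^1(\mathbb{R})}^2=4p^2(1-e^{-2q})$ for this configuration. Thus $p\to\infty$ as the peaks collide ($q\to0$), even though the $H^1(\mathbb{R})$ norm stays bounded, in line with the description above.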
The 2CH system has, in spite of its brief history, been studied extensively, and it is not possible to include a complete list of references here. However, we mention \cite{WangHuangChen,GHR2}, where a similar approach to the present one, has been employed. The case with $\eta=-1$ has been discussed in \cite{eschlechyin:07}; our approach does not extend to the case of $\eta$ negative. In \cite{GuanYin2010a} it is shown that if the initial density $\rho_0>0$, then the solution exists globally and this result is extended here to a local result, Theorem \ref{th:presreg}, where we show how the characteristics govern the local smoothness. For other related results pertaining to the present system, please see \cite{GuanYin2010a,GuiLiu2011,GuiLiu2010}. There exists other two-component generalizations of the CH equation than the one studied here; see, e.g., \cite{ChenLiu2010, FuQu:09,GuanYin2010,GuoZhou2010,Kuzmin}.
We now turn to the discussion of the present paper. For simplicity we assume that $\eta=1$ and $\kappa=0$. We first make a change from Eulerian to Lagrangian variables and introduce a new energy variable. The change of variables, which we now will detail, is related to the one used in \cite{HolRay:07} and, in particular, \cite{GHR:12}. Assume that $(u,\rho)=(u(x,t),\rho(x,t))$ is a solution of \eqref{eq:chkappa00}, and define the characteristics $y=y(t,\xi)$ by \begin{equation*}
y_t(t,\xi)=u(t,y(t,\xi)) \end{equation*} and the Lagrangian velocity by \begin{equation*}
U(t,\xi)=u(t,y(t,\xi)). \end{equation*} By introducing the Lagrangian energy density $\nu$ and density $r$ by \begin{equation*}
\begin{aligned}
\nu(t,\xi)&=u^2(t,y(t,\xi))y_\xi(t,\xi)+u_x^2(t,y(t,\xi))y_\xi(t,\xi)+\rho^2(t,y(t,\xi))y_\xi(t,\xi), \\
r(t,\xi)&=\rho(t,y(t,\xi))y_\xi(t,\xi),
\end{aligned} \end{equation*} we find that the system can be rewritten as (introducing $\zeta(t,\xi)=y(t,\xi)-\xi$ for technical reasons) \begin{subequations}\label{sys:persys0}
\begin{align}
\zeta_t&=U,\\
U_t&=-Q,\\
\nu_t&=-2QUy_\xi+(3U^2-2P)U_\xi,\\
r_t&=0,
\end{align} \end{subequations} where the functions $P$ and $Q$ are explicitly given by \eqref{eq:Psimp1} and \eqref{eq:Qsimp1}, respectively. We then establish the existence of a unique global solution for this system (see Theorem~\ref{th:global}), and we show that the solutions form a continuous semigroup in an appropriate norm. In order to solve the Cauchy problem \eqref{eq:rewchsys10} we have to choose the initial data appropriately. To accommodate for the possible concentration of energy we augment the natural initial data $u_0$ and $\rho_0$ with a nonnegative Radon measure $\mu_0$ such that the absolutely continuous part $\mu_{0, \text{\rm ac}}$ equals $\mu_{0, \text{\rm ac}}=(u_0^2 +u_{0,x}^2+\rho_0^2)\,dx$. The precise translation of these initial data is given in Theorem \ref{defL}. One then solves the system in Lagrangian coordinates. The translation back to Eulerian variables is described in Definition~\ref{def:FtoE}. However, there is an intrinsic problem in this latter translation if one wants a continuous semigroup. This is due to the problem of \textit{relabeling}; to each solution in Eulerian variables there exist several distinct solutions in Lagrangian variables as there are additional degrees of freedom in the Lagrangian variables. In order to resolve this issue to get a continuous semigroup, one has to identify Lagrangian functions corresponding to one and the same Eulerian solution. This is treated in Theorem~\ref{thm:LMPI}. The main existence theorem, Theorem~\ref{th:main2}, states that for $u_0\in H^1_{\rm per}$ and $\rho_0\in L^2_{\rm per}$ and $\mu_0$ a nonnegative Radon measure with absolutely continuous part $\mu_{0, \text{\rm ac}}$ such that $\mu_{0, \text{\rm ac}}=(u_0^2+u_{0,x}^2+\rho_0^2)\,dx$, there exists a continuous semigroup $T_t$ such that $(u,\rho)$, where $(u,\rho,\mu)(t)=T_t(u_0,\rho_0,\mu_0)$, is a weak global and conservative solution of the 2CH system. In addition, the measure $\mu$ satisfies \begin{equation*}
(\mu)_t+(u\mu)_x=(u^3-2Pu)_x, \end{equation*} weakly. Furthermore, for almost all times the measure $\mu$ is absolutely continuous and $\mu=(u^2+u_{x}^2+\rho^2)\,dx$.
The solution so constructed is not Lipschitz continuous in any of the natural norms, say $H^1$ or $L^p$. Thus it is an intricate problem to identify a metric that renders the solution Lipschitz continuous, see \cite{GHR,GHRb:10}. For a discussion of Lipschitz metrics in the setting of the Hunter--Saxton equation and relevant examples from ordinary differential equations, see \cite{BHR}. The metric we construct here has to distinguish between conservative and dissipative solutions, and it is closely connected with the construction of the semigroup in Lagrangian variables. We commence by defining a metric in Lagrangian coordinates. To that end, let
\label{eq:defJ0}
  J(X_\alpha,X_\beta)=\inf_{f,g\in G}\norm{X_\alpha\bullet f-X_\beta\bullet g}_E.
\label{eq:defdist0}
  d(X_\alpha,X_\beta)=\inf \sum_{n=1}^N J(X_{n-1},X_n), \quad X_\alpha,X_\beta\in\ensuremath{\mathcal{F}}, \end{equation} where the infimum is taken over all finite sequences $\{X_n\}_{n=0}^N\in\ensuremath{\mathcal{F}}$ satisfying $X_0=X_\alpha$ and $X_N=X_\beta$. This will be proved to be a Lipschitz metric in Lagrangian variables. Next the metric is transformed into Eulerian variables, and Theorem~\ref{th:main} identifies a metric, denoted $d_{\ensuremath{\mathcal{D}}^M}$, such that the solution is Lipschitz continuous.
Due to the non-local nature of $P$ in \eqref{eq:rewchsys13}, see \eqref{eq:nonlocal}, information travels with infinite speed. Yet, we show in Theorem \ref{th:presreg} that regularity is a local property in the following precise sense. A solution is said to be $p$-regular, with $p\geq 1$ if \begin{equation*}
u_0\in W^{p,\infty}(x_0,x_1), \quad \rho_0\in W^{p-1,\infty}(x_0,x_1), \quad \text{and } \mu_0=\mu_{0,ac} \text{ on } (x_0,x_1), \end{equation*} and that \begin{equation*}
\rho_0(x)^2\geq c>0 \end{equation*} for $x\in(x_0,x_1)$. If the initial data $(u_0,\rho_0,\mu_0)$ is $p$-regular, then the solution $(u,\rho,\mu)(t,\,\cdot\,)$, for $t\in \mathbb{R}_+=[0,\infty)$, remains $p$-regular on the interval $(y(t,\xi_0),y(t,\xi_1))$, where $\xi_0$ and $\xi_1$ satisfy $y(0,\xi_0)=x_0$ and $y(0,\xi_1)=x_1$ and are defined as \begin{equation*}
\text{$\xi_0=\sup\{\xi\in\mathbb{R}\ |\ y(0,\xi)\leq x_0\}$ and
$\xi_1=\inf\{\xi\in\mathbb{R}\ |\ y(0,\xi)\geq x_1\}$.} \end{equation*}
It is interesting to consider how the standard CH equation is obtained when the density $\rho$ vanishes since the CH equation formally is obtained when $\rho$ is identically zero in the 2CH system. In order to analyze the behavior of the solution, we need to have a sufficiently strong stability result. Consider a sequence of initial data $(u_0^n,\rho_0^n, \mu_0^n)$ such that $u_0^n\to u_0$ in $H^1_{per}$, $\rho_0^n\to 0$ in $L^2_{per}$ with $\rho_0^n\ge d_n>0$ for all $n$. Assume that the initial measure is absolutely continuous, that is, $\mu_0^n=\mu_{0, \text{\rm ac}}^n=((u_{0,x}^n)^2+(\rho_0^n)^2)\,dx$. Then we show in Theorem \ref{th:approxCH} that the sequence $u^n(t)$ converges in $L^\infty_{per}$ to the weak, conservative global solution of the Camassa--Holm equation with initial data $u_0$. To illustrate this result we have plotted, in Figure~\ref{fig:coll}, a peakon anti-peakon solution $u$ of the Camassa--Holm equation (that is, with $\rho$ identically zero) which enjoys wave breaking. In addition, we have plotted the corresponding energy function $u^2+u_x^2$. A closer analysis reveals that at the time $t=t_c$ of wave breaking, all the energy is concentrated at one point, which can be described as (a multiple of) a Dirac delta function. In contrast to that, Figure~\ref{fig:col2} shows that if we choose as initial condition the same peakon anti-peakon function $u_0$ together with $\rho_0(x)=0.5$, then no wave breaking takes place, but at the time $t_c$ where $u(t_c,x)\approx 0$, a considerable part of the energy is transferred from $u^2+u_x^2$ to $\rho^2$, while the total energy $\int_0^1 (u^2+u_x^2+\rho^2)dx$ remains constant.
\begin{figure}
\caption{Plot of a peakon anti-peakon solution $u$ of the CH equation
with $\rho$ identically zero at all times (the
thinner the curve is, the later time it
represents). In this case, we obtain a
conservative solution of the scalar
Camassa--Holm equation. We observe that the
total energy $u^2+u_x^2$ converges to a multiple of a
Dirac delta function at $t=t_c$.}
\label{fig:coll}
\end{figure}
\begin{figure}
\caption{Here we employ the same initial condition as in Figure~\ref{fig:coll} for $u$ while
$\rho_0(x)=0.5$. The total energy
$\int_0^1(u^2+u_x^2+\rho^2)\,dx$ is
preserved. We observe first a concentration of
the part of the energy given by $u^2+u_x^2$. However, as we get closer to $t_c$, there is a transfer of energy from $u^2+u_x^2$ to $\rho^2$.}
\label{fig:col2}
\end{figure} \section{Eulerian and Lagrangian variables}
The two-component Camassa--Holm (2CH) system with $\kappa\in\mathbb{R}$ and $\eta\in(0,\infty)$ reads \begin{subequations}\label{eq:chkappa}
\begin{align}
u_t-u_{txx}+\kappa u_x+3uu_x-2u_xu_{xx}-uu_{xxx}+\eta\rho\rho_x&=0,\\
\rho_t+(u\rho)_x&=0.
\end{align} \end{subequations} If $(u,\rho)$ is a solution of \eqref{eq:chkappa} then the pair $(v,\tau)$, given by $v(t,x)=u(t,x-\alpha t)+\alpha$ and $\tau(t,x)=\sqrt{\beta}\rho(t,x)$, is a solutions to the 2CH system with $\kappa$ and $\eta$ replaced by $\kappa-2\alpha$ and $\frac{\eta}{\beta}$, respectively. Thus we can assume without loss of generality that $\kappa=0$ and $\eta=1$. Moreover, we will only consider the Cauchy problem for initial data, and hence also of solutions, of period unity, i.e., $u(t,x+1)=u(t,x)$ and $\rho(t,x+1)=\rho(t,x)$. All our results carry over with only slight modifications to the case of a general period.
To any pair $(u_0,\rho_0)$ in $H^1_{\rm per}\times L^2_{\rm per}$ we can introduce the corresponding Lagrangian coordinates $(y(0,\xi),U(0,\xi), \nu(0,\xi), r(0,\xi))$ and describe their time evolution using the weak formulation of the 2CH system. Namely, the characteristics $y(t,\xi)$ are defined as solutions of $$ y_t(t,\xi)=u(t,y(t,\xi)) $$ for a given $y(0,\xi)$ such that $y(0,\xi+1)=y(0,\xi)+1$. The Lagrangian velocity $U(t,\xi)$ is defined as $$ U(t,\xi)=u(t,y(t,\xi)). $$ The Lagrangian energy density reads $$ \nu(t,\xi)=(u^2+u_x^2+\rho^2)(t,y(t,\xi))y_\xi(t,\xi) $$ together with the energy $h(t)=\int_0^1 \nu(t,\xi)d\xi$, and, finally, $$ r(t,\xi)=\rho(t,y(t,\xi))y_\xi(t,\xi) $$ is the Lagrangian density.
Rewriting the 2CH system as \begin{subequations}
\begin{align}
u_t+uu_x+P_x&=0,\\
\rho_t+(u\rho)_x&=0,
\end{align} \end{subequations} where $P=P(t,x)$ implicitly is given as the solution of $P-P_{xx}=u^2+\frac12 u_x^2+\frac12 \rho^2$, enables us to derive how $(y,U,\nu,r)$ change with respect to time. In particular, direct computations yield, after setting $y(t,\xi)=\xi+\zeta(t,\xi)$, that \begin{subequations}\label{sys:persys}
\begin{align}
\zeta_t&=U,\\
U_t&=-Q,\\
\nu_t&=-2QUy_\xi+(3U^2-2P)U_\xi,\\
r_t&=0,
\end{align} \end{subequations} where \begin{align}
\label{eq:Psimp1}
P(t,\xi)&=\frac1{2(e-1)}\int_0^1\cosh(y(t,\xi)-y(t,\eta))(U^2y_\xi+\nu)(t,\eta)\,d\eta\\
&\quad+\frac14\int_0^1\exp\big(-\sgn(\xi-\eta)(y(t,\xi)-y(t,\eta))\big)(U^2y_\xi+\nu)(t,\eta)\,d\eta,\notag \end{align} and \begin{align}
\label{eq:Qsimp1}
Q(t,\xi)&=\frac1{2(e-1)}\int_0^1\sinh(y(t,\xi)-y(t,\eta))(U^2y_\xi+\nu)(t,\eta)\,d\eta \\
\notag &\quad
-\frac14\int_0^1\sgn(\xi-\eta)\exp\big(-\sgn(\xi-\eta)(y(t,\xi)-y(t,\eta))\big)(U^2y_\xi+\nu)(t,\eta)\,d\eta. \end{align} First, we will consider this system of ordinary differential equations in the Banach space $E=W^{1,1}_{\rm per}\times W^{1,1}_{\rm per}\times L^1_{\rm per}\times L^1_{\rm per}$, where \begin{subequations}
\begin{align}
W^{1,1}_{\rm per}&=\{f\in W^{1,1}_{\rm loc}(\mathbb{R})\mid f(\xi+1)=f(\xi) \text{ for all }\xi\in\mathbb{R}\},\\
L^1_{\rm per}&=\{f\in L^1_{\rm loc}(\mathbb{R})\mid f(\xi+1)=f(\xi) \text{ for all }\xi\in\mathbb{R}\},
\end{align} \end{subequations} and the corresponding norms are given by \begin{equation*}
\begin{aligned}
\norm{f}_{W^{1,1}_{\rm per}}&=\norm{f}_{L^\infty([0,1])}+\norm{f_\xi}_{L^1([0,1])},\text{ and }\norm{f}_{L^1_{\rm per}}=\norm{f}_{L^1([0,1])},\\
\norm{(y,U,\nu,r)}_{E}&=\norm{y-\id}_{W^{1,1}_{\rm per}}+\norm{U}_{W^{1,1}_{\rm per}}+\norm{\nu}_{L^1_{\rm per}}+\norm{r}_{L^1_{\rm per}}.
  \end{aligned} \end{equation*} The existence and uniqueness of short-time solutions of \eqref{sys:persys} will follow from a contraction argument once we can show that the right-hand side of \eqref{sys:persys} is Lipschitz continuous on bounded sets. Note that this is the case if and only if the same holds for $P$ and $Q$. The latter statement has been proved in \cite[Lemma 2.1]{GHR}, and we state the result here for completeness.
\begin{lemma}
\label{lem:PQ}
For any $X=(y,U,\nu,r)$ in $E$, we define the maps
$\mathcal{Q}$ and $\mathcal{P}$ as $\mathcal{Q}(X)=Q$ and $\mathcal{P}(X)=P$ where
$P$ and $Q$ are given by \eqref{eq:Psimp1} and
\eqref{eq:Qsimp1}, respectively. Then, $\mathcal{P}$ and $\mathcal{Q}$
are Lipschitz maps on bounded sets from $E$ to
${W^{1,1}_{\text{\rm per}}}$. More precisely, we have the following
bounds. Let
\begin{equation}
\label{eq:defBM}
B_M=\{X=(y,U,\nu,r)\in E\mid
\norm{U}_{{W^{1,1}_{\text{\rm per}}}}+\norm{y_\xi}_{L^1_{per}}+\norm{\nu}_{L^1_{per}}\leq
M\}.
\end{equation}
Then for any $X,\tilde X\in B_M$, we have
\begin{equation}
\label{eq:lipbdQ}
\norm{\mathcal{Q}(X)-\mathcal{Q}(\tilde X)}_{{W^{1,1}_{\text{\rm per}}}}\leq C_M\norm{X-\tilde X}_{E},
\end{equation}
and
\begin{equation}
\label{eq:lipbdP}
\norm{\mathcal{P}(X)-\mathcal{P}(\tilde X)}_{{W^{1,1}_{\text{\rm per}}}}\leq C_M\norm{X-\tilde X}_{E},
\end{equation}
where the constant $C_M$ only depends on the value
of $M$. \end{lemma}
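For readers who wish to experiment with \eqref{eq:Psimp1} and \eqref{eq:Qsimp1}, the following Python sketch evaluates $P$ and $Q$ on a uniform grid in $\xi$ by a plain Riemann sum. It is only an illustration of the formulas: the helper name, the grid resolution and the toy data $(y,U,\nu)$ (chosen with $\rho=0$ and $y(\xi)=\xi$, so that \eqref{eq:lagcoord3} holds with $r=0$) are our own choices and are not part of the analysis above.
\begin{verbatim}
import numpy as np

def evaluate_P_Q(y, U, nu, dxi):
    # Riemann-sum approximation of the formulas for P and Q above: the
    # integrals over eta in [0,1] become matrix-vector products.
    y_xi = np.gradient(y, dxi)                  # crude approximation of y_xi
    w = (U**2 * y_xi + nu) * dxi                # (U^2 y_xi + nu)(eta) d(eta)
    dy = y[:, None] - y[None, :]                # y(xi) - y(eta) for all pairs
    sgn = np.sign(np.arange(len(y))[:, None] - np.arange(len(y))[None, :])
    P = (np.cosh(dy) @ w) / (2*(np.e - 1)) + (np.exp(-sgn*dy) @ w) / 4
    Q = (np.sinh(dy) @ w) / (2*(np.e - 1)) - ((sgn*np.exp(-sgn*dy)) @ w) / 4
    return P, Q

# Toy data on one period: y(xi) = xi, U = sin(2 pi xi), nu = U^2 + U_xi^2.
N  = 256
xi = np.arange(N) / N
U  = np.sin(2*np.pi*xi)
nu = U**2 + (2*np.pi*np.cos(2*np.pi*xi))**2
P, Q = evaluate_P_Q(xi, U, nu, 1.0/N)
\end{verbatim}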
To establish the global existence of solutions, we have to impose more conditions on our initial data and solutions in Lagrangian coordinates. \begin{definition}
\label{def:F}
The set $\ensuremath{\mathcal{F}}$ is composed of all $(y,U,\nu,r)\in E$
such that
\begin{subequations}
\label{eq:lagcoord}
\begin{align}
\label{eq:lagcoord1}
&\ (y,U)\in
W^{1,\infty}_{\rm loc}(\mathbb{R})\times W^{1,\infty}_{\rm loc}(\mathbb{R}) ,\ (\nu,r)\in L^\infty(\mathbb{R})\times L^\infty(\mathbb{R}),\\
\label{eq:lagcoord2}
&y_\xi\geq0,\ \nu\geq0,\ y_\xi+\nu\geq c\text{
almost everywhere, for some constant $c>0$},\\
\label{eq:lagcoord3}
&y_\xi \nu=y_\xi^2U^2+U_\xi^2+r^2\text{ almost everywhere}.
\end{align}
\end{subequations} \end{definition}
The set $\ensuremath{\mathcal{F}}$ is preserved with respect to time and plays a special role when proving the global existence of solutions. In particular, for $X(t)\in\ensuremath{\mathcal{F}}$, we have $h(t)=h(0)$ for all $t\in\mathbb{R}$, which implies that $\norm{X(t)}_E$ cannot blow up within a finite time interval. Note that the first three equations in \eqref{sys:persys} are independent of $r$ and coincide with the system considered in \cite{GHR}. Moreover, the last variable $r$ is preserved with respect to time. Hence, by closely following the proofs of \cite[Lemma 2.3, Theorem 2.4]{GHR}, we obtain the global existence of solutions. \begin{theorem}
\label{th:global}
For any $\bar X=(\bar y,\bar U,\bar \nu,\bar r)\in\ensuremath{\mathcal{F}}$,
the system \eqref{sys:persys} admits a unique global
solution $X(t)=(y(t),U(t),\nu(t),r(t))$ in
$C^1(\mathbb{R}_+,E)$ with initial data $\bar X=(\bar
y,\bar U,\bar\nu,\bar r)$. We have $X(t)\in\ensuremath{\mathcal{F}}$ for all
times. Let the mapping
$S\colon\ensuremath{\mathcal{F}}\times\mathbb{R}_+\to\ensuremath{\mathcal{F}}$ be defined as
\begin{equation*}
S_t(X)=X(t).
\end{equation*}
Given $M>0$ and $T>0$, we define $B_M$ as before,
that is,
\begin{equation}
\label{eq:defBM2}
B_M=\{X=(y,U,\nu,r)\in E\mid
\norm{U}_{{W^{1,1}_{\text{\rm per}}}}+\norm{y_\xi}_{L^1_{per}}+\norm{\nu}_{L^1_{per}}\leq
M\}.
\end{equation}
Then there exists a constant $C_M$ which depends
only on $M$ and $T$ such that, for any two
elements $X_\alpha$ and $X_\beta$ in $B_M$, we
have
\begin{equation}
\label{eq:stabSt}
\norm{S_tX_\alpha-S_tX_\beta}_E\leq C_M\norm{X_\alpha-X_\beta}_E
\end{equation}
for any $t\in[0,T]$. \end{theorem}
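After discretizing $\xi$, \eqref{sys:persys} becomes a finite-dimensional ODE system, and the time evolution $S_t$ can be approximated by any standard integrator. The following minimal sketch performs a single explicit Euler step; it assumes that $P$ and $Q$ on the grid are supplied, for instance by the quadrature sketch above, and is meant only to illustrate the structure of \eqref{sys:persys}.
\begin{verbatim}
import numpy as np

def euler_step(zeta, U, nu, r, P, Q, dxi, dt):
    # One explicit Euler step for the semidiscretized system: y = xi + zeta,
    # zeta_t = U, U_t = -Q, nu_t = -2*Q*U*y_xi + (3*U^2 - 2*P)*U_xi, r_t = 0.
    y_xi = 1.0 + np.gradient(zeta, dxi)   # one-sided differences at the period ends
    U_xi = np.gradient(U, dxi)
    return (zeta + dt*U,
            U - dt*Q,
            nu + dt*(-2*Q*U*y_xi + (3*U**2 - 2*P)*U_xi),
            r)                            # r is constant in time
\end{verbatim}
Since the exact flow preserves $h(t)=\int_0^1\nu(t,\xi)\,d\xi$ for data in $\ensuremath{\mathcal{F}}$, monitoring this quantity is a convenient consistency check for such a discretization.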
So far we have proved that there exist global, unique solutions to the 2CH system in Lagrangian coordinates. However, we still have to show that the assumptions are sufficiently general to accommodate rather general initial data in Eulerian coordinates. In particular, we must admit initial data (in Eulerian coordinates) that consists not only of the functions $u_0$ and $\rho_0$ but also of a positive, periodic Radon measure. This is necessary because, when wave breaking occurs, energy is concentrated on sets of measure zero. More precisely, we define the set of Eulerian coordinates as follows.
\begin{definition} \label{def:D}
The set $\ensuremath{\mathcal{D}}$ of possible initial data consists of all triplets $(u,\rho,\mu)$ such that $u\in H^1_{\rm per}$, $\rho\in L^2_{\rm per}$, and $\mu$ is a positive, periodic Radon measure whose absolutely continuous part, $\mu_{\rm ac}$, satisfies
\begin{equation}
\mu_{\rm ac}=(u^2+u_x^2+\rho^2)dx.
\end{equation} \end{definition}
Having identified our set of Eulerian coordinates, we can map it to the corresponding set of Lagrangian coordinates using the mapping $\tilde L$.
\begin{definition}\label{defL}
For any $(u,\rho,\mu)$ in $\ensuremath{\mathcal{D}}$, let
\begin{equation}\label{def:EtoF}
\begin{aligned}
h& = \mu([0,1)), \\
y(\xi)& = \sup \{y \mid F_\mu(y)+y<(1+h)\xi\},\\
\nu(\xi)& = (1+h)-y_\xi(\xi),\\
U(\xi)& =u\circ y(\xi), \\
r(\xi)& =\rho\circ y(\xi)y_\xi(\xi)
\end{aligned}
\end{equation}
where
\begin{equation}
F_\mu(x)=\left\{
\begin{aligned}
\mu([\,0,x)) \quad & \text{ if } x>0,\\
0 \quad & \text{ if }x=0,\\
-\mu([\,x,0)) \quad & \text{ if } x<0.
\end{aligned}
\right .
\end{equation}
Then $(y,U,\nu,r)\in \ensuremath{\mathcal{F}}$. We define
$\tilde L(u,\mu)=(y,U,\nu,r)$. The functions $P$ and $Q$ are given by \eqref{eq:Psimp1} and \eqref{eq:Qsimp1}, respectively. \end{definition}
The fact that this definition is well posed follows, after some slight modifications, as in \cite{GHR}.
However, notice that we have three Eulerian coordinates in contrast to four Lagrangian coordinates, and hence there can at best be a one-to-one correspondence between triplets in Eulerian coordinates and \textit{equivalence classes} in Lagrangian coordinates. When defining equivalence classes, relabeling functions will play a key role, and we will see why we had to impose \eqref{eq:lagcoord2} in the definition of $\ensuremath{\mathcal{F}}$. Therefore we will now focus on the set $G$ of relabeling functions.
\begin{definition}\label{def:group}
\label{def:G} Let $G$ be the set of all
functions $f$ such that $f$ is invertible,
\begin{align}
\label{eq:Gcond1}
&f\in W_{\rm loc}^{1,\infty}(\mathbb{R}),\
f(\xi+1)=f(\xi)+1 \text{ for all }\xi\in\mathbb{R}\text{, and }\\
\label{eq:Gcond2}
&f-\id\text{ and }f^{-1}-\id\text{ both belong to }{W^{1,\infty}_{\text{\rm per}}}.
\end{align} \end{definition} One of the main reasons for the choice of $G$ is that any $f\in G$ satisfies \begin{equation*}
\frac{1}{1+\alpha}\leq f_\xi\leq 1+\alpha, \end{equation*} for some constant $\alpha>0$ according to \cite[Lemma 3.2]{HR}. This allows us, following the same lines as in \cite[Definition 3.2, Proposition 3.3]{GHR} to define a group action of $G$ on $\ensuremath{\mathcal{F}}$. \begin{definition}
We define the map $\Phi\colon G\times\ensuremath{\mathcal{F}}\to\ensuremath{\mathcal{F}}$
as follows
\begin{equation*}
\left\{
\begin{aligned}
\bar y&=y\circ f,\\
\bar U&=U\circ f,\\
\bar\nu&=\nu\circ f f_\xi,\\
\bar r& =r\circ ff_\xi,
\end{aligned}
\right.
\end{equation*}
where $(\bar y,\bar U,\bar
\nu,\bar r)=\Phi(f,(y,U,\nu,r))$. We denote $(\bar y,\bar
U,\bar\nu,\bar r)=(y,U,\nu,r)\bullet f$. \end{definition}
Using $\Phi$, we can identify a subset of $\ensuremath{\mathcal{F}}$ which contains one element of each equivalence class. We introduce $\ensuremath{\mathcal{F}}_0\subset \ensuremath{\mathcal{F}}$, \begin{equation*}
\ensuremath{\mathcal{F}}_0=\{ X=(y,U,\nu,r)\in\ensuremath{\mathcal{F}}\mid y_\xi+\nu=1+h\}, \end{equation*} where $h=\norm{\nu}_{L^1_{per}}$. In addition, let $\mathcal{H}\subset \ensuremath{\mathcal{F}}_0$ be defined as follows \begin{equation*}
\mathcal{H}=\{(y,U,\nu,r)\in\ensuremath{\mathcal{F}}_0\mid \int_0^1 y(\xi)d\xi=0\}. \end{equation*} We can then associate to any element $X\in\ensuremath{\mathcal{F}}$ a unique element $\bar X\in \mathcal{H}$. This means there is a bijection between $\mathcal{H}$ and $\ensuremath{\mathcal{F}}/G$. Indeed, let $\Pi_1\colon\ensuremath{\mathcal{F}}\to\ensuremath{\mathcal{F}}_0$ be given by \begin{equation*}
\Pi_1(X)=X\bullet f^{-1}, \end{equation*} with $f(\xi)=\frac{1}{1+h}(y(\xi)+\int_0^\xi\nu(\eta)d\eta)\in G$ for $X=(y,U,\nu,r)\in \ensuremath{\mathcal{F}}$ due to \eqref{eq:lagcoord2} and $\Pi_2\colon\ensuremath{\mathcal{F}}_0\to\mathcal{H}$ by \begin{equation*}
\Pi_2(X)(\xi)=X(\xi-a), \end{equation*} where $a=\int_0^1 y(\xi)d\xi$ for $X=(y,U,\nu,r)$. Then \begin{equation*}
\Pi=\Pi_2\circ \Pi_1 \end{equation*} is a projection from $\ensuremath{\mathcal{F}}$ to $\mathcal{H}$, and, since $\Pi(X)$ is unique, $\ensuremath{\mathcal{F}}/G$ and $\mathcal{H}$ are in bijection. We can now redefine our mapping from Eulerian to Lagrangian coordinates such that any triplet $(u,\rho,\mu)\in \ensuremath{\mathcal{D}}$ is mapped to the corresponding element $(y,U,\nu,r)\in \mathcal{H}$ by applying $\tilde L$ followed by $\Pi$. \begin{theorem}
For any $(u,\rho,\mu)\in \ensuremath{\mathcal{D}}$ let $X=(y,U,\nu,r)\in \mathcal{H}$ be given by $X=L(u,\rho,\mu)=\Pi\circ \tilde L(u,\rho,\mu)$. Then $L\colon\ensuremath{\mathcal{D}}\to \mathcal{H}$. \end{theorem}
Furthermore, since $r_t=0$, \cite[Lemma 3.5]{GHR} implies directly that $S_t$ is equivariant, i.e., \begin{equation}
S_t(X\bullet f)=S_t(X)\bullet f, \end{equation} for $X\in \ensuremath{\mathcal{F}}$ and $f\in G$. In particular, we can define the semigroup $\bar S_t$ on $\mathcal{H}$ as \begin{equation}\label{timeevol}
\bar S_t=\Pi\circ S_t. \end{equation}
The final step is to go back from Lagrangian to Eulerian coordinates; the result below is a generalization of \cite[Theorem 3.11]{HR} and the adaptation of \cite[Theorem 4.10]{GHR} to the periodic case. \begin{theorem}\label{def:FtoE}
Let $X\in\ensuremath{\mathcal{F}}$, then the periodic measure\footnote{The push-forward of a measure $\nu$ by a measurable function $f$ is the measure $f_\#\nu$ defined as $f_\#\nu(B)=\nu(f^{-1}(B))$ for any Borel set $B$.} $y_\#(r d\xi)$ is absolutely continuous and $(u,\rho,\mu)$ given by
\begin{equation}
\label{eq:defL}
\begin{aligned}
u(x)=U(\xi) & \text{ for any } \xi \text{ such that } x=y(\xi),\\
\mu& =y_\# (\nu d\xi),\\
\rho(x) dx& =y_\# (r d\xi),
\end{aligned}
\end{equation}
belongs to $\ensuremath{\mathcal{D}}$. We denote by $M$ the mapping
from $\ensuremath{\mathcal{F}}$ to $\ensuremath{\mathcal{D}}$ which to any $X\in\ensuremath{\mathcal{F}}$
associates the element $(u,\rho,\mu)\in\ensuremath{\mathcal{D}}$ given by
\eqref{eq:defL}. \end{theorem} Note that the mapping $M$ is independent of the representative in every equivalence class we choose, i.e., \begin{equation}
\label{eq:propM}
M=M\circ \Pi. \end{equation}
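In the same spirit, the map $M$ of Theorem~\ref{def:FtoE} can be sketched numerically, at least away from points of concentration: where $y_\xi>0$, the push-forward measures $y_\#(\nu\,d\xi)$ and $y_\#(r\,d\xi)$ have densities $(\nu/y_\xi)\circ y^{-1}$ and $(r/y_\xi)\circ y^{-1}$ with respect to $dx$. The following illustration uses our own discretization, recovers only the absolutely continuous parts, and assumes $y$ is strictly increasing on the grid.
\begin{verbatim}
import numpy as np

def lagrangian_to_eulerian(y, U, nu, r, x_grid):
    # u(x) = U(xi) for x = y(xi); densities of the push-forward measures
    # where y_xi > 0 (concentrated parts of mu are not captured here).
    dxi = 1.0 / len(y)
    y_xi = np.gradient(y, dxi)
    u = np.interp(x_grid, y, U)
    mask = y_xi > 1e-8
    mu_density = np.interp(x_grid, y[mask], nu[mask] / y_xi[mask])
    rho_x      = np.interp(x_grid, y[mask], r[mask]  / y_xi[mask])
    return u, mu_density, rho_x
\end{verbatim}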
In order to be able to go back and forth between Eulerian and Lagrangian coordinates at any time, it remains to clarify the relation between $L$ and $M$.
\begin{theorem}\label{thm:LMPI}
The maps $L:\ensuremath{\mathcal{D}}\to \mathcal{H}$ and $M:\mathcal{H}\to\ensuremath{\mathcal{D}}$ are invertible. We have
\begin{equation}
\label{eq:inversma}
L\circ M=\Pi, \quad \text{ and } \quad M\circ L=\id,
\end{equation}
where the mapping $\Pi: \ensuremath{\mathcal{F}}\to \mathcal{H}$ is a projection which associates to any element $X\in \ensuremath{\mathcal{F}}$ a unique element $\tilde X\in \mathcal{H}$, which means, in particular, that $\ensuremath{\mathcal{F}}/G$ and $\mathcal{H}$ are in bijection. \end{theorem}
The proof follows the same lines as \cite[Theorem 3.12]{HR}, and we therefore do not present it here. We will see later that the last theorem together with \eqref{timeevol} allows us to define a semigroup of solutions. To obtain a continuous semigroup we have to study the stability of solutions in Lagrangian coordinates, which is the aim of the next section.
\section{Lipschitz metric}
We will now construct a Lipschitz metric in Lagrangian coordinates which will be invariant under relabeling. It will be quite similar to the one in \cite{GHR} due to the fact that the first three equations in \eqref{sys:persys} are independent of $r$ and coincide with the system considered in \cite{GHR} and because $r(t)=r(0)$ for all $t\in\mathbb{R}$.
Let $X_\alpha$, $X_\beta\in\ensuremath{\mathcal{F}}$. We introduce the function $J(X_\alpha, X_\beta)$ by \begin{equation}
\label{eq:defJ}
J(X_\alpha,X_\beta)=\inf_{f,g\in G}\norm{X_\alpha\bullet f-X_\beta\bullet g}_E, \end{equation} which is invariant with respect to relabeling. That means, for any $X_\alpha,X_\beta\in\ensuremath{\mathcal{F}}$ and $f,g\in G$, we have \begin{equation}
\label{eq:invralJ}
J(X_\alpha\bullet f,X_\beta\bullet g)=J(X_\alpha,X_\beta). \end{equation}
Note that the mapping $J$ does not define a metric, since it does not satisfy the triangle inequality, which is the reason why we introduce the following mapping $d$.
Let $d(X_\alpha,X_\beta)$ be defined by \begin{equation}
\label{eq:defdist}
d(X_\alpha,X_\beta)=\inf \sum_{n=1}^NJ(X_{n-1},X_n), \quad X_\alpha,X_\beta\in\ensuremath{\mathcal{F}}, \end{equation} where the infimum is taken over all finite sequences $\{X_n\}_{n=0}^N\in\ensuremath{\mathcal{F}}$ satisfying $X_0=X_\alpha$ and $X_N=X_\beta$. In particular, $d$ is relabeling invariant, that is, for any $X_\alpha,X_\beta\in\ensuremath{\mathcal{F}}$ and $f,g\in G$, we have \begin{equation}
\label{eq:invralJ_d}
d(X_\alpha\bullet f,X_\beta\bullet g)=d(X_\alpha,X_\beta). \end{equation}
In order to prove that $d$ is a Lipschitz metric on bounded sets, we have to choose one element in each equivalence class, and we will apply \eqref{eq:stabSt}. One problem we face in this context is that the constant on the right-hand side of \eqref{eq:stabSt} depends on the set $B_M$ we choose, and $B_M$, while invariant with respect to relabeling, is not preserved by the time evolution. Hence we will try to find a suitable set which is invariant with respect to both time and relabeling and is in some sense equivalent to $B_M$. To that end we define the subsets of bounded energy $\ensuremath{\mathcal{F}}^M$ of $\ensuremath{\mathcal{F}}$ by \begin{equation*}
\ensuremath{\mathcal{F}}^M=\{X=(y,U,\nu,r)\in \ensuremath{\mathcal{F}}\mid h=\norm{\nu}_{L^1_{per}}\leq M\} \end{equation*} and let $\mathcal{H}^M=\mathcal{H}\cap\ensuremath{\mathcal{F}}^M$. The important property of the set $\ensuremath{\mathcal{F}}^M$ is that it is preserved both by the flow and relabeling. In particular, we have that \begin{equation}
\label{eq:incFMBM}
B_{M}\cap\mathcal{H}\subset \mathcal{H}^M \subset B_{\bar M}\cap\mathcal{H} \end{equation} for $\bar M=6(1+M)$ and hence the sets $B_M\cap\mathcal{H}$ and $\mathcal{H}^M$ are in this sense equivalent.
\begin{definition}
Let $d_M$ be the metric on $\mathcal{H}^M$ which is
defined, for any $X_\alpha,X_\beta\in\mathcal{H}^M$, as
\begin{equation}
\label{eq:defdM}
d_M(X_\alpha,X_\beta)=\inf \sum_{n=1}^NJ(X_{n-1},X_n)
\end{equation}
where the infimum is taken over all finite
sequences $\{X_n\}_{n=0}^N\in\mathcal{H}^M$ which
satisfy $X_0=X_\alpha$ and $X_N=X_\beta$. \end{definition}
By definition $d_M$ is relabeling invariant and the triangle inequality is satisfied. In this way we obtain a metric which in addition can be compared with other norms on $\mathcal{H}^M$ (cf.~\cite[Lemma 4.3]{GHR}). \begin{lemma}
The mapping $d_M\colon\mathcal{H}^M\times \mathcal{H}^M\to \mathbb{R}_+$ is a metric on $\mathcal{H}^M$. Moreover,
given $X_\alpha$, $X_\beta\in \mathcal{H}^M$, define $R_\alpha=\int_0^1 r_\alpha(\eta)d\eta$ and $R_\beta=\int_0^1 r_\beta(\eta)d\eta$. Then we have
\begin{equation}
\label{eq:LinfbdJ}
\norm{y_\alpha-y_\beta}_{L^\infty}+\norm{U_\alpha-U_\beta}_{L^\infty}+\abs{h_\alpha-h_\beta}+\abs{R_\alpha-R_\beta}\leq C_M d_M(X_\alpha,X_\beta)
\end{equation}
and
\begin{equation}
\label{eq:dequiv}
d(X_\alpha,X_\beta)\leq\norm{X_\alpha-X_\beta}_E,
\end{equation}
where $C_M$ denotes some fixed constant which depends only on $M$. \end{lemma}
To show that we have obtained not only a relabeling-invariant metric but in fact a Lipschitz metric, we combine all the results obtained so far, as in \cite[Theorem 4.6]{GHR}. This yields the following Lipschitz stability theorem for $\bar S_t$.
\begin{theorem}
\label{th:stab} Given $T>0$ and $M>0$, there
exists a constant $C_M$ which depends only on $M$
and $T$ such that, for any
$X_\alpha,X_\beta\in\mathcal{H}^M$ and $t\in[0,T]$, we
have
\begin{equation}
\label{eq:stab}
d_M(\bar S_tX_\alpha,\bar S_tX_\beta)\leq C_Md_M(X_\alpha,X_\beta).
\end{equation} \end{theorem}
\section{Global weak solutions}
It remains to check that we obtain a global weak solution of the 2CH system by solving \eqref{sys:persys} and using the maps between Eulerian and Lagrangian coordinates. In the case of conservative solutions we have that, for any triplet $(u,\rho,\mu)(t)$ in Eulerian coordinates, the function $P(t,x)$ is given by \begin{equation}\label{Peuler}
\begin{aligned}
P(t,x)&=\frac{1}{2(e-1)}\int_0^1 \cosh(x-z)u^2(t,z)dz+\frac{1}{2(e-1)}\int_0^1 \cosh(x-z)d\mu(t,z)\\
&\quad +\frac14 \int_0^1e^{-\vert x-z\vert}u^2(t,z)dz+\frac14 \int_0^1 e^{-\vert x-z\vert}d\mu(t,z).
\end{aligned} \end{equation} Applying the mapping $L$ maps $P(t,x)$ to $P(t,\xi)$ given by \eqref{eq:Psimp1} and $P_x(t,x)$ to $Q(t,\xi)$ given by \eqref{eq:Qsimp1}. Since the set of times where wave breaking occurs has measure zero, we have $\mu=\mu_{\rm ac}=(u^2+u_x^2+\rho^2)dx$ for almost all times, and hence $P(t,x)$ defined by \eqref{Peuler} coincides, for almost all times and all $x\in\mathbb{R}$, with the solution of $P-P_{xx}=u^2+\frac12 u_x^2+\frac12 \rho^2$.
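As a sanity check of \eqref{Peuler} in the absolutely continuous case $d\mu=(u^2+u_x^2+\rho^2)dx$, one can compare the kernel formula with a Fourier solve of $P-P_{xx}=u^2+\frac12 u_x^2+\frac12\rho^2$ on the periodic domain; the two should agree up to discretization error. A short sketch, with test data and resolution that are arbitrary choices of ours:
\begin{verbatim}
import numpy as np

N   = 512
x   = np.arange(N) / N
u   = np.sin(2*np.pi*x)
rho = 0.2*np.cos(2*np.pi*x)
u_x = 2*np.pi*np.cos(2*np.pi*x)

# Spectral solve of P - P_xx = u^2 + u_x^2/2 + rho^2/2 on the unit circle.
f = u**2 + 0.5*u_x**2 + 0.5*rho**2
k = 2*np.pi*np.fft.fftfreq(N, d=1.0/N)
P_fft = np.real(np.fft.ifft(np.fft.fft(f) / (1.0 + k**2)))

# Kernel formula with d mu = (u^2 + u_x^2 + rho^2) dx, Riemann sums in z.
w = (2*u**2 + u_x**2 + rho**2) / N            # u^2 dz and d mu combined
d = x[:, None] - x[None, :]
P_ker = (np.cosh(d) @ w) / (2*(np.e - 1)) + (np.exp(-np.abs(d)) @ w) / 4

print(np.max(np.abs(P_fft - P_ker)))          # should be small (discretization error)
\end{verbatim}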
\begin{definition}\label{weaksol}
Let $u\colon\mathbb{R}_+\times\mathbb{R} \rightarrow
\mathbb{R}$ and
$\rho\colon\mathbb{R}_+\times\mathbb{R}\rightarrow\mathbb{R}$.
Assume that $u$ and $\rho$
satisfy \\
(i) $u\in L^\infty([0,\infty), H^1_{\rm per})$, $\rho\in L^\infty([0,\infty), L^2_{\rm per})$, \\
(ii) the equations
\begin{multline}\label{weak1}
\iint_{\mathbb{R}_+\times [0,1]}\Big(-u(t,x)\phi_t(t,x)+(u(t,x)u_x(t,x)+P_x(t,x))\phi(t,x)\Big)dxdt \\
= \int_{[0,1]} u(0,x)\phi(0,x)dx,
\end{multline}
\begin{equation}\label{weak2}
\iint_{\mathbb{R}_+\times [0,1]}\Big((P(t,x)-u^2(t,x)-\frac{1}{2} u_x^2(t,x)-\frac{1}{2}\rho^2(t,x))\phi(t,x)+P_x(t,x)\phi_x(t,x)\Big)dxdt=0,
\end{equation}
and \begin{equation}\label{weak3}
\iint_{\mathbb{R}_+\times [0,1]}\Big(-\rho(t,x)\phi_t(t,x)-u(t,x)\rho(t,x)\phi_x(t,x)\Big) dxdt=\int_{[0,1]} \rho(0,x)\phi(0,x)dx,
\end{equation}
hold for all spatially periodic functions $\phi\in
C_0^\infty ([0,\infty)\times\mathbb{R})$. Then we say
that $(u,\rho)$ is a global weak solution of the
two-component Camassa--Holm system.
If
$(u,\rho)$ in addition satisfies
\begin{equation*}
(u^2+u_x^2+\rho^2)_t+(u(u^2+u_x^2+\rho^2))_x-(u^3-2Pu)_x=0
\end{equation*}
in the sense that
\begin{align}
\iint_{\mathbb{R}_+\times [0,1]}&\Big[
(u^2(t,x)+u_x^2(t,x)+\rho^2(t,x))\phi_t(t,x)\notag\\
&+(u(t,x)(u^2(t,x)+u_x^2(t,x)+\rho^2(t,x)))\phi_x(t,x)\label{eq:weak4}\\
&\qquad\qquad-(u^3(t,x)-2P(t,x)u(t,x))\phi_x(t,x)\Big]dxdt \notag
=0,
\end{align}
for any spatially periodic function $\phi\in
C_0^{\infty}((0,\infty)\times\mathbb{R})$, we say
that $(u,\rho)$ is a weak global conservative
solution of the two-component Camassa--Holm
system. \end{definition}
Introduce the mapping $T_t$ from $\ensuremath{\mathcal{D}}$ to $\ensuremath{\mathcal{D}}$ by \begin{equation}
T_t=M\bar S_t L. \end{equation} Then one can check that for any $(u_0,\rho_0,\mu_0)\in \ensuremath{\mathcal{D}}$ such that $\mu_0$ is purely absolutely continuous, the pair $(u(t,x),\rho(t,x))$ given by $(u,\rho,\mu)(t)=T_t(u_0,\rho_0,\mu_0)$ satisfies \eqref{weak1}--\eqref{weak3}. \begin{theorem}\label{th:main2}
Given any initial condition $(u_0,\rho_0)\in H^1_{\rm per}\times L^2_{\rm per}$, we define $\mu_0=(u_0^2+u_{0,x}^2+\rho_0^2)dx$, and we denote $(u,\rho,\mu)(t)=T_t(u_0,\rho_0, (u_0^2+u_{0,x}^2+\rho_0^2)dx)$. Then $(u,\rho)$ is a periodic and global weak solution of the 2CH system and $\mu$ satisfies weakly $$\mu_t+(u\mu)_x=(u^3-2Pu)_x.$$ Moreover, $\mu(t)$ consists of an absolutely continuous and a singular part, that is,
\begin{equation*}
\int_{[0,1]} d\mu(t,x)=\int_{[0,1]} (u^2+u_{x}^2+\rho^2)(t,x) \, dx+\int_{[0,1]} d\mu_{sing}(t,x).
\end{equation*}
In particular, $\mathrm{supp}(\mu_{sing}(t,\,\cdot\,))$ coincides with the set of points where wave breaking occurs at time $t$. \end{theorem}
Note that since we are looking for global weak solutions with initial data in $H^1_{\rm per}\times L^2_{\rm per}$, it is no restriction to assume that $\mu$ is purely absolutely continuous initially; in general, however, $\mu$ will not remain purely absolutely continuous at later times. In particular, if $\mu$ is not absolutely continuous at a particular time, we know how much energy has concentrated and where, and this energy must be given back to the solution in order to obtain conservative solutions. Therefore the measure plays an important role. \\ It is also possible to define global weak solutions for initial data where the measure $\mu_0$ is not purely absolutely continuous by defining $P(0,x)$ using \eqref{Peuler}, which is then mapped to \eqref{eq:Psimp1} by applying \eqref{def:EtoF} directly. Moreover, $P(0,\xi)$ can be mapped back to $P(0,x)$ via $M$. In addition, this point of view allows us to jump between Eulerian and Lagrangian coordinates at any time. \\ Moreover, one can show that the sets $\ensuremath{\mathcal{F}}^M$ and $\mathcal{H}^M$ in Lagrangian coordinates correspond to the set $\ensuremath{\mathcal{D}}^M$ in Eulerian coordinates. Given $M>0$, we define \begin{equation}
\ensuremath{\mathcal{D}}^M=\{(u,\rho,\mu)\in\ensuremath{\mathcal{D}}\mid \mu([0,1))\leq M\}. \end{equation} Thus it is natural to define a Lipschitz metric on the sets of bounded energy in Eulerian coordinates as follows, \begin{equation}
d_{\ensuremath{\mathcal{D}}^M}((u,\rho,\mu), (\tilde u,\tilde \rho,\tilde\mu))=d_M(L(u,\rho,\mu), L(\tilde u,\tilde\rho,\tilde\mu)). \end{equation} In particular, we have the following result.
\begin{theorem}\label{th:main}
The semigroup $(T_t,d_\ensuremath{\mathcal{D}})$, which corresponds to solutions of the 2CH system, is a continuous
semigroup on $\ensuremath{\mathcal{D}}$ with respect to the metric
$d_\ensuremath{\mathcal{D}}$. The semigroup is Lipschitz continuous on
sets of bounded energy, that is, given $M>0$ and
a time interval $[0,T]$, there exists a constant
$C$ which only depends on $M$ and $T$ such that,
for any $(u,\rho,\mu)$ and $(\tilde u,\tilde \rho,\tilde\mu)$ in
$\ensuremath{\mathcal{D}}^M$, we have
\begin{equation*}
d_{\ensuremath{\mathcal{D}}^M}(T_t(u,\rho,\mu),T_t(\tilde{u},\tilde\rho,\tilde{\mu}))\leq Cd_{\ensuremath{\mathcal{D}}^M}((u,\rho,\mu),(\tilde{u},\tilde\rho,\tilde{\mu}))
\end{equation*}
for all $t\in[0,T]$. \end{theorem}
Last, but not least, we want to investigate the regularity of solutions and the connection of the topology in $\ensuremath{\mathcal{D}}$ with other topologies. Due to the global interaction term given for almost all times by \begin{equation}\label{eq:nonlocal}
\begin{aligned}
P(t,x)& =\frac1{2(e-1)} \int_0^1 \cosh(x-z)(2u^2+u_x^2+\rho^2)(t,z)dz\\
&\quad + \frac14 \int_0^1 e^{-\vert x-z\vert}(2u^2+u_x^2+\rho^2)(t,z)dz,
\end{aligned} \end{equation} the 2CH system has an infinite speed of propagation \cite{henry:09}. However, the system remains essentially hyperbolic in nature, and we prove that singularities travel with finite speed. In \cite[Theorem 6.1]{GHR2} we showed that the local regularity of a solution is determined by the regularity of the initial data on intervals where $\rho_0(x)^2$ is bounded from below by a strictly positive constant. Since this is a local result, it carries over to the periodic case, and we state it here for the sake of completeness.
\begin{theorem}\label{th:presreg}
We consider initial data $(u_0,\rho_0,\mu_0)\in\ensuremath{\mathcal{D}}$. Furthermore, we assume that there exists an interval $(x_0,x_1)$ such that $(u_0,\rho_0,\mu_0)$ is $p$-regular, with $p\geq 1$, in the sense that
\begin{equation*}
u_0\in W^{p,\infty}(x_0,x_1), \quad \rho_0\in W^{p-1,\infty}(x_0,x_1), \quad \text{and } \mu_0=\mu_{0,ac} \text{ on } (x_0,x_1),
\end{equation*}
and that
\begin{equation*}
\rho_0(x)^2\geq c>0
\end{equation*}
for $x\in(x_0,x_1)$. Then for any $t\in\mathbb{R}_+$, $(u,\rho,\mu)(t,\,\cdot\,)$ is $p$-regular on the interval $(y(t,\xi_0),y(t,\xi_1))$, where $\xi_0$ and $\xi_1$ satisfy $y(0,\xi_0)=x_0$ and $y(0,\xi_1)=x_1$ and are defined as
\begin{equation*}
\text{$\xi_0=\sup\{\xi\in\mathbb{R}\ |\ y(0,\xi)\leq x_0\}$ and
$\xi_1=\inf\{\xi\in\mathbb{R}\ |\ y(0,\xi)\geq x_1\}$.}
\end{equation*} \end{theorem}
In other words, we see that the regularity is preserved between characteristics. As an immediate consequence we obtain the following result.
\begin{theorem}\label{cor:presreg3}
If the initial data $(u_0,\rho_0,\mu_0)\in\ensuremath{\mathcal{D}}$
satisfies $u_0,\rho_0\in C^{\infty}(\mathbb{R})$,
$\mu_0$ is absolutely continuous and
$\rho_0^2(x)\geq d>0$ for all $x\in\mathbb{R}$, then
$u,\rho\in C^\infty(\mathbb{R}\times\mathbb{R})$
is the unique classical solution to
\eqref{eq:chkappa} with
$\kappa=0$ and $\eta=1$. \end{theorem}
In particular, this result implies that if $\rho_0^2(x)\geq c$ for some positive constant $c>0$, no wave breaking occurs. Hence, if we can compare the topology on $\ensuremath{\mathcal{D}}$ with standard topologies, we have a chance to approximate conservative solutions of the CH equation, which may exhibit wave breaking, by global smooth solutions of the 2CH system. Indeed, the mapping \begin{equation}
(u,\rho)\mapsto (u,\rho, (u^2+u_x^2+\rho^2)dx), \end{equation} is continuous from $H^1_{\rm per}\times L^2_{\rm per}$ to $\ensuremath{\mathcal{D}}$. This means that if a sequence $(u_n,\rho_n)\in H^1_{\rm per}\times L^2_{\rm per}$ converges to $(u,\rho)\in H^1_{\rm per}\times L^2_{\rm per}$, then $(u_n, \rho_n, (u_n^2+u_{n,x}^2+\rho_n^2)dx)$ converges to $(u,\rho, (u^2+u_x^2+\rho^2)dx)$ in $\ensuremath{\mathcal{D}}$. \\ Conversely, if $(u_n,\rho_n,\mu_n)$ is a sequence in $\ensuremath{\mathcal{D}}$ which converges to $(u,\rho,\mu)\in\ensuremath{\mathcal{D}}$, then \begin{equation}
u_n\rightarrow u \text{ in } L^\infty_{\rm per},\quad \rho_n\overset{\ast}{\rightharpoonup}\rho, \text{ and } \mu_n \overset{\ast}{\rightharpoonup}\mu. \end{equation}
Putting everything together, we have the following result. \begin{theorem}
\label{th:approxCH}
Let $u_0\in H^1_{\rm per}$. We consider the
approximating sequence of initial data
$(u_0^n,\rho_0^n,\mu_{0}^n)\in\ensuremath{\mathcal{D}}$ given by
$u_0^n\in C^\infty(\mathbb{R})$ with
$\lim_{n\to\infty}u_0^n=u_0$ in
$H^1_{\rm per}$, $\rho_0^n\in C^\infty(\mathbb{R})$
with $\lim_{n\to\infty}\rho_0^n=0$ in
$L^2_{\rm per}$, $(\rho_0^n)^2\geq d_n$ for
some constant $d_n>0$ and for all $n$ and
$\mu_{0}^n=((u_{0,x}^{n})^2+(\rho_0^n)^2)\,dx$. We
denote by $(u^n,\rho^n)$ the unique classical
solution to \eqref{eq:chkappa}, with $\kappa=0$ and $\eta=1$, in
$C^\infty(\mathbb{R}_+\times\mathbb{R})\times
C^\infty(\mathbb{R}_+\times\mathbb{R})$ with $(u^n,\rho^n)|_{t=0}=(u_0^n,\rho_0^n)$. Then for every
$t\in\mathbb{R}_+$, the sequence $u^n(t,\,\cdot\,)$
converges to $u(t,\,\cdot\,)$ in
$L^{\infty}(\mathbb{R})$, where $u$ is the
conservative solution of the Camassa--Holm
equation
\begin{equation}
u_t-u_{txx}+3uu_x-2u_xu_{xx}-uu_{xxx}=0,
\end{equation}
with initial data $u_0\in
H^1_{\rm per}$. \end{theorem}
\end{document}
\begin{document}
\begin{abstract} We address the enumeration of $p$-valent planar maps equipped with a
spanning forest, with a weight $z$ per face and a weight $u$ per connected
component of the forest. Equivalently, we count $p$-valent maps
equipped with a spanning \emm tree,, with a weight $z$ per face and a
weight $\mu:=u+1$ per \emm internally active, edge, in the sense of
Tutte; or the (dual) $p$-angulations equipped with a \emm recurrent
sandpile configuration,, with a weight $z$ per vertex and a
variable $\mu:=u+1$ that keeps track of the \emm level, of the configuration.
This enumeration problem also corresponds to
the limit $q\rightarrow 0$ of the $q$-state Potts model on
$p$-angulations.
Our approach is purely combinatorial. The associated generating function, denoted $F(z,u)$, is expressed in terms of a pair of series defined implicitly by a system involving doubly hypergeometric series. We derive from this system that $F(z,u)$ is \emm differentially algebraic, in $z$, that is, satisfies a differential equation in $z$ with polynomial coefficients in $z$ and $u$. This has recently been proved to hold for the more general Potts model on 3-valent maps, but via a much more involved and less combinatorial proof.
For $u\ge -1$, we study the singularities of $F(z,u)$ and the corresponding asymptotic behaviour of its $n$th coefficient. For $u>0$, we find the standard asymptotic behaviour of planar maps, with a subexponential term in $n^{-5/2}$. At $u=0$ we witness a phase transition with a term $n^{-3}$. When $u\in[-1,0)$, we obtain an
extremely unusual behaviour in $n^{-3}(\ln n)^{-2}$. To our
knowledge, this is a new ``universality class'' for planar maps.
\end{abstract} \title{Spanning forests in regular planar maps}
\section{Introduction}
A \emm planar map, is a proper embedding of a connected graph in the sphere. The enumeration of planar maps has received continuous attention in the past 60 years, first in combinatorics with the pioneering work of Tutte~\cite{tutte-census-maps}, then in theoretical physics~\cite{BIPZ}, where maps are considered as random surfaces modelling the effect of \emm quantum gravity,, and more recently in probability theory~\cite{le-gall-miermont,marckert-miermont}.
General planar maps have been studied, as well as sub-families obtained by imposing constraints of higher connectivity, or prescribing the degrees of vertices or faces ({\it e.g.}, triangulations). Precise definitions are given below.
Several robust enumeration methods have been designed, from Tutte's recursive approach (e.g.~\cite{tutte-triangulations}), which leads to functional equations for the generating functions\ of maps, to the beautiful bijections initiated by Schaeffer~\cite{Sch97}, and further developed by physicists and combinatorialists alike
~\cite{bernardi-fusy,BDG-blocked}, via the powerful approach based on matrix integrals~\cite{DFGZJ}. See for instance~\cite{mbm-survey} for a more complete (though non-exhaustive) bibliography.
Beyond the enumerative and asymptotic properties of planar maps, which are now well understood, the attention has also focussed on two more general questions: maps on higher genus surfaces~\cite{bender-surface-I,chapuy-marcus-schaeffer},
and maps equipped with an additional structure. The latter question is particularly relevant in physics, where a surface on which nothing happens (``pure gravity'') is of little interest. For instance, one has studied maps equipped with a polymer~\cite{DK88}, with an Ising model~\cite{BDG-blocked,Ka86,BK87,mbm-schaeffer-ising} or more generally a Potts model, with a proper colouring~\cite{lambda12,tutte-differential}, with loop models~\cite{borot1,borot2}, with a spanning tree~\cite{mullin-boisees}, or with percolation on planar maps~\cite{angel-perco,bernardi-perco}.
In particular, several papers have been devoted in the past 20 years to the study of the Potts model on families of planar maps~\cite{baxter-dichromatic,borot3,daul,eynard-bonnet-potts,guionnet-jones,zinn-justin-dilute-potts}. In combinatorial terms, this means counting maps equipped with a colouring in $q$ colours, according to the size (\emm e.g.,, the number of edges) and the number of \emm monochromatic edges, (edges whose endpoints have the same colour). Up to a change of variables, this also means counting maps weighted by their \emm Tutte polynomial,, a bivariate combinatorial invariant which has numerous interesting specializations. By generalizing Tutte's formidable solution of properly coloured triangulations (1973-1982), it has recently been proved that the Potts generating function\ is \emm differentially algebraic,, that is, satisfies a (non-linear) differential equation\footnote{with respect
to the size variable} with polynomial coefficients~\cite{bernardi-mbm,bernardi-mbm-de,mbm-survey}. This holds at least for general planar maps and for triangulations (or dually, for cubic maps).
The method that yields these differential equations is extremely involved, and does not shed much light on the structure of $q$-coloured maps. Moreover, one has not been able, so far, to derive from these equations the asymptotic behaviour of the number of coloured maps, nor the location of phase transitions.
The aim of this paper is to remedy these problems --- so far for a one-variable specialization of the Tutte polynomial. This specialization is obtained by setting to $1$ one of the variables, or by taking (in an adequate way) the limit $q\rightarrow 0$ in the Potts model. Combinatorially, we are simply counting maps (in this paper, $p$-valent maps) equipped with a spanning forest. We call them \emm forested maps,. This problem has already been studied in~\cite{sportiello} via a random matrix approach, but with no explicit solution. The generating function\ $F(z,u)$ that we obtain keeps track of the \emm size, of the map (the number of faces; variable $z$) and of the number of trees in the forest (minus one; variable $u$). The specialization $u=0$ thus counts maps equipped with a spanning \emm tree, and was determined a long time ago by Mullin~\cite{mullin-boisees}.
Here is an outline of the paper. We begin in Section~\ref{sec:prelim} with general definitions on maps, and on the Tutte polynomial. We recall some of its combinatorial descriptions, and underline in particular that the series $F(z, \mu-1)$, once expanded in powers of $z$ and $\mu$, has non-negative coefficients and admits several combinatorial interpretations. This important observation implies that the natural domain of the parameter $u$ is $[-1, +\infty)$ rather than $[0, +\infty)$.
In Section~\ref{sec:eq}, we obtain in a purely combinatorial manner an expression of $F(z,u)$ in terms of the solution of a system of two functional equations.
In Section~\ref{sec:de} we derive from this system that $F(z,u)$ is differentially algebraic in $z$, and give explicit differential equations for cubic ($p=3$) and 4-valent ($p=4$) maps. Section~\ref{sec:u+1} is a combinatorial interlude explaining why all series occurring in our
equations, like $F(z,u)$ itself, still have non-negative coefficients when $u \in [-1,0]$.
The rest of the paper is devoted to asymptotic results, still for $p=3$ and $p=4$:
when $u>0$, forested maps follow the standard asymptotic behaviour of planar maps ($ \mu^n n^{-5/2}$) but then there is a phase transition at $u=0$ (where one counts maps equipped with a spanning tree), and a very unusual asymptotic behaviour in $ \mu^n n^{-3}(\ln n)^{-2}$ holds when $u\in[-1,0)$. To our knowledge, this is
the first time a class of planar maps exhibits this asymptotic
behaviour. This proves in particular that $F(z,u)$ is not \emm D-finite,, that is, does not satisfy any \emm linear, differential equation in $z$ for these values of $u$ (nor for a generic value of $u$). This is in contrast with the case $u=0$, for which the generating function\ of maps equipped with a spanning tree is known to be D-finite.
Our key tool is the \emm singularity analysis, of~\cite{flajolet-sedgewick}: its basic principle is to derive the asymptotic behaviour of the coefficients of a series $F(z)$ from the singular behaviour of $F$
near its \emm dominant singularities, (\emm i.e.,, singularities of minimal modulus). The first case we study (4-valent maps with $u>0$) is simple: first, one of the two series involved in our system vanishes; the remaining one, denoted $R$, satisfies an inversion equation $\Omega(R(z))=z$ for which the (unique) dominant singularity $\rho$ of $R$ is such that $R(\rho)$ lies in the domain of analyticity of $\Omega$.
One obtains for $R$ a ``standard'' square root singularity. This is well understood and almost routine. Two ingredients make the other cases significantly harder: \begin{itemize} \item when $u<0$, $R(\rho)$ is a singularity of $\Omega$, \item when $p=3$ (cubic maps) we have to deal with a system of two
equations; the analysis of systems is
delicate, even in the so-called \emm positive case,, which corresponds in
our context to $u>0$ (see~\cite{drmota-systems,burris}). \end{itemize} These difficulties, which culminate when $p=3$ and $u<0$, are addressed in Sections~\ref{sec:general} and~\ref{sec:inversion}. Section~\ref{sec:general} establishes general results on
implicitly defined series. Section~\ref{sec:inversion} focusses on
the inversion equation $\Omega(R(z))=z$ in the case where (up to
translation) $\Omega$ has a $z \ln z$ singularity at $0$. One
then applies these results to the asymptotic analysis of forested maps in Sections~\ref{sec:asympt-4} (4-valent maps) and~\ref{sec:asympt-3} (cubic maps). Section~\ref{sec:random} exploits the results of Section~\ref{sec:asympt-4} to study some properties of large \emm random, maps equipped with a spanning forest or a spanning tree.
We conclude in Section~\ref{sec:final} with a few comments.
\section{Preliminaries} \label{sec:prelim}
\subsection{Planar maps}
A \textit{planar map} is a proper embedding of a connected graph (possibly with loops and multiple edges) in the oriented sphere, considered up to continuous deformation. All maps in this paper are planar, and we often omit the term ``planar''. A \textit{face} is a (topological) connected component of the complement of the embedded graph. Each edge consists of two \textit{half-edges}, each incident to an endpoint of the edge. A \emph{corner} is an ordered pair $(e_1,e_2)$ of
half-edges incident to the same vertex, such that $e_2$ immediately
follows $e_1$ in counterclockwise order. The \emph{degree} of a vertex or a face is the number of corners incident to it. A vertex of degree $p$ is called \emm $p$-valent,. One-valent vertices are also called \emm leaves,. A map is \emph{$p$-valent} if all vertices are $p$-valent. A \emph{rooted map} is a map with a marked corner $(e_1,e_2)$, called the \emm root, and indicated by an arrow in our figures. The \emph{root vertex} is the vertex incident to the root. The \emph{root half-edge} is $e_2$ and the \emm root edge, is the edge supporting $e_2$. This way of rooting maps is equivalent to the more standard
way where one marks the root edge and orients it from $e_2$ to its other half-edge. All maps of the paper are rooted, and we often omit the term ``rooted''. The \emph{dual} of a map $M$, denoted $M^*$, is the map obtained by placing a vertex of $M^*$ in each face of $M$ and an edge of $M^*$ across each edge of $M$; see Figure~\ref{fig:dual}(a). The dual of a $p$-valent map is a map with all faces of degree $p$, also called \emm $p$-angulation,.
\begin{figure}
\caption{ (a) A rooted planar map and its dual (rooted at
the dual corner). (b) A 4-valent leaf-rooted tree.}
\label{fig:dual}
\end{figure}
A \emph{(plane) tree} is a planar map with a unique face. A tree is $p$-valent if all non-leaf vertices have degree $p$. We consider the edges leading to the leaves as half-edges, as suggested by Figure~\ref{fig:dual}(b). A \emph{leaf-rooted} tree (resp. \emph{corner-rooted}) is a tree with a marked leaf (resp. corner). The number of $p$-valent leaf-rooted (resp. corner-rooted) trees with $k$ leaves is denoted by $t_k$ (resp. $t^c_k$) (the notation should be $t_{k,p}$ and $t_{k,p}^c$, but we consider $p$ as a fixed integer, $p\ge 3$). These numbers are well-known~\cite[Thm.~5.3.10]{stanley-vol-2}: they are $0$ unless $k = (p-2)\ell+2$ with $\ell\ge 1$, and in this case, \begin{equation} \label{deftk}
t_k = \frac {((p-1)\ell)!}
{\ell!((p-2)\ell+1)!} \quad\quad \hbox{and}\quad\quad t^c_k = p \frac {((p-1)\ell)!}{(\ell-1)!((p-2)\ell+2)!}. \end{equation}
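For instance, for $p=3$ formula \eqref{deftk} gives $t_3=1$, $t_4=2$, $t_5=5$, $t_6=14$ (the Catalan numbers) and $t^c_3=1$, $t^c_4=3$, $t^c_5=9$, $t^c_6=28$. A short Python check of \eqref{deftk} (the helper names are our own):
\begin{verbatim}
from math import factorial

def t(k, p):        # p-valent leaf-rooted trees with k leaves
    if k < p or (k - 2) % (p - 2):
        return 0
    l = (k - 2) // (p - 2)
    return factorial((p-1)*l) // (factorial(l) * factorial((p-2)*l + 1))

def tc(k, p):       # p-valent corner-rooted trees with k leaves
    if k < p or (k - 2) % (p - 2):
        return 0
    l = (k - 2) // (p - 2)
    return p * factorial((p-1)*l) // (factorial(l-1) * factorial((p-2)*l + 2))

print([t(k, 3) for k in range(3, 8)])    # [1, 2, 5, 14, 42]
print([tc(k, 3) for k in range(3, 8)])   # [1, 3, 9, 28, 90]
\end{verbatim}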
Let $M$ be a rooted planar map with vertex set $V$. A \emph{spanning forest} of $M$ is a graph $F=(V,E)$ where $E$ is a subset of edges of $M$ forming no cycle. Each connected component of $F$ is a tree, and the \textit{root component} is the tree containing the root vertex. We say that the pair $(M,F)$ is a \emph{forested map}. We denote by $F(z,u)$ the generating function of {$p$-valent forested maps}, counted by faces (variable $z$) and non-root components (variable $u$): \begin{equation}\label{F-def} F(z,u)=\sum_{M \ p-{\rm{\small valent}} \atop F\ {\rm{\small{spanning
\ forest}}}} z^{\ff(M)} u^{\cc(F)-1}, \end{equation} where $\ff(.)$ denotes the number of faces and $\cc(.)$ the number of components.
When $p=3$,
\begin{equation}\label{F-3} F(z,u) = \left( 6 + 4\,u \right) {z}^{3}+ \left( 140 + 234\,u +
144\,{u}^{2}+32\,{u}^{3} \right) {z}^{4} + O(z^5). \end{equation} The coefficient $(6+4u)$ means that there are 10 trivalent (or \emm cubic,) forested maps with 3 faces: 6 in which the forest is a tree, and 4 in which it has two components (Figure~\ref{fig:tenmaps}).
\begin{figure}
\caption{The 10 forested cubic maps with 3 faces.}
\label{fig:tenmaps}
\end{figure}
\subsection{Forest counting, the Tutte polynomial, and related models} \label{sec:tutte}
Let $G=(V,E)$ be a graph with vertex set $V$ and edge set $E$. The \emm Tutte polynomial, of $G$ is the following polynomial in two indeterminates (see \emm e.g., \cite{Bollobas:Tutte-poly}): \begin{equation}\label{Tutte-def} \Tpol_G(\mu,\nu):=\sum_{S\subseteq
E}(\mu-1)^{\cc(S)-\cc(G)}(\nu-1)^{\ee(S)+\cc(S)-\vv(G)}, \end{equation} where the sum is over all spanning subgraphs of $G$ (equivalently, over all subsets $S$ of edges) and $\vv(.)$, $\ee(.)$ and $\cc(.)$ denote respectively the number of vertices, edges and connected components.
The quantity $\ee(S)+\cc(S)-\vv(G)$ is the \emm cyclomatic number of $S$,, that is, the minimal number of edges one has to delete from $S$ to obtain a forest.
When $\nu=1$, the only subgraphs that contribute to~\eqref{Tutte-def} are the forests.
Hence the generating function\ of forested maps defined by~\eqref{F-def} can be written as \begin{equation}\label{F-Tutte} F(z,u)=\sum_{M \ p-\hbox{\small valent}} z^{\ff(M)} \Tpol_M(u+1,1). \end{equation} Note that we write $\Tpol_M$ although the value of the Tutte polynomial only depends on the underlying graph of $M$, not on the embedding.
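To make the subset expansion \eqref{Tutte-def} concrete, here is a brute-force evaluation for small graphs (a hypothetical helper of ours, not part of the machinery of this paper). For the triangle it returns $\Tpol_{C_3}(\mu,\nu)=\mu^2+\mu+\nu$; specializing $\nu=1$ as in \eqref{F-Tutte} gives $\mu^2+\mu+1$, that is, $u^2+3u+3$ with $\mu=u+1$, which indeed enumerates the seven spanning forests of the triangle with weight $u^{\cc(F)-1}$.
\begin{verbatim}
from itertools import combinations
import sympy as sp

mu, nu = sp.symbols('mu nu')

def tutte(vertices, edges):
    # Brute-force subset expansion of the Tutte polynomial.
    def n_components(edge_subset):
        parent = {v: v for v in vertices}
        def find(v):
            while parent[v] != v:
                v = parent[v]
            return v
        for a, b in edge_subset:
            parent[find(a)] = find(b)
        return len({find(v) for v in vertices})
    cG = n_components(edges)
    T = 0
    for k in range(len(edges) + 1):
        for S in combinations(edges, k):
            cS = n_components(S)
            T += (mu - 1)**(cS - cG) * (nu - 1)**(len(S) + cS - len(vertices))
    return sp.expand(T)

print(tutte([1, 2, 3], [(1, 2), (2, 3), (1, 3)]))   # mu**2 + mu + nu
\end{verbatim}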
Even though this is not clear from~\eqref{Tutte-def}, the polynomial $\Tpol_G(\mu,\nu)$ has non-negative coefficients in $\mu$ and $\nu$. This was proved combinatorially by Tutte~\cite{tutte-dichromate}, who showed that $\Tpol_G(\mu,\nu)$ counts spanning \emm trees, of $G$ according to two parameters, called \emm internal, and \emm external, activities. Other combinatorial descriptions of $\Tpol_G(\mu,\nu)$, in terms of other notions of activity, were given later. Let us present the one due to Bernardi, which is nicely related to maps~\cite{bernardi-tutte}. Following Mullin~\cite{mullin-boisees}, we call \emm tree-rooted map, a map $M$ equipped with a spanning tree $T$. Walking around $T$ in counter-clockwise order, starting from the root, defines a total order on the edges: the first edge that is met is the smallest one, and so on (Figure~\ref{fig:treetour}). An edge $e$ is \emm internally active, if it belongs to $T$ and is minimal in its cocycle; that is, all the edges $e'\not = e$ such that $\left(T\setminus\{e\}\right) \cup \{e'\}$ is a tree are larger than $e$. It is \emm externally active, if it does not belong to $T$ and is minimal in the cycle created by adding $e$ to $T$. Denoting by $\inte(M,T)$ and $\ext(M,T)$ the numbers of internally and externally active edges, one has: $$ \Tpol_M(\mu,\nu)=\sum_{T\ \small{\rm{spanning\ tree}}} \mu^{\inte(M,T)} \nu^{\ext(M,T)}. $$ A non-obvious property of this description is that it only depends on the underlying graph of~$M$.
\begin{figure}
\caption{The edges of a tree-rooted map are naturally ordered by walking
around the tree. The active edges are those labelled 1, 3, 6 and 9.}
\label{fig:treetour}
\end{figure}
Returning to~\eqref{F-Tutte}, we thus obtain a second description of $F(z,u)$: \begin{equation}\label{F-act} F(z,u)=\sum_{M \ p-{\rm{\small valent}} \atop T\ \small{\rm{spanning\ tree}}} z^{\ff(M)}
(u+1)^{\inte(M,T)}. \end{equation} In particular, it makes sense combinatorially to write $u=\mu-1$ and take $u\in[-1,\infty)$.
We now give four more descriptions of $F(z,u)$ in terms of the dual
$p$-angulations.
For any planar map $M$, it is known that $$ \Tpol_{M^*}(\mu,\nu)=\Tpol_M(\nu,\mu). $$ Since $$ \Tpol_M(1,\nu)= \sum_{S \subset E, \, S \, \rm{connected}} (\nu-1)^{\ee(S)+\cc(S)-\vv(M)}, $$
we first derive from~\eqref{F-Tutte} that
\begin{eqnarray}
F(z,u)&= &\sum_{M \ p-\rm{\small{angulation}}} z^{\vv(M)}
\Tpol_M(1,u+1) \label{F-dual-Tutte} \\ &=& \sum_{M \ p-\rm{\small{angulation}} \atop
S\ \small{\rm{connected \ subgraph}}} z^{\vv(M)} u^{\ee(S)+\cc(S)-\vv(M)} \nonumber
\end{eqnarray} counts $p$-angulations $M$ equipped with a connected (spanning) subgraph $S$, by the vertex number of $M$ and the cyclomatic number of $S$. Also, the ``dual'' expression of~\eqref{F-act} reads \begin{equation}\label{F-dual} F(z,u)=\sum_{M \ p-{\rm{\small angulation}} \atop T\ \small{\rm{spanning\ tree}}} z^{\vv(M)} (u+1)^{\ext(M,T)}. \end{equation}
Our next interpretation of $F(z,u)$, which we will not entirely detail, relies on the connection between $\Tpol_M(1,\nu)$ and the \emm recurrent, (or: \emm critical,) configurations of the sandpile model on $M$. It is known~\cite{merino,cori-borgne} that $$ \Tpol_M(1,\nu) = \sum_{C \ \rm{\small recurrent}} \nu^{\ell(C)}, $$ where the sum runs over all recurrent configurations $C$, and $\ell (C)$ is the \emm level, of $C$. Hence \begin{equation}\label{F-sandpile}
F(z,u)= \sum_{M \ p-{\rm{\small{angulation}}} \atop C \ \rm{recurrent} } z^{\vv(M)}
(u+1)^{\ell(C)} \end{equation} also counts $p$-angulations $M$ equipped with a recurrent configuration $C$ of the sandpile model, by the vertex number of $M$ and the level of $C$.
Our final interpretation is in terms of the \emm Potts, model. Take $q\in \mathbb N$. A \emm $q$-colouring, of the vertices of $G=(V,E)$ is a map $c : V \rightarrow \{1, \ldots, q\}$. An edge of $G$ is \emm monochromatic, if its endpoints share the same colour. Every loop is thus monochromatic. The number of monochromatic edges is denoted by $m(c)$. The \emm partition function of the Potts model, on $G$ counts colourings by the number of monochromatic edges:
$$ \Ppol_G(q, \nu)= \sum_{c : V\rightarrow \{1, \ldots, q\}} \nu^{m(c)}. $$ The Potts model is a classical magnetism model in statistical physics, which includes (for $q=2$) the famous Ising model (with no magnetic field)~\cite{welsh-merino}. Of course, $\Ppol_G(q,0)$ is the chromatic polynomial of $G$. More generally, it is not hard to see that
$\Ppol_{G}(q,\nu)$ is always a \emm polynomial, in $q$ and $\nu$, and a multiple of $q$. Let us define the \emm reduced Potts polynomial, $\tilde \Ppol_G(q,\nu)$ by $$
\Ppol_G(q,\nu)= q\, \tilde \Ppol_G(q,\nu). $$ Fortuin and Kasteleyn established the
equivalence of $\tilde \Ppol_G$ with the Tutte polynomial~\cite{Fortuin:Tutte=Potts}: \begin{eqnarray*}
\tilde \Ppol_G(q,\nu)~=~\sum_{S\subseteq E(G)}q^{\cc(S)-1}(\nu-1)^{\ee(S)}~=~(\mu-1)^{\cc(G)-1}(\nu-1)^{\vv(G)-1}\,\Tpol_G(\mu,\nu), \end{eqnarray*} for $q=(\mu-1)(\nu-1)$. Setting $\mu=1$, we obtain, for a \emm connected, graph $G$ $$ \tilde \Ppol_G(0,\nu)~=(\nu-1)^{\vv(G)-1}\,\Tpol_G(1,\nu). $$ Returning to~\eqref{F-dual-Tutte} finally gives \begin{equation}\label{F-Potts} F(z,u)= u\sum_{M \ p-\rm{\small{angulation}} }(z/u)^{\vv(M)}\, \tilde \Ppol_M(0,u+1). \end{equation}
\subsection{Formal power series}
Let $A=A(z) \in \mathbb K[[z]]$ be a power series in one variable with coefficients in a field $\mathbb{K}$. We say that $A$ is \emm D-finite, if it satisfies a (non-trivial) linear differential equation with coefficients in $\mathbb{K}[z]$ (the ring of polynomials in $z$). More generally, it is \emph{D-algebraic}
if there exist a positive integer $k$ and a non-trivial polynomial $P \in \mathbb K[z,x_0,\dots,x_k]$ such that $P\left(z,A,\frac {\partial A} {\partial
z},\dots,\frac {\partial^k A} {\partial z^k}\right) = 0$.
A $k$-variate power series $A=A(z_1,\ldots, z_k)$ with coefficients in $\mathbb{K}$ is \emph{D-finite} if its partial derivatives (of all orders) span a finite dimensional vector space over $\mathbb{K}(z_1,\ldots, z_k)$.
\section{Generating functions for forested maps} \label{sec:eq}
Fix $p\ge 3$. We give here a system of equations that defines the generating function\ $F(z,u)$ of $p$-valent forested maps, or, more precisely, the series $zF'_z(z,u)$ that counts forested maps with a marked face. We also give
simpler systems for two variants of $F(z,u)$, involving no derivative.
\subsection{$p$-Valent maps}
\begin{theo} \label{thm:equations} Let $\theta$, $\Phi_1$ and $\Phi_2$
be the following doubly hypergeometric series: $$ \theta(x,y) = \sum_{i \geq 0} \sum_{j \geq 0} t^c_{2i+j} {2i + j
\choose i,i,j} x^i y^j, $$ \begin{equation}\label{phi} \Phi_1(x,y) = \sum_{i\geq 1} \sum_{j \geq 0} t_{2i+j} { 2i + j - 1 \choose i - 1,i,j} x^i y^j, \ \ \ \ \Phi_2(x,y) = \sum_{i\geq 0} \sum_{j \geq 0} t_{2i+j+1} { 2i + j \choose i,i,j} x^i y^j , \end{equation} where $t_k$ and $t^c_k$ are given by~\eqref{deftk} and ${a+b+c} \choose {a,b,c}$ denotes the trinomial coefficient $(a+b+c)!/(a!b!c!)$.
There exists a unique pair $(R,S)$
of power series in $z$ with constant term $0$ and coefficients in ${\mathbb Q}[u]$ that
satisfy \begin{eqnarray}
R &=& z + u\,\Phi_1(R,S),\label{defR} \\ S &=& u\,\Phi_2(R,S).\label{defS} \end{eqnarray}
The generating function $F(z,u)$ of
$p$-valent forested maps is characterized by
$F(0,u)=0$ and \begin{equation}\label{frs} F'_z(z,u) = \theta(R,S). \end{equation} \end{theo} \noindent{\bf Remarks}\\ 1. These equations allow us to compute the first terms in the expansion of $F(z,u)$, for any fixed $p\ge 3$. This is how we obtained~\eqref{F-3}.\\ 2. When $p$ is even, then $t_{2i+1}=0$ for all $i$. In particular, all terms occurring in the definition~\eqref{phi} of $\Phi_2$ are multiples of $y$, so that $S=0$. The simplified system reads: \begin{equation}\label{system-simple} F'_z(z,u) = \theta(R) \quad \quad \hbox{and}\quad \quad R = z + u\,\Phi(R), \end{equation} with $$ \theta(x) = \sum_{i \geq 0} t^c_{2i} {2i
\choose i} x^i \quad \quad \hbox{and}\quad \quad \Phi(x) = \sum_{i\geq 1} t_{2i} { 2i - 1 \choose i} x^i . $$
\\ 3. When $u=0$, an even more drastic simplification follows from~(\ref{defR}-\ref{defS}): not only $S=0$, but also $R=z$, so that~\eqref{frs} becomes an explicit expression of $F'_z$: $$ F'_z(z,0) =\sum_{i \geq 0} t^c_{2i} {2i \choose i}z^i, $$ or equivalently, \begin{equation}\label{Fz0} F(z,0) =\sum_{i \geq 0} t^c_{2i} {2i \choose i}\frac{z^{i+1}}{i+1} =\sum_{\ell \ge 1}\frac{p((p-1)\ell)!}{(\ell-1)! (1+(p-2)\ell/2)!
(2+(p-2)\ell/2)!} z^{2+(p-2)\ell/2}, \end{equation} where we require $\ell $ to be even if $p$ is odd. This series counts $p$-valent maps equipped with a spanning tree, and this expression was already proved by Mullin~\cite{mullin-boisees}. \\ 4. The series $\theta$ and $\Phi_i$ are explicited when $p=4$ and $p=3$ in Sections~\ref{sec:de4} and~\ref{sec:cubic-de}, respectively.
In order to prove Theorem~\ref{thm:equations}, we first relate $F(z,u)$ to the generating function\ of planar maps counted by the distribution of their vertex degrees. More precisely, let
$\bar M\equiv \bar M(z,u; g_1, g_2, \ldots ; h_1,h_2,\ldots)$ be the generating function of rooted planar maps,
with a weight $z$ per face, $u g_k$ per \emm non-root, vertex of degree $k$ and $h_k$ if the root vertex has degree~$k$. \begin{lem} \label{lem:cont} The series $F(z,u)$ is related to $\bar M$ through: $$
F(z,u)= \bar M(z,u;t_1, t_2, \ldots; t^c_1, t^c_2,
\ldots).
$$ \end{lem}
\begin{figure}
\caption{(a) A 4-valent forested map with 9 faces and 2
non-root components. (b) The same map,
after contraction of the forest. (c) Assembling the 3 trees gives the
original forested map.}
\label{fig:forcou}
\end{figure}
\begin{proof} The idea is to contract each tree of a spanning forest, incident to $k$ half-edges, into a
$k$-valent vertex. It is adapted from~\cite[Appendix
A]{BDG-blocked}, where the authors study 4-valent forested maps for which
the root edge is not in the forest. It can also be
seen as an extension of Mullin's construction for maps equipped with
a spanning tree~\cite{mullin-boisees}. Finally, it
also appears in~\cite{sportiello}.
Let us now get into the details. First, let us recall that rooted maps have no symmetries: all vertices, edges and half-edges are distinguishable. In particular, one can fix, for every rooted planar map $M'$ (with arbitrary valences) a total order on its half-edges. This order may have a combinatorial significance --- a good choice is the order in which half-edges are visited when applying the construction of~\cite{bdg2002} --- but can also be arbitrary.
We now describe a bijection $\Phi$, illustrated in Figure~\ref{fig:forcou}, between forested $p$-valent maps $(M,F)$ and pairs formed of a map $M'$ and a collection $(T_v, v \in V(M'))$ of $p$-valent trees associated with the vertices of $M'$, such that the tree associated with the root vertex of $M'$ is corner-rooted, the others are leaf-rooted, and the number of leaves of $T_v$ is the degree of $v$ in $M'$.
The map $M'$ is obtained by contracting all edges of the forest $F$ (Figure~\ref{fig:forcou}(b)). The arrow that marks the root corner remains at the same place. Now split into two half-edges each edge of $M$ that is not in $F$: this gives a collection of $p$-valent trees, each of them being naturally associated with a vertex $v$ of $M'$. The half-edges of these trees form together the edges of $M'$ (Figure~\ref{fig:forcou}(c)). If $v$ is the root vertex of $M'$, then $T_v$ inherits the corner-rooting of $M$. Otherwise, we root $T_v$ at the smallest of its half-edges, for the total order on half-edges of $M'$.
The following properties are readily checked: \begin{itemize} \item $T_v$ has $k$ leaves if $v$ has degree $k$ in $M'$, \item $M$ and $M'$ have the same number of faces, \item the number of vertices of $M'$ is the number of components of
$F$. \end{itemize}
Let us now prove that $\Phi$ is bijective. To recover the forested map $(M,F)$ from the contracted map $M'$ and the associated collection of trees, we inflate each vertex $v$ of $M'$ into the corresponding tree $T_v$. If $v$ is the root vertex of $M'$, the root corner of $T_v$ must coincide with the root corner of $M'$. Otherwise, the root half-edge of $T_v$ is put on the smallest of the half-edges incident to $v$ in $M'$. This proves the injectivity of $\Phi$. Since this reverse construction can be applied to any map $M'$ with a corresponding collection of trees, $\Phi$ is also surjective. \end{proof}
\begin{proof}[Proof of Theorem~{\rm\ref{thm:equations}}] In a recent paper, Bouttier and Guitter~\cite{bg-continuedfractions} have expressed the series
$\bar M$ via a system of equations, established bijectively\footnote{Strictly
speaking, they do not take the vertex or face number into account,
but both are
prescribed by the distribution of vertex degrees.}. Their expression is actually fairly complicated~\cite[Eq.~(1.4)]{bg-continuedfractions}, but the series $z\bar M'_z$, which counts maps with a marked face, has a much simpler expression~\cite[Eq.~(2.6)]{bg-continuedfractions}: \begin{equation}\label{relM} \bar M'_z = \sum_{i \geq 0} \sum_{j \geq 0} h_{2i+j} {2i + j \choose i,i,j} R^i S^j, \end{equation} where $h_0=0$ and, by~\cite[Eq.~(2.5)]{bg-continuedfractions}, \begin{equation} R = z + u\sum_{i\geq 1} \sum_{j \geq 0} g_{2i+j} { 2i + j - 1 \choose i
- 1,i,j} R^i S^j, \ \ \ \ S =u \sum_{i\geq 0} \sum_{j \geq 0} g_{2i+j+1} { 2i + j \choose i,i,j} R^i S^j. \label{relS} \end{equation} Theorem~\ref{thm:equations} follows by specialization, using Lemma~\ref{lem:cont}.
It remains to check that~(\ref{defR}--\ref{defS}) defines a unique pair of series $R$ and $S$ in $z$ with constant terms $0$. This is readily proved by observing that~\eqref{defR} determines $R$ up to order $n$ if we know $R$ and $S$ up to order $n-1$; and that~\eqref{defS} determines $S$ up to order $n$ if we know $R$ up to order $n$ and $S$ up to order $n-1$. \end{proof}
\noindent{\bf Remark.} The expression of $\bar M$ given in~\cite[Eq.~(1.4)]{bg-continuedfractions} leads to an explicit expression of $F(z,u)$ in terms of $R$ and $S$. However, this expression involves a triple sum (a double sum when $p$ is even, see for instance~\eqref{Fzu-expl}).
This is why we prefer handling the expression of $F'$. We discuss this further in the final section.
\subsection{Quasi-$p$-valent maps ($p$ odd)}
A map is said to be \emph{quasi-$p$-valent} if all its vertices have degree $p$, apart from one vertex which is a leaf. Such maps exist only when $p$ is odd. They are naturally rooted at their leaf: the root corner is the unique corner incident to the leaf and the root edge is the unique edge incident to the leaf. Let $G(z,u)$ denote the generating function of quasi-$p$-valent forested maps counted by faces ($z$) and non-root components ($u$) (see Figure~\ref{fig:quasic}).
\begin{figure}
\caption{A quasi-cubic
forested map with 6
faces and 4 non-root components.}
\label{fig:quasic}
\end{figure}
\begin{prop}\label{prop:G} The generating function of quasi-$p$-valent forested maps is \begin{equation}\label{G-expr}
G(z,u) = ( 1 + \bar u) \left(zS - u \sum_{i \geq 2}\sum_{j \geq 0} t_{2i+j-1} { 2 i + j - 2 \choose i-2,i,j } R^i S^j \right), \end{equation} where $\bar u=1/u$, the series $R$ and $S$ are defined by~\rm{(\ref{defR}-\ref{defS})} and the numbers $t_k$ by \eqref{deftk}. Also, $$ G'_z(z,u)=(1+\bar u)S. $$ \end{prop}
As in the previous subsection, the key to this result is to relate $G(z,u)$ to a well-understood generating function\ of maps --- here, the generating function\ $\Gamma_1\equiv \Gamma_1(z,u;g_1, g_2, \ldots )$ that counts planar maps rooted at a leaf, with a weight $z$ per face and $ug_k$ per $k$-valent \textit{non-root} vertex.
\begin{lem} The following analogue of Lemma~{\rm\ref{lem:cont}} holds for
quasi-$p$-valent forested maps: $$
G(z,u) = ( 1 + \bar u ) \, \Gamma_1 (z,u;t_1, t_2, \ldots) $$ with $\bar u=1/u$. \end{lem} \begin{proof}
The bijection used in the proof of Lemma \ref{lem:cont} shows that the series $\Gamma_1(z,u;t_1, t_2, \ldots)$ counts quasi-$p$-valent forested maps such that the root edge \emm is not, in the forest. (With the notation used in that proof, the root vertex of $M'$, of degree 1, remains a trivial tree during the inflation step). To each such forested map, we can add the root edge to the forest. The resulting forested map has one less component, hence the factor $\bar u=1/u$. \end{proof}
\begin{proof}[Proof of Proposition~{\rm\ref{prop:G}}] The series $\Gamma_1$ has also been expressed by Bouttier \emm et al., in terms of the series $R$ and $S$ of~\eqref{relS} (see~\cite[Eq.~(2.6)]{bdg2002}): \begin{equation}\label{G1} \Gamma_1= zS- u \sum_{i\ge2}\sum_{j\ge 0} g_{2i+j-1} {{2i+j-2} \choose
{i-2,i,j}}R^i S^j. \end{equation} This gives the first part of Proposition~\ref{prop:G}. For the second part, we observe that $\Gamma_1$ is by definition the coefficient of $h_1$ in the series $\bar M(z,u;g_1, \ldots ; h_1, \ldots)$ defined above Lemma~\ref{lem:cont}. Hence it follows from~\eqref{relM} that $\Gamma_1'=S$ (this can also be derived combinatorially from~\cite{bdg2002}). \end{proof}
\subsection{When the root edge is outside the forest}
\label{subsec:outside}
We now focus on
forested maps such that the root edge is outside the forest. Let $H(z,u)$ denote the associated generating function.
\begin{prop}\label{prop:H} The generating function
of $p$-valent forested maps where the root edge is outside the forest is \begin{multline}
H(z,u) =\bar u zR +\bar u zS^2 -\bar u z^2 \label{H-expr}\\ -2S \sum_{i \geq 2}\sum_{j \geq 0} t_{2i+j-1}
{ 2 i + j - 2 \choose i-2,i,j } R^i S^j -
\sum_{i \geq 3}\sum_{j \geq 0} t_{2i+j-2} { 2 i + j - 3 \choose i-3,i,j } R^i S^j, \end{multline} where $\bar u=1/u$, the series $R$ and $S$ are defined by~{\rm{(\ref{defR}-\ref{defS})}} and the numbers $t_k$ by~\eqref{deftk}.
When $p$ is even, then $S=0$ and the first double sum disappears.
In this case, we also have a very simple expression of $H'_z(z,u)$: \begin{equation}\label{Hprime-R} H'_z(z,u)= 2\bar u (R-z). \end{equation} \end{prop}
Again, the key to this result is to relate $H(z,u)$ to a well-understood generating function\ of maps --- here, the generating function\ $M\equiv M(z,u;g_1, g_2, \ldots )$ that counts rooted planar maps with a weight $z$ per face and $ug_k$ per vertex of degree $k$.
\begin{lem} The following analogue of Lemma~{\rm\ref{lem:cont}} holds: $$
H(z,u) = \bar u \, M(z,u;t_1, t_2, \ldots). $$
\end{lem} \begin{proof} Let us consider again the bijection used in the proof of Lemma~\ref{lem:cont}: the fact that the root edge of $M$ is not in the forest $F$ means that, in the corner-rooted tree associated with the root vertex of $M'$, the root half-edge is a leaf. It is then equivalent to root this tree at this leaf. \end{proof} \begin{proof}[Proof of Proposition~{\rm\ref{prop:H}}] The first part of the proposition follows from the known characterization of $M$ (see~\cite[Eq.~(2.1)]{bdg2002}): $$ M= \frac{\Gamma_1^2+\Gamma_2}{z} -z^2, $$ where $\Gamma_1$ is given by~\eqref{G1} and $$ \Gamma_2=z^2R-uz\sum_{i\ge 3}\sum_{j\ge 0} g_{2i+j-2}{{2i+j-3}\choose {i-3,i,j}} R^i S^j - u^2 \left( \sum_{i\ge2}\sum_{j\ge 0} g_{2i+j-1} {{2i+j-2} \choose
{i-2,i,j}}R^i S^j\right)^2, $$ with $R$ and $S$ satisfying~\eqref{relS}. This gives the first part of the proposition.
Observe that $M(z,u;g_1, g_2, \ldots)=u \bar M(z,u;g_1,g_2, \ldots; g_1,g_2, \ldots)$ where $\bar M$ is defined just above Lemma~\ref{lem:cont}. When $p$ is even, the maps obtained by contracting forests have even degrees ($g_{2k+1}=0$ for all $k$), the series $S$ given by~\eqref{relS} vanishes, and the combination of~\eqref{relM} and~\eqref{relS} gives $\bar M'_z(z,u;g_1,g_2, \ldots; g_1,g_2, \ldots)=2\bar u (R-z)$. Thus $H'_z= \bar u M'_z= \bar M'_z=2\bar u (R-z)$, as stated in~\eqref{Hprime-R}. \end{proof}
\section{Differential equations} \label{sec:de}
The equations established in the previous section imply that series counting regular forested maps are D-algebraic. We compute explicitly a few differential equations.
\subsection{The general case} \label{sec:da-general}
\begin{theo}\label{Dalg} The generating function $F(z,u)$ of $p$-valent forested maps is D-algebraic (with respect to $z$). The same holds for the series $G(z,u)$ and $H(z,u)$ of Propositions~\ref{prop:G} and~\ref{prop:H}. \end{theo}
\begin{proof} We start from the expression~\eqref{frs} of $F'(z,u)$ (as we always differentiate with respect
to $z$, we simply write $F'(z,u)$ for $F'_z(z,u)$). We first observe that the doubly hypergeometric series $\theta$, $\Phi_1$, $\Phi_2$ are D-finite (this follows from the closure properties of D-finite power series~\cite{lipshitz-df}).
Then, by differentiating (\ref{defR}) and \eqref{defS} with respect to $z$, we obtain rational expressions of $R'$ and $S'$ in terms of $u$ and the partial derivatives $\partial \Phi_\ell/\partial x$ and $\partial \Phi_\ell/\partial y$, evaluated at $(R,S)$, for $\ell=1,2$. (Indeed, differentiating \eqref{defR} and \eqref{defS} gives a linear system in $R'$ and $S'$. Its determinant is a power series in $z$ with coefficients in ${\mathbb Q}[u]$. It is non-zero, since it equals $1$ at $u=0$.)
Let $\mathbb{K}$ be the field $\mathbb Q(u)$. Using~\eqref{frs} and the previous
point, it is now easy to prove by induction that for all $k\geq 1$,
there exists a rational expression of $F^{(k)}(z,u)$ in
terms of $$ \left\{\frac{\partial^{i+j}\Phi_\ell}{\partial x^i \partial
y^j}(R,S),\frac{\partial^{i+j}\theta}{\partial x^i \partial
y^j}(R,S)\right\}_{i \geq 0,j \geq 0, \ell \in \{1,2\}} $$ with coefficients in $\mathbb{K}$. But since $\theta$, $\Phi_1$ and $\Phi_2$ are D-finite, the above set of series spans a vector space of finite dimension $d$ over $\mathbb{K}(R,S)$. Therefore there exist $d$ elements $\varphi_1,\dots,\varphi_d$ in this space, and rational functions $A_k \in \mathbb{K}(x,y,x_1,\dots,x_d)$, such that $F^{(k)} (z,u) = A_k(R,S,\varphi_1,\dots,\varphi_d)$ for all $k \geq 1$.
Since the transcendence degree \cite[p.~254]{lang}
of
$\mathbb{K}(R,S,\varphi_1,\dots,\varphi_d)$ over $\mathbb{K}$ is (at most) $d+2$, the $d+3$
series $F^{(k)} (z,u)$, for $1\le k\le d+3$, are algebraically dependent, so that $F'$ (and thus $F$) is D-algebraic.
The proof is similar for the series $G(z,u)$ and $H(z,u)$, with $\theta$ replaced by the appropriate D-finite series derived from~\eqref{G-expr} and~\eqref{H-expr}. Moreover, since these two expressions involve $z$ explicitly, the field ${\mathbb Q}(u)$ used in the above argument must be replaced by ${\mathbb Q}(z,u)$.
\end{proof}
\subsection{The 4-valent case} \label{sec:de4}
We specialize the above argument to the case $p=4$. As explained in the second remark following Theorem~\ref{thm:equations}, the series $S$ vanishes and $F'(z,u)$ is given by the system~\eqref{system-simple}, with \begin{equation}\label{theta-phi-4V}
\theta(x) = 4 \sum_{i \geq 2} \frac{(3i-3)!}{(i-2)!i!^2} x^i
\quad \hbox{and} \quad \Phi(x) = \sum_{i \geq 2}
\frac{(3i-3)!}{(i-1)!^2i!} x ^i. \end{equation} The series $\theta(x)$, $\Phi(x)$ and their derivatives lie in a 3-dimensional vector space over ${\mathbb Q}(x)$ spanned (for instance) by $1$, $\Phi(x)$ and $\Phi'(x)$. This follows from the following equations, which are easily checked: \begin{equation}\label{phi-second} x (27x-1 ) \Phi'' ( x )+6\Phi( x ) +6x = 0, \end{equation} \begin{equation} \label{thetrat}
3\theta(x)= 2(27x-1 ) \Phi'(x) -42\Phi(x) + 12x. \end{equation} By the argument described above, we can now express first $R'$, and then $F'$ and all its derivatives as rational functions of $u$, $R$, $\Phi(R)$ and $\Phi'(R)$. But since $R=z+u\Phi(R)$, this means a rational function of $u$, $z$, $R$ and $\Phi'(R)$. We compute the explicit expressions of $F'$, $F''$ and $F'''$, eliminate $R$ and $\Phi'(R)$ from these three equations, and this gives a differential equation of order 2 and degree 7 satisfied by $F'$, the details of which are not particularly illuminating: $$
\substack{9\,{F'}^{2}{F''}^{5}{u}^{6}+36\,{F'}^{2}{F''}^{3 }F'''\,{u}^{5}z+144\,{F'}^{2}{F''}^{4}{u}^{5}-12\,
\left( 21\,z-1 \right) F'\,{F''}^{5}{u}^{5}+432\,{F' }^{2}{F''}^{2}F'''\,{u}^{4}z-48\, \left( 24\,z-1 \right) F'\,{F''}^{3}F'''\,{u}^{4}z \\ +864\,{F'}^{2}{F'' }^{3}{u}^{4}-96\, \left( 27\,z-2 \right) F'\,{F''}^{4}{u}^{ 4}+4\, \left( 27\,z-1 \right) \left( 15\,z-1 \right) {F''}^{5}{u }^{4}+1728\,{F'}^{2}F''\,F'''\,{u}^{3}z-288\, \left( 21 \,z-2 \right) F'\,{F''}^{2}F'''\,{u}^{3}z \\ +10368\,F'\,{F'''}^{2}{u}^{2}{z}^{3}+16\, \left( 27\,z-1 \right) \left( 21\,z-1 \right) {F''}^{3}F'''\,{u}^{3}z+2304\,{F'}^{2}{ F''}^{2}{u}^{3}-288\, \left( 31\,z-4 \right) F'\,{F''}^{3}{u}^{3} \\ -64\, \left( 6\,uz-162\,{z}^{2}+33\,z-1 \right) {F''}^ {4}{u}^{3}+2304\,{F'}^{2}F'''\,{u}^{2}z-2304\, \left( 6\,z-1
\right) F' \,F''\,F'''\,{u}^{2}z \\ -192\, \left( 8\,uz-54\,{z}^{2}+29\,z-1 \right) {F''}^{2}F'''\,{u}^{2}z-768\,
\left( 2\,u+189\,z-7 \right) {F'''}^{2}u{z}^{3}+2304\,{F'}^ {2}F''\,{u}^{2}-3072\, \left( 3\,z-1 \right) F'\,{F''} ^{2}{u}^{2} \\-192\, \left( 24\,uz-27\,{z}^{2}+55\,z-2 \right) {F''} ^{3}{u}^{2} -1536\, \left( 21\,z-2 \right) F'\,F'''\,uz-768\,
\left( 12\,uz+81\,{z}^{2}+24\,z-1 \right) F''\,F'''\,uz+1536 \, \left( 9\,z+2 \right) F'\,F''\,u\\ -512\, \left( 39\,uz+81 \,{z}^{2}+51\,z-2 \right) {F''}^{2}u+36864\,F'\,z-1024\,
\left( 12\,uz-162\,{z}^{2}+33\,z-1 \right) F'''\,z-1024\, \left( 36\,uz+27\,z-1 \right) F''-24576\,z =0.} $$ As discussed in Section~\ref{sec:final}, we conjecture that $F$ does not satisfy a differential equation of order~2.
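The identities \eqref{phi-second} and \eqref{thetrat} used above can also be checked mechanically on truncated series, as in the following SymPy sketch (not part of the paper); the truncation order is arbitrary.
\begin{verbatim}
# SymPy sketch (not from the paper): checking (phi-second) and (thetrat)
# on truncations of the 4-valent series theta and Phi.
import sympy as sp

x = sp.symbols('x')
N = 12   # truncation order (arbitrary)

Phi = sum(sp.factorial(3*i-3)/(sp.factorial(i-1)**2*sp.factorial(i))*x**i
          for i in range(2, N))
theta = 4*sum(sp.factorial(3*i-3)/(sp.factorial(i-2)*sp.factorial(i)**2)*x**i
              for i in range(2, N))

def low(e, n):
    """Keep only the terms of degree < n in x."""
    return sp.expand(e).series(x, 0, n).removeO()

# both residuals vanish up to the order allowed by the truncation
print(low(x*(27*x-1)*Phi.diff(x, 2) + 6*Phi + 6*x, N-1))            # 0
print(low(3*theta - 2*(27*x-1)*Phi.diff(x) + 42*Phi - 12*x, N-1))   # 0
\end{verbatim}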
We have applied the same method to the series $H$ of Proposition~\ref{prop:H}: $$ H(z,u)=\bar u z R-\bar u z^2-\Lambda(R) $$ where $$ \Lambda(x)= \sum_{i\ge 3} \frac{(3i-6)!}{(i-3)!(i-2)!i!}x^i $$ satisfies $$ 30\Lambda(x)=x(27x-1) \Phi'(x)+(1-24x)\Phi(x)+3x^2. $$
This gives for $H$ an equation of order 2 and degree 3: \begin{multline*} 3\, ( u+1 ) {u}^{2}{H'}^{2}H'' + 12\,{u}^{2}zH' H'' +6\, ( u-8 ) u{H'}^{2} +240\,H \\ +4\, ( 6\,uz-54\,z+1 ) H'
+4\, (3\,u{z}^{2}+30\,uH +27\,{z}^{2}-z )H''+24\,{z}^{2}=0 . \end{multline*} One reason for this more modest size is the simplicity of the expression~\eqref{Hprime-R} of $H'$.
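As a sanity check (not part of the paper), the following SymPy sketch recomputes truncations of $R$ and of $H=\bar u zR-\bar u z^2-\Lambda(R)$, and verifies, up to the (arbitrary) truncation order, both the identity \eqref{Hprime-R} and the differential equation displayed above.
\begin{verbatim}
# SymPy sketch (not from the paper): checking (Hprime-R) and the order-2
# differential equation for H on truncated series (4-valent case).
import sympy as sp

z, u, x = sp.symbols('z u x')
N = 8   # truncation order (arbitrary)

Phi = sum(sp.factorial(3*i-3)/(sp.factorial(i-1)**2*sp.factorial(i))*x**i
          for i in range(2, N+1))
Lam = sum(sp.factorial(3*i-6)
          / (sp.factorial(i-3)*sp.factorial(i-2)*sp.factorial(i))*x**i
          for i in range(3, N+1))

def low(e, n):
    """Keep only the terms of degree < n in z."""
    return sp.expand(e).series(z, 0, n).removeO()

R = sp.Integer(0)
for _ in range(N):                       # R = z + u*Phi(R), order by order
    R = low(z + u*Phi.subs(x, R), N)

H  = low(R*z/u - z**2/u - Lam.subs(x, R), N)
H1 = sp.diff(H, z)                       # correct modulo z^(N-1)
H2 = sp.diff(H, z, 2)                    # correct modulo z^(N-2)

print(low(H1 - 2*(R - z)/u, N-1))        # identity (Hprime-R):  0

ode = (3*(u+1)*u**2*H1**2*H2 + 12*u**2*z*H1*H2 + 6*(u-8)*u*H1**2
       + 240*H + 4*(6*u*z - 54*z + 1)*H1
       + 4*(3*u*z**2 + 30*u*H + 27*z**2 - z)*H2 + 24*z**2)
print(low(ode, N-2))                     # the differential equation:  0
\end{verbatim}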
\subsection{The cubic case} \label{sec:cubic-de}
We start from the expression of $F'$ given in Theorem~\ref{thm:equations}. We now have to deal with series $\theta$, $\Phi_1$ and $\Phi_2$ in two variables: $$
\theta(x,y) = 3 \sum_{i \geq 0} \sum_{\substack{j \geq 0 \\ 2i+j \geq 3 }} {\frac { \left( 4\,i+2\,j-4 \right) !}{ \left( 2i+j-3 \right) !\,
i!^{2}j!}}
x^iy^j, $$ \begin{equation}\label{phi1}
\Phi_1(x,y) = \sum_{i \geq 1}\sum_{\substack{j \geq 0 \\ 2i+j \geq 3 }} \frac{(4i+2j-4)!}{(2i+j-2)!\,(i-1)!\,i!\,j!}x^iy^j, \end{equation} \begin{equation}\label{phi2}
\Phi_2(x,y) = \sum_{i \geq 0}\sum_{\substack{j \geq 0 \\ 2i+j \geq 2 }} \frac{(4i+2j-2)!} {(2i+j-1)!i!^2j!}x^iy^j. \end{equation} Let us first observe that $$ \theta(x,y)=-2\Phi_1(x,y)+(1-y)\Phi_2(x,y)-2x-y^2. $$ Consequently, Theorem~\ref{thm:equations} gives: \begin{equation} \label{F-cubic} F'=2z\bar u +\bar u S -(1+\bar u)(2R+S^2). \end{equation} Then, the summations over the variable $j$ that occur in $\Phi_1$ and $\Phi_2$ can be performed explicitly, which gives to the cubic case a one-variable flavour. Indeed, \begin{eqnarray}
\Phi_1(x,y) &=& \left( 1-4y \right) ^{3/2}\,{\Psi_1} \left( t\right) -x,\label{pp1} \\ \Phi_2(x,y) &=& \sqrt {1-4y}\,{\Psi_2} \left(t \right) + \frac 1 4\left({1-\sqrt{1-4y}}\right)^2, \label{pp2}
\end{eqnarray} where $t=x/(1-4y)^2$ and $$ \Psi_1(z) = \sum _{i\ge 1}{\frac { \left( 4\,i-4 \right) !}{
\left( 2\,i-2 \right) !\,i!\, \left( i-1 \right) !}}\,{z}^{i}, \quad \Psi_2(z) = \sum_{i\ge 1}{\frac { \left( 4\,i-2 \right) !}{
\left( 2\,i-1 \right) !\, i! ^{2}}}\,{z}^{i}.
$$
Our system thus reads: \begin{eqnarray}
\label{phi1enpsi} R &=&z+u \left( 1-4\,S \right) ^{3/2}\,{\Psi_1} \left(
T\right) -uR, \\
\label{phi2enpsi} S&=&u \sqrt {1-4S}\,{\Psi_2} \left(
T
\right) + \frac u 4\left({1-\sqrt{1-4S}}\right)^2,
\end{eqnarray} with $T=R/(1-4S)^2$.
The series $\Psi_1(z)$, $\Psi_2(z)$ and their derivatives lie in a 3-dimensional vector space over ${\mathbb Q}(z)$ spanned (for instance) by 1, $\Psi_1(z)$ and $\Psi_2(z)$. This follows from the following identities, which are easily checked: \begin{equation}\label{ed-cubic} (1-64z) \Psi_1'(z)+ 48\Psi_1(z)+ 2\Psi_2(z)=1 ,
\quad \quad z (1-64 z) \Psi_2'(z)+ 6\Psi_1(z)+ 16 z\Psi_2(z)=8 z. \end{equation}
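These identities can be verified mechanically on truncated series, as in the following short SymPy sketch (not part of the paper); the truncation order is arbitrary.
\begin{verbatim}
# SymPy sketch (not from the paper): checking the identities (ed-cubic)
# satisfied by Psi_1 and Psi_2, on truncated series.
import sympy as sp

z = sp.symbols('z')
N = 12   # truncation order (arbitrary)

Psi1 = sum(sp.factorial(4*i-4)
           / (sp.factorial(2*i-2)*sp.factorial(i)*sp.factorial(i-1))*z**i
           for i in range(1, N))
Psi2 = sum(sp.factorial(4*i-2)/(sp.factorial(2*i-1)*sp.factorial(i)**2)*z**i
           for i in range(1, N))

def low(e, n):
    return sp.expand(e).series(z, 0, n).removeO()

print(low((1-64*z)*Psi1.diff(z) + 48*Psi1 + 2*Psi2 - 1, N-1))          # 0
print(low(z*(1-64*z)*Psi2.diff(z) + 6*Psi1 + 16*z*Psi2 - 8*z, N-1))    # 0
\end{verbatim}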
By the argument of Section~\ref{sec:da-general}, we can now express $R'$ and $S'$ as rational functions of $u$, $R$, $S$, $\Psi_1(T)$ and $\Psi_2(T)$.
But $\Psi_1(T)$ and $\Psi_2(T)$ can be expressed rationally in terms of $z$, $u$, $R$ and $\sqrt{1-4S}$ using~\eqref{phi1enpsi} and~\eqref{phi2enpsi}. Hence we obtain rational expressions in $u$, $z$, $R$ and $\sqrt{1-4S}$. In fact no square root occurs: \begin{eqnarray}
R'&=& \frac{R(48z-1+16(u+1)R+2(3+u)S-8(u+1)S^2)}{D} ,\nonumber \\ S'&=&-\frac{2(3z+(u-3)R-12zS+4(u+1)RS)}D,\nonumber \end{eqnarray} with $$ D=36z^2+(24z-1+24uz)R+4(u+1)RS-4(u+1)^2RS^2+4(u+1)^2R^2.
$$ Combining these two equations with~\eqref{F-cubic}, one can now express $F'$, $F''$ and $F'''$ in terms of $u$, $z$, $R$ and $S$, and then eliminate $R$ and $S$ to obtain a differential equation of order 2 satisfied by $F'$ (of degree 17).
For the generating function\ $G(z,u)$ of quasi-cubic forested maps (Proposition~\ref{prop:G}), we replace~\eqref{F-cubic} by $$ 10G= (1+\bar u) \left(z-R+6zS+2(u+1)RS\right), $$ and obtain a differential equation of order 2 and degree 5. It becomes a bit simpler when we rewrite $G=(W+z\bar u)/2$: \begin{multline*}
0= \left( 3\,{u}^{4}z {{W}'} ^{4}-{u}^{3} ( 5\,W
u-uz+z ) {{W}'} ^{3}+4\, ( u+1 ) (
5\,W u-uz+z ) ^{2} \right) {W}''\\
-48\,{u}^{2}z ( u+1) {{W}'} ^{3}+8\,u ( u+1 ) ( 5\,W u-uz+z ) {{W}'} ^{2}+4\, ( u^2-1) ( 5\,W u-uz+z ){{W}'}. \end{multline*}
Introducing the series $W$ is natural in the solution of the
Potts model presented in~\cite{bernardi-mbm-de}, where the above
equation was first obtained.
\section{Combinatorics of forested trees} \label{sec:u+1}
Equation~\eqref{F-Tutte}, and the positivity of the Tutte coefficients, show that the series $F(z,u)$ that counts $p$-valent forested maps has non-negative coefficients when expanded in $(1+u)$. We say that it is \emm $(u+1)$-positive,. Section~\ref{sec:tutte} presents several combinatorial descriptions of $F(z,\mu-1)$ (see~\eqref{F-act},~\eqref{F-dual},~\eqref{F-sandpile},~\eqref{F-Potts}).
This will lead us to study the asymptotic behaviour of the coefficient of $z^n$ in $F(z,u)$ not only for $u\ge 0$, but for $u\ge -1$.
In this study, we will need to know that several other series
are also $(u+1)$-positive. We prove this thanks to a combinatorial argument that applies to certain classes of \emm forested trees,.
\subsection{Positivity in $(1+u)$} \label{sec:++}
Let $T$ be a tree having at least one edge, and $\mathcal F$ a set of spanning forests of $T$. We define a property of $\mathcal F$ that guarantees that the
generating function $A_\mathcal F(u)$ that counts forests of $\mathcal F$ by the
number of components
is $(u+1)$-positive (after division by $u$).
Let $F\in \mathcal F$, and let $e$ be an edge of $T$. By \emm flipping, $e$ in the forest $F$, we mean adding $e$ to $F$ if it is not in $F$, and removing it from $F$ otherwise. This gives a new forest $F'$ of $T$. We say that $e$ is \emm flippable, for $F$ if $F'$
still belongs to $\mathcal F$. We say that $\mathcal F $ is \emph{stable} if for each $F \in \mathcal F$, \begin{enumerate} \item[(i)] every edge of $T$ not belonging to $F$ is flippable, \item[(ii)] flipping a flippable edge gives a new forest with the same set of flippable edges. \end{enumerate} We say that a set $\mathcal E$ of forested trees is \emm stable, if, for each tree $T$, the set of forests $F$ such that $(T,F)\in \mathcal E$ is stable. We consider below a generating function\ $E(z,u)$ of $\mathcal E$, where each forested tree $(T,F)$ is weighted by $z^n u^k$, where $n$ is the size of $T$ (number of edges, of leaves...) and $k$ the number of components of $F$, minus $1$. \begin{lem} \label{pose} With the above notation, assume $\mathcal F $ is stable. Then all elements of $\mathcal F$ have the same number, denoted by $f$, of flippable edges. The generating function\ of forests of $\mathcal F$, counted by components, is $ A_\mathcal F(u)= u(1+u)^{f}. $ Consequently, if $\mathcal E$ is a stable set of forested trees, then $E(z,u)$ is $(u+1)$-positive. \end{lem}
\begin{proof} Condition (i) implies that the forest $F_{\max}$
consisting of all edges of $T$ belongs to~$\mathcal F$. Moreover, we can
obtain $F_{\max}$ from any forest $F$ of $\mathcal F$ by adding iteratively
flippable edges. By Condition (ii), this implies that any forest $F$ of
$\mathcal F$ has the same set of flippable edges as $F_{\max}$. It also implies
that, to construct a forest $F$ of $\mathcal F$, it suffices to choose, for each
flippable edge of $F_{\max}$, whether it belongs to $F$ or not. Since
$F_{\max}$ has a unique component, and since deleting an edge from a forest
increases by 1 the
number of components, the expression of $A_\mathcal F(u)$ follows. \end{proof}
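To see the lemma at work in the simplest case, take for $\mathcal F$ the set of \emph{all} spanning forests of a tree $T$: conditions (i) and (ii) hold trivially, every edge is flippable, and the lemma gives $A_\mathcal F(u)=u(1+u)^{f}$ with $f$ the number of edges of $T$. The following Python sketch (not part of the paper) checks this by brute force on an arbitrary small tree.
\begin{verbatim}
# Python sketch (not from the paper): brute-force check of the lemma when
# F is the set of ALL spanning forests of a tree T (a stable set in which
# every edge is flippable): then A_F(u) = u*(1+u)^f with f = #edges of T.
from itertools import combinations
import sympy as sp

u = sp.symbols('u')

edges = [(0, 1), (1, 2), (1, 3), (3, 4), (3, 5)]   # an arbitrary small tree
n_vertices = 6

def components(kept):
    """Number of connected components of the spanning forest 'kept'."""
    parent = list(range(n_vertices))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for a, b in kept:
        parent[find(a)] = find(b)
    return len({find(v) for v in range(n_vertices)})

A = sum(u**components(kept)
        for k in range(len(edges) + 1)
        for kept in combinations(edges, k))

print(sp.expand(A - u*(1 + u)**len(edges)))        # -> 0
\end{verbatim}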
\subsection{Enriched blossoming trees} \label{sec:enriched}
We now apply the above principle to establish $(u+1)$-positivity properties for the series $R$ and $S$ given by~\rm{(\ref{defR}--\ref{defS})}, and for the series $\tilde S\equiv \tilde S(z,u)$ defined by \begin{equation} \label{defStilde} \tilde S(0,u)=0, \quad \tilde S = u\,\Phi_2(z,\tilde S), \end{equation} where $\Phi_2$ is given by~\eqref{phi}.
We consider
plane trees rooted at a half-edge, which we draw hanging from their root as in Figure~\ref{enrichi}.
A vertex of degree $d$ is thus seen as the parent of $d-1$ children. A \emm subtree, consists of a vertex $v$ and all its descendants. It is naturally rooted at the half-edge incident to $v$ and located just above it. A \emm blossoming tree, is a leaf-rooted plane tree with two
kinds of childless vertices: \emm{leaves},, represented by white arrows, and \emm{buds},, represented by black arrows. The edges that carry leaves and buds
are considered as half-edges. (This means that leaves and buds are not
actual nodes of the tree, so that a spanning forest of
a blossoming tree does not contain any of its half-edges.) The root half-edge does not carry any leaf or bud. Each leaf is assigned a \emm charge, $+1$ while each bud is assigned a charge $-1$. The \emm charge, of a blossoming tree is the difference between the number of leaves and buds that it contains. This definition is extended to subtrees.
\begin{defi}\label{def:enriched} Let $p\ge 3$. A $p$-valent blossoming tree $T$ equipped with a spanning forest $F$ is an \emm enriched R- (resp. S-) tree, if \begin{enumerate} \item[(i)] its charge is $1$ (resp. $0$), \item[(ii)] any subtree rooted at an edge \emm not in, $F$ has charge $0$ or $1$. \end{enumerate} We also consider as an enriched R-tree a single root half-edge carrying a leaf (Figure~\ref{enrichi}, left).
The pair $(T,F)$ is an \emm enriched \~S-tree, if each component of $F$ is incident to as many leaves as buds. In this case it is also an enriched S-tree. \end{defi} An enriched R-tree is shown in Figure~\ref{enrichi}. The readers who are familiar with the R- and S-trees of~\cite{bdg2002} will recognize that our enriched R- and S-trees are obtained from them by inflating each vertex of degree $k$ into a (leaf rooted) $p$-valent tree with $k$ leaves. The following proposition should not come as a surprise for them. \begin{prop}\label{prop:RSSt} Let $p\ge 3$. The series $R$, $S$ and $\tilde S$ defined by~\eqref{defR},~\eqref{defS} and~\eqref{defStilde}
count respectively enriched R-, S- and \~S-trees, by the number of leaves
($z$) and the number of components in the forest ($u$). \end{prop} \begin{proof}
The equations follow from a recursive decomposition of enriched trees. For instance,
an enriched R-tree is either reduced to a single leaf, with no forest at all (contribution: $z$),
or contains a root node. This node belongs to a component of the forest. This component is incident to several
half-edges (not belonging to the forest), one of them being the
root half-edge. Each of the other incident half-edges can be a leaf, a bud, or
the root of a non-trivial subtree. In this case, the definition of
enriched R-trees implies that this subtree is itself
an enriched R-tree (of charge~1) or an enriched S-tree (of charge
0). Since a single leaf is considered as an R-tree, we can say that
every half-edge incident to the root component of the forest carries
either a bud, or an R- or S-tree. If there are $i$ attached
R-trees, we must have $i-1$ buds for the tree charge to be
1, and an arbitrary number $j$ of
S-trees. The root component of the forest is then a leaf-rooted tree
with $k=2i+j$ leaves. This gives~\eqref{defR}, where the multinomial
coefficient occurring in $\Phi_1$ describes the order in which the $i$ R-trees, the $i-1$
buds and the $j$ S-trees are organized.
The proof of~\eqref{defS} is similar, but now as many buds as R-trees must be attached to the root component of the forest to make the charge 0.
Finally, an \~S-tree is an S-tree in which all attached R-trees are actually leaves. This explains that~\eqref{defStilde} is obtained from~\eqref{defS} by replacing each occurrence of $R$ by $z$. \end{proof}
\begin{figure}
\caption{\emph{Left:} The smallest enriched R-tree. \emph{Right:} An
enriched 5-valent R-tree having 10 leaves (white; charge
$+1$) and 9 buds (black; charge $-1$). }
\label{enrichi}
\end{figure}
We now come back to $(u+1)$-positivity. \begin{prop} The set of $p$-valent enriched R- (resp.~S-, \~S-) trees having at least one edge
is stable, in the sense of Section~{\rm\ref{sec:++}}.
\end{prop} \begin{proof}
For enriched R- and S-trees, an edge is flippable if and only if the attached subtree has charge 0 or 1, and this condition is independent of the forest.
For enriched \~S-trees, an edge is flippable if and only if the attached subtree is incident to as many leaves as buds, and this condition is again independent of the forest. \end{proof}
By combining this proposition with Lemma~\ref{pose} and Proposition~\ref{prop:RSSt}, we obtain: \begin{cor} \label{positiv} The series $\bar u({R-z})$, $ \bar u S$ {and} $\bar u{\tilde S}$ are $(u+1)$-positive. When $u=\mu-1$, they count respectively (non-empty) enriched R-, S- and \~S-trees, by the number of leaves
($z$) and the number of flippable edges ($\mu$). \end{cor}
\noindent When $p=3$ for instance, we have, writing $\mu=u+1$, \begin{eqnarray*}
\bar u(R-z)&=& 2(2\mu+1)z^2+ 4(10\mu^3+12\mu^2+9\mu+4)z^3 + O(z^4), \\ \bar u S&=& 2z+6(2\mu^2+2\mu+1)z^2+ 8 (16\mu^4+28\mu^3+30\mu^2+22\mu+9)z^3+ O(z^4), \\ \bar u \tilde S &=&2z+2(2\mu^2+8\mu+5)z^2+8(2\mu^4+12\mu^3+33\mu^2+40\mu+18)z^3 + O(z^4). \end{eqnarray*} We will also need the following variant of these results.
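Before that, note that the expansions above can be reproduced mechanically: the following SymPy sketch (not part of the paper) builds truncations of the cubic series $\Phi_1$ and $\Phi_2$ of \eqref{phi1}--\eqref{phi2}, solves the corresponding systems for $R$, $S$ and $\tilde S$ order by order, and re-expands the results in $\mu=u+1$; the truncation order is arbitrary.
\begin{verbatim}
# SymPy sketch (not from the paper): recomputing the expansions above.
# We build truncations of the cubic series Phi_1, Phi_2 of (phi1)-(phi2),
# solve R = z + u*Phi1(R,S), S = u*Phi2(R,S), St = u*Phi2(z,St) order by
# order, and re-expand (R-z)/u, S/u and St/u in the variable mu = u+1.
import sympy as sp

z, u, x, y, mu = sp.symbols('z u x y mu')
N = 4   # we work modulo z^N (arbitrary)

Phi1 = sum(sp.factorial(4*i+2*j-4)
           / (sp.factorial(2*i+j-2)*sp.factorial(i-1)
              *sp.factorial(i)*sp.factorial(j))*x**i*y**j
           for i in range(1, N) for j in range(0, N-i) if 2*i+j >= 3)
Phi2 = sum(sp.factorial(4*i+2*j-2)
           / (sp.factorial(2*i+j-1)*sp.factorial(i)**2*sp.factorial(j))
           *x**i*y**j
           for i in range(0, N) for j in range(0, N-i) if 2*i+j >= 2)

def trunc(e):
    return sp.expand(e).series(z, 0, N).removeO()

R, S, St = sp.Integer(0), sp.Integer(0), sp.Integer(0)
for _ in range(N):
    R  = trunc(z + u*Phi1.subs({x: R, y: S}))
    S  = trunc(u*Phi2.subs({x: R, y: S}))
    St = trunc(u*Phi2.subs({x: z, y: St}))

for name, ser in (("(R-z)/u", (R-z)/u), ("S/u", S/u), ("St/u", St/u)):
    poly = sp.expand(ser)                   # a polynomial in z and u
    print(name, "=", sp.collect(sp.expand(poly.subs(u, mu-1)), z))
\end{verbatim}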
\begin{lem} \label{positiv-bis} Define $\Phi_2$ by~\eqref{phi} and ${\tilde S}$ by~\eqref{defStilde}. The series $\pd {\Phi_2} y (z, \tilde S)$ is $(u+1)$-positive. \end{lem} \begin{proof}
Let us extend the definition of \~S-enriched trees to $p$-valent blossoming trees that, in addition to leaves and
buds, also contain one dangling half-edge, which carries no charge
(Figure~\ref{fig:dang}). Using the arguments of Proposition~\ref{prop:RSSt}, one can prove that the series
$u\pd {\Phi_2} y (z, \tilde S)$ counts such \~S-enriched trees for which
the half-edge is incident to the root component (as before, $z$
counts leaves and $u$ components).
\begin{figure}
\caption{A cubic enriched \~S-tree with a dangling half-edge incident to the
root component.}
\label{fig:dang}
\end{figure}
The set of such trees is again stable:
indeed, an edge is flippable if and only if it is flippable in the \~S-sense and does not lie on the path from the root to the dangling half-edge. \end{proof}
\section{Implicit functions: some general results} \label{sec:general}
The singular behaviour of a series $Y(z)$ defined by an implicit equation $H(z,Y(z)) = 0$ is well-understood when the singularities of $Y$ occur at a point $z$ such that $H$ is analytic at $(z,Y(z))$, but $H'_y(z, Y(z))=0$. A typical situation is the so-called \emm smooth implicit schema, of~\cite[Sec.~VII.4]{flajolet-sedgewick}, which leads
to square root singularities in $Y$.
However, in our asymptotic analysis of 4-valent and cubic forested maps, we will have to deal with implicit functions $Y$ that become singular at a point $z$ such that $H$ ceases to be analytic at $(z,Y(z))$. Our series $Y$
have non-negative real coefficients, which implies that their radius of convergence is a dominant singularity, and leads us to pay special attention to the behaviour of $Y$ along the positive real axis.
In this section, we thus examine how far a real series $Y$ defined by an implicit equation can be extended along the positive real axis. We first establish a general result for equations of the form $H(z,Y(z)) = 0$ (Proposition~\ref{lemmeasympt2}), which will apply for instance to the series $\tilde S$ defined by~\eqref{defStilde}. We then specialize this proposition to an \emm inversion equation, of the form $\Omega(Y(z))=z$ (Corollary~\ref{lemmeasympt}). This corollary will apply in particular to the series $R$ defined, in the 4-valent case, by $R=z+u\Phi(R)$ (see~\eqref{system-simple}).
\begin{prop} \label{lemmeasympt2} Let $H(x,y)$ be a real bivariate power series, analytic in a neighbourhood of $(0,0)$, satisfying $H(0,0) = 0$ and $ H'_y (0,0) > 0$. Let $Y\equiv Y(z)$ be the unique power series satisfying $Y(0)=0$ and $H(z,Y(z)) = 0$. Then $Y$ has a non-zero radius of convergence. Moreover, there exists $\rho > 0$ such that: \begin{enumerate}
\item[(a)] $Y$ has an analytic continuation, still denoted by
$Y$, in a neighbourhood of $[0,\rho]$, which is real valued,
\item[(b)]
$H$ has an analytic continuation, still denoted by $H$, in a neighbourhood
of $\{(z,Y(z)), z \in [0,\rho]\}$ ,
\item[(c)] $H(z,Y(z)) = 0$ for $z \in [0,\rho]$,
\item[(d)] $H'_ y (z,Y(z)) > 0$ for $z \in [0,\rho]$. \end{enumerate} Moreover, if $\tilde \rho$ is the supremum (in $\mathbb R\cup \{+\infty \}$) of these values $\rho$, at least one of the following properties holds: \begin{itemize} \item[(i)] $\tilde \rho = +\infty$, \item[(ii)] $\liminf_{z \rightarrow \tilde \rho^-} H'_y (z,Y(z)) = 0$, \item[(iii)] for each $y \in [\liminf_{z \rightarrow \tilde \rho^-}
Y(z),\limsup_{z \rightarrow \tilde \rho^-} Y(z)]$, $H$ is singular at $(\tilde \rho,y)$,
\item[(iv)] $\limsup_{z \rightarrow \tilde \rho^-} |Y(z)| = + \infty$. \end{itemize} \end{prop} We begin with a simple lemma. \begin{lem}\label{lem:real} Let $a<0<b$ and let $Y$ be a function analytic in a neighbourhood of $[a,b]$, whose Taylor expansion
at $0$ has real coefficients. Then $Y$ takes real values on $[a,b]$. \end{lem} \begin{proof} The functions $z\mapsto Y(z)$ and $z\mapsto \overline{ Y(\bar z)}$ are analytic and coincide in the neighbourhood of 0 where $Y(z)$ is given by its Taylor expansion. Hence they coincide everywhere, and $Y(z)$ is real when $z$ is real. \end{proof}
\begin{proof}[Proof of Proposition~\ref{lemmeasympt2}] The uniqueness of $Y$ comes from the fact that its coefficients can be computed by induction using the equation $H(z,Y(z))=0$ and the initial condition $Y(0)=0$ (the assumption $H'_y(0,0)\not =0$ is crucial here). Note that these coefficients are real, so that Lemma~\ref{lem:real} will apply. But let us first prove that $Y$ has a positive radius of convergence. Since $H'_y (0,0) > 0$, the analytic implicit function theorem at $z=0$ implies the existence of a locally analytic solution $\hat Y$ to the implicit equation $H(z,\hat Y(z)) = 0$ satisfying $\hat Y(0)=0$. The expansion of $\hat Y$ around $0$ must satisfy this equation as well (in the world of formal power series), and thus coincides with $Y$. Hence $Y$ has a positive radius.
Now consider the set $$
I = \left\{\rho >0 \ \left|\ \rho\textrm{ satisfies conditions
(a), (b), (c), (d)} \right.\right\}. $$ This is clearly an open interval of the form $(0, \tilde \rho)$, and it is non-empty since (a), (b), (c) and (d) hold in the neighbourhood of 0.
Assume that none of the properties (i), (ii), (iii) and (iv) hold at $\tilde \rho$. In particular, $\tilde \rho$ is finite. We will reach a contradiction by proving that $\tilde \rho \in I$.
Since (iv) does not hold, $Y$ is bounded on $[0,\tilde \rho)$. By continuity, the set of accumulation points of $\{Y(z), z \in [0,
\tilde \rho)\}$ is an interval, which coincides with $[\liminf_{z\rightarrow \tilde \rho^-}
Y(z),\limsup_{z\rightarrow \tilde \rho^-} Y(z)]$. For each $y$
in this interval, the point $(\tilde \rho,y)$ is in the closure of the
set $\{(z,Y(z)), z \in [0,\tilde \rho)\}$ where $H$ is known to be
analytic. Since (iii) does not hold, there exists an element $\tilde y $
in this interval such that $H$ is analytic at
$(\tilde \rho,\tilde y )$.
In particular, it is continuous at this point, and (c) implies that $H(\tilde \rho, \tilde y )=0$.
Finally, since (d) holds, but (ii) does not, $H'_ y (\tilde \rho,\tilde y ) > 0$.
These three properties allow us to apply the analytic implicit function
theorem: there exists an analytic function $\tilde Y$ defined in a
neighbourhood of $\tilde \rho$ such that $H(z,\tilde Y(z)) = 0$ and $\tilde
Y(\tilde \rho)=\tilde y $. We want to prove that $\tilde Y$ is an analytic
continuation of $Y$ at $\tilde \rho$, so that, in particular, the interval
$[\liminf_{z\rightarrow \tilde \rho^-} Y(z),\limsup_{z\rightarrow \tilde \rho^-} Y(z)]$ is reduced to the point
$\tilde y$.
Since $H'_y(\tilde \rho,\tilde y )>0$, there exists $\delta>0$ and a complex neighbourhood $V$ of $(\tilde \rho,\tilde y )$ such that for $(x,y)$ and $(x,y')$ in $V$, $$
|H(x,y)-H(x,y')|\ge \delta |y-y'|. $$ We can also assume that $\tilde Y(x)$ is well-defined for $(x,y)\in V$.
Since $(\tilde \rho,\tilde y )$ is an accumulation point of $\{(z, Y(z)), z \in (0, \tilde \rho)\}$, and $Y$ is continuous, there exists an interval $[z_0,z_1]\subset (0, \tilde \rho)$ such that $(z,Y(z))\in V$ for $z\in [z_0,z_1]$. Then for $z$ in this interval, $$
0=|H(z,Y(z))-H(z,\tilde Y(z))|\ge \delta |Y(z)-\tilde Y(z)|, $$ which shows that the analytic functions $Y$ and $\tilde Y$ coincide on $[z_0,z_1]$. So they coincide where they are both defined, and $\tilde Y$ is an analytic continuation of $Y$ at $\tilde \rho$. This tells us that (a) holds at $\tilde \rho$. Now (b) also holds by the choice of $\tilde y$, (c) holds by construction of $\tilde Y$, and (d) holds as well, as argued above. Thus $\tilde \rho$ belongs to $I$, which cannot be true since it is the supremum of the open interval $I$. Hence one of the properties (i), (ii), (iii) and (iv) must hold. \end{proof}
\begin{cor} \label{lemmeasympt} Let $\Omega(y)$ be a real power series such that $\Omega(0) = 0$ and $\Omega'(0) >0$. Let $\omega\in (0, +\infty ]$ be the first singularity of $\Omega$ on the positive real axis, if it exists, and $+\infty$ otherwise.
Let $Y\equiv Y(z)$ be the unique power series satisfying $Y(0)=0$ and $\Omega(Y(z)) = z$. Then $Y$ has a non-zero radius of convergence. Moreover, there exists $\rho \in (0, \infty]$ such that: \begin{enumerate}
\item $Y$ has an analytic continuation, still
denoted by $Y$, in a neighbourhood of $[0,\rho)$, which is real valued,
\item $Y$ is increasing on $[0,\rho)$, \item $Y(z) \in [0, \omega)$ for $z \in [0,\rho)$,
\item $\Omega(Y(z)) = z$ for $z \in [0,\rho)$,
\item $\lim_{z\rightarrow \rho ^- }Y(z)= \tau$ and $\lim_{y\rightarrow \tau^-}\Omega(y)= \rho$, \end{enumerate} where $$
\tau = \min \{ y \in [0,\omega)\ |\ \Omega'(y) = 0\} $$ if this set is non-empty, and $\tau=\omega$ otherwise. \end{cor} \noindent This result is stated as an existence result for $\rho$, but~(5) actually \emm determines, the value of $\rho$. \begin{proof} We specialize Proposition~\ref{lemmeasympt2} to $H(x,y)= \Omega(y) -x$. Clearly, $H$ is analytic around $(0,0)$, $H(0,0)=0$ and $H'_y(0,0)=\Omega'(0)>0$. We take for $\rho$ the value $\tilde \rho$ of Proposition~\ref{lemmeasympt2}. Then (1)
follows from (a).
Conditions (b) and (c) tell us that $\Omega$ has an analytic continuation on $\{Y(z), z \in [0, \rho)\}$, such that $\Omega(Y(z))=z$ for $z\in [0, \rho)$. By differentiating this identity, we obtain $Y'(z) \Omega'(Y(z))=1$, so that (2) now follows from (d). Thus the existence of an analytic continuation of $\Omega$ on $\{Y(z), z \in [0, \rho)\}$ now translates into (3).
The monotonicity of $Y$ also allows us to define $Y(\rho):=\lim_{z\rightarrow \rho ^- }Y(z)$, which is not necessarily finite.
Let us now derive (5) from the second series of properties of Proposition~\ref{lemmeasympt2}. We have already seen (this is (3)) that $Y(\rho)\le \omega$. By Condition (d) of Proposition~\ref{lemmeasympt2}, and by definition of $\tau$, the value $Y(\rho)$ is also less than or equal to $\tau$. Assume $Y(\rho) < \tau$. Then $\Omega$ is analytic at $Y(\rho)$, and by continuity of $\Omega$ and $Y$, $\rho=\Omega(Y(\rho))<+\infty$, so that (i) cannot hold. By definition of $\tau$, we cannot have
(ii). It is easy to see that Conditions (iii) and (iv) do not hold either. So we have reached a contradiction, and $Y(\rho) = \tau$. Returning to (4) gives $\rho = \Omega(Y(\rho)) = \Omega(\tau)$. \end{proof}
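As an illustration of the corollary (not taken from the paper), take $\Omega(y)=ye^{-y}$: then $\omega=+\infty$, the set $\{y\ge 0:\Omega'(y)=0\}$ has minimum $\tau=1$, and the corollary predicts that $Y$ (here the classical tree function) has radius $\rho=\Omega(1)=1/e$. The following SymPy sketch solves $\Omega(Y)=z$, that is $Y=ze^{Y}$, order by order; the truncation order is arbitrary.
\begin{verbatim}
# SymPy sketch (not from the paper): illustration of the corollary with
# Omega(y) = y*exp(-y).  Here omega = +infinity, tau = 1 is the first
# positive zero of Omega', and the predicted radius of Y is
# rho = Omega(1) = 1/e.  The equation Omega(Y) = z is equivalent to
# Y = z*exp(Y), which we solve order by order (Y is the tree function).
import sympy as sp

z = sp.symbols('z')
N = 12   # truncation order (arbitrary)

def trunc(e):
    return e.series(z, 0, N).removeO()

Y = sp.Integer(0)
for _ in range(N):
    Y = trunc(z*sp.exp(Y))

coeffs = [Y.coeff(z, n) for n in range(1, N)]
print(coeffs[:5])                   # [1, 1, 3/2, 8/3, 125/24] = n^(n-1)/n!
print(sp.N(coeffs[-1]/coeffs[-2]))  # coefficient ratio, slowly tending to
                                    # e = 1/rho
\end{verbatim}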
\section{Inversion of functions with a $z\ln z$ singularity} \label{sec:inversion}
The inversion of a locally injective analytic function is a well-understood topic: if $\Psi$ is analytic in the disk $C_s$ of radius $s$ centered at $0$ and $ \Psi(z)\sim z$ as $z\rightarrow 0$, then there exist $\rho \in (0, s)$ and $\rho'>0$, and a function $\Upsilon$ analytic on $C_\rho '$, taking its values in $C_\rho $, such that $$ \forall (y,z) \in C_\rho '\times C_\rho , \quad \quad \Psi(z)=y \Longleftrightarrow z= \Upsilon(y). $$ The aim of this section is to see to what extent this can be generalized to a function $\Psi(z)$ having a singularity in $z \ln z$ around $0$. Of course we cannot consider disks anymore, and our local domains will be of the following form: $$ D_{\rho , \alpha}:= \{z =re^{i\theta}: r \in (0,\rho )\ \hbox{ and } \
|\theta|<\alpha\}. $$ \begin{theo}[{\bf Log-Inversion}]
\label{lninversion} Let $\Psi$ be analytic on $D_{s,\pi}$ for some $s > 0$. Assume that as $z$ tends to $0$ in this domain, $$ \Psi(z) \sim -c z \ln z $$ with $c>0$. Then for each $\alpha \in (0,\pi)$, there exist $\rho \in( 0,s)$ and $\rho ' > 0$, and a function $\Upsilon$ analytic in $ D_{\rho ',\alpha}$, taking its
values in $D_{\rho , \pi}$, that satisfies $$ \forall (y,z) \in D_{\rho ', \alpha} \times D_{\rho , \pi}, \quad \quad \Psi(z)=y \Longleftrightarrow z= \Upsilon(y). $$
Moreover, as $y\rightarrow 0$ in $ D_{\rho ',\alpha}$, $$ \Upsilon(y) \sim - \frac y{c \ln y}. $$ \end{theo} The proof is rather long. The most difficult part is to prove the existence of a unique preimage of $y$ under $\Psi$ in $
D_{\rho , \pi}$, for each $y\in D_{\rho',\alpha}$
(Lemma~\ref{psinj}). This preimage is of course
$\Upsilon(y)$. Proving the analyticity of $\Upsilon$ is then a
simple application of the analytic implicit function theorem. In
order to prove Lemma~\ref{psinj}, we first study the injectivity and
surjectivity of the function $H: z\mapsto -z \ln z$ around
0 (Section~\ref{sec:H}), before transferring them to the function~$\Psi$
(Section~\ref{sec:H-Psi}).
\subsection{The function $z \mapsto -z\ln z$} \label{sec:H}
Consider the following function $$ \begin{array}{lcccllll}
H : &\mathbb C\setminus \mathbb R^-& \rightarrow &\mathbb C \\ &z &\mapsto &-z\ln z, \end{array}$$ where $\ln$ denotes the principal value of logarithm: if $z=re^{i\theta}$ with $r>0$ and $\theta \in (-\pi, \pi)$, then $\ln z= \ln r+i\theta$. We also define $\Arg z:=\theta$. Let us begin with a few elementary properties of $H$. \begin{lem} \label{argH} The function $H$ satisfies \begin{equation}\label{H-conj} H(\bar z) =\overline {H(z)}. \end{equation} For $z=re^{i\theta}$ with $r>0$ and $\theta \in (-\pi, \pi)$, \begin{equation}\label{module-H}
|H(z)|=
r \sqrt{\ln ^2 r + \theta^2}. \end{equation} The arguments of $z$ and $-\ln z$ have opposite signs. If in addition $r<1$, then $\Arg(-\ln z)\in (-\pi/2, \pi/2)$. Hence \begin{equation}\label{H-a}
\Arg H(z) = \Arg z + \Arg(-\ln z) = \theta + \arctan\left(\frac{ \theta}{\ln{r}}\right). \end{equation} If in addition $r \leq 1/ \sqrt e$, then \begin{equation}\label{4e}
|\Arg H(z)| \leq |\theta|. \end{equation} In particular, $H(z)\not \in \mathbb R^-$. \end{lem} \begin{proof} The first two identities are straightforward. The first part of~\eqref{H-a} follows from the fact that the arguments of $z$ and $-\ln z$ have opposite signs. The second part follows from $\Arg (-\ln z)\in (-\pi/2, \pi/2)$. Let us now prove~\eqref{4e}. Assume $\theta\ge 0$. Then $\Arg(-\ln z)\le 0$ and the first part of~\eqref{H-a} gives $\Arg H(z)\le \theta$.
Moreover, $\arctan x\ge x $ if $x\le 0$, and thus by the second part of~\eqref{H-a}, $$ \Arg H(z) \geq \theta + \frac{
\theta}{\ln{r}}
\geq \left(1+\frac{1} {\ln{(1/\sqrt e)}} \right) \theta
= - \theta
.$$ The case where $\theta \leq 0$ now follows using~\eqref{H-conj}. \end{proof} Observe that $H$ is not injective on $\mathbb C$: for instance, $H(i) = {\pi}/{2} =H(-i)$. However, $H$ is injective in a (slit) neighbourhood of $0$. \begin{prop} \label{Hinj} The function $H : z \mapsto -z\ln z$ is injective on
$D_{e^{-1},\pi}$. \end{prop}
\begin{proof} Assume there exist $z_1$ and $z_2$ in $D_{e^{-1},\pi}$ such that $H(z_1) = H(z_2)$. By Lemma~\ref{argH}, the value $H(z_1)$ is not real and negative, and thus $\ln H(z_1) = \ln H(z_2)$.
This lemma also implies that for $z \in D_{e^{-1},\pi}$, we have \begin{equation}\label{lnln} \ln H(z) = \ln z + \ln(-\ln z).
\end{equation}
Hence \begin{equation} \label{lll}
|\ln z_1- \ln z_2| = |\ln(-\ln z_1)-\ln(-\ln z_2)|.
\end{equation} Let $\kappa= -\max (\ln |z_1|, \ln|z_2|)>1$. Then $-\ln z_1$ and
$-\ln z_2$ lie in $\{z\:|\:\Rea (z)\ge \kappa\}$.
This set is convex, so the (vectorial) mean value inequality gives $$
|\ln(-\ln z_1)-\ln(-\ln z_2)| \leq |\ln z_1- \ln z_2|\,\sup_{z \in
[-\ln z_1,-\ln z_2]} |\ln '(z)| \le \frac 1 \kappa |\ln z_1-
\ln z_2|. $$
Combining this with~\eqref{lll} gives $|\ln z_1- \ln z_2|=0$, so that $z_1=z_2$. \end{proof}
We now address the surjectivity of the map $H$.
\begin{prop} \label{inHD} For $0<\alpha<\pi$ and $\rho$ small enough (depending on $\alpha$), we have $$
D_{-\,\rho\ln \rho,\alpha} \subset H(D_{\rho,\pi}). $$ \end{prop}
\begin{proof} We are going to prove that the inclusion holds for every
$\rho\in(0,1/e)$ satisfying \begin{equation}\label{cond-rho}
\arctan{\left(\frac{\pi}{|\ln{\rho}|}\right)}\le \pi-\alpha. \end{equation} Let us fix a complex number $se^{i \gamma}$ with $0<s<-\rho \ln \rho$
and $|\gamma|<\alpha$. We want to prove the existence of $r<\rho$ and $\theta \in (-\pi, \pi)$ such that $H(re^{i\theta})=s e^{i \gamma}$. We proceed in two steps.
\noindent \textbf{(1) There exists a continuous function $\theta :
(0,\rho) \rightarrow (-\pi,\pi)$ such that
$\forall r \in(0,\rho)$,}
$$ \Arg H(r\,e^{i\theta(r)})=\gamma. $$
\noindent \emph{Proof\/}. Fix $r\:\in\:(0,\rho)$. For $\theta \in (-\pi,\pi)$, Lemma~\ref{argH} gives $$ f(r,\theta):=\Arg H(r e^{i\theta})=\theta + \arctan \left( \frac \theta {\ln r}\right). $$ Differentiating with respect to $\theta$ gives $$ f'_\theta(r,\theta) = 1 + \frac{1}{\left(1+\frac {\theta^2}
{\ln^2 r} \right)\ln r } \geq 1 + \frac{1}{\ln r} > 0. $$ Hence $f(r, \theta)$ is a continuous increasing function of $\theta$, sending
$(-\pi,\pi)$
onto $(-\pi - \arctan (\pi /\ln r), \pi + \arctan (\pi
/\ln r))$. Since $r<\rho$ and $\rho$ satisfies~\eqref{cond-rho}, this interval contains $(-\alpha, \alpha)$, and thus the value $\gamma$. This proves the existence, and uniqueness (since $f$ increases), of $\theta(r)$.
Now in the neighbourhood of $(r, \theta(r))$, we can apply the implicit function theorem to the equation $f(r,\theta)=\gamma$, and this shows that $\theta$ is continuous on $(0, \rho)$.
\noindent \textbf{(2)
There exists $r \in (0,\rho)$ such that $|H(r\,e^{i\theta (r)})|=s$. }
\noindent \emph{Proof\/}. The function $$
r\mapsto |H(r\,e^{i\theta(r)})|= r\sqrt{\ln^2 r+\theta(r)^2} $$ is continuous on $(0, \rho)$. It tends to $0$ as $r$ tends to $0$, and to a value at least equal to $-\rho\ln \rho$ as $r$ tends to $\rho$.
Since $s <-\rho \ln \rho$, the intermediate value theorem implies that there exists
$r\in(0,\rho)$ such that $|H(re^{i\theta(r)})|=s$.
This completes the proof of the proposition. \end{proof}
\subsection{The Log-Inversion Theorem} \label{sec:H-Psi}
By combining Propositions~\ref{Hinj} and~\ref{inHD}, we see that for $\alpha \in (0, \pi)$ and $\rho$ small enough, every point of $D_{-\rho\ln \rho, \alpha}$ has a unique preimage under $H$ in $D_{\rho, \pi}$. We now want to adapt
this result to functions $\Psi$ that behave like $H$ in the
neighbourhood of the origin.
\begin{lem} \label{psinj} Let $\Psi$ be analytic on $D_{s,\pi}$ for some $s > 0$. Assume that as $z$ tends to 0 in this domain, $$ \Psi(z) \sim H(z) = -z \ln z. $$ For all $\alpha \in (0,\pi)$, there exist $\rho \in( 0,s)$ and $\rho' > 0$ such that every point of $D_{\rho',\alpha}$ has a unique preimage under $\Psi$ in $D_{\rho,\pi}$. \end{lem}
\begin{proof} By assumption,
$\Psi(z) -H(z)= o(z\ln z)=o(-|z| \ln |z|)$ as $z$ tends to 0.
Let $\rho \in( 0, s)$ be small enough for every $z \in D_{\rho,\pi}$ to satisfy \begin{eqnarray}
|\Psi(z)-H(z)| &<& - \min\left(\frac 1 2,\sin\left(\frac{\pi-\alpha}
4\right)\right) |z| \ln |z|, \label{eq-delta} \\
1 + \frac 1 {\ln |z|} &>& \frac 1 2 + \frac \alpha {\pi + \alpha}, \label{c1}
\\
|z \ln z| &\leq &- 2 |z| \ln |z|,\label{c5} \\
- \ln \frac{|z|}8 &\le& -2 \ln |z|, \label{c6} \end{eqnarray} and assume moreover that $\rho$ is also small enough for the following property to hold: \begin{equation} \label{inclusion}
D_{- \frac \rho 8 \ln \frac \rho 8,\alpha} \subset H(D_{\frac \rho 8,\pi}). \end{equation} This inclusion is made possible by Proposition~\ref{inHD}. Several of the conditions listed above amount to an explicit upper bound on $\rho$ (for instance, \eqref{c6} just means that $\rho \le 1/8$), but we will use them in the above form, as this is how they occur in the arguments below.
Now fix $y_0 \in D_{\rho',\alpha}$ with $\rho' = - \frac \rho 8 \ln \frac \rho 8$. We want to prove that $y_0$ has a unique preimage under $\Psi$ in $D_{\rho, \pi}$. By~\eqref{inclusion} and Proposition
\ref{Hinj}, it has a unique preimage under $H$, denoted by $z_0$, in $D_{\rho,
\pi}$ (in fact, $|z_0|< \rho/8$). We thus want to prove that the
functions $\Psi-y_0$ and
$H-y_0$ have the same number of roots in $D_{\rho, \pi}$, and we will
do so using Rouch\'e's theorem.
For $\varepsilon \in \left(0, {|z_0|}/ 8\right)$, let $\Gamma\equiv \Gamma^{(\varepsilon)}$ be the contour shown in Figure~\ref{fig:contour}. The interior of $\Gamma$ converges to $D_{\rho,
\pi}$ as $\varepsilon \rightarrow 0$. Hence for $\varepsilon$ small enough, it contains the point $z_0$, and we just need to prove that $\Psi-y_0$ and $H-y_0$ have the same number of roots inside $\Gamma^{(\varepsilon)}$ for every small enough $\varepsilon$. \begin{figure}
\caption{The contour $\Gamma^{(\varepsilon)}$.}
\label{fig:contour}
\end{figure}
By Rouch\'e's theorem, it suffices to show that $|\Psi-H| < |H - y_0|$ on $\Gamma$. Let us decompose $\Gamma$ into three (non-disjoint) parts : $$
\Gamma_1 = \Gamma \cap \{ z: |z| = \rho \},
\quad \Gamma_2 = \Gamma \cap \{z: |z| = \varepsilon \}
\quad \hbox{and} \quad \Gamma_3 = \Gamma \cap \left\{ z:|\Arg z| > \frac{\pi + \alpha} 2 \right\}.$$
We will use in the study of $\Gamma_1$ and $\Gamma_2$ the following elementary result. \begin{lem} \label{property1}
If $\rho \geq |z| \geq 8\,|z'|$ with $z, z' \in
\mathbb C\setminus(-\infty, 0]$, then $$
|z\ln z - z' \ln z'| \geq - \frac 1 2 |z| \ln |z|. $$ \end{lem} \begin{proof} We have the following lower bounds: \begin{eqnarray*}
|z\ln z - z' \ln z'| &\geq& |z \ln z| - |z' \ln z'| \\
& \geq& -|z|\ln |z| + 2 |z'| \ln |z'| \hskip 18mm \hbox{by \eqref{module-H} and \eqref{c5}}, \\
&\geq& -|z|\ln |z| + \frac {|z|} 4 \ln \frac{|z|} 8 \hskip 20mm
\hbox{because } |z'|\le |z|/8, \\
&\geq &- \frac 1 2 |z| \ln |z| \hskip 35mm \hbox{by \eqref{c6}}. \end{eqnarray*} \end{proof}
Since $|z_0|<\rho/8$, we can apply this lemma to $z\in \Gamma_1$ and $z'=z_0$. This gives, using \eqref{eq-delta}: $$
|H(z) - y_0| \geq - \frac 1 2 |z| \ln |z| > |\Psi(z)-H(z)| . $$
Since $\varepsilon < |z_0|/8$, applying Lemma~\ref{property1} to $z_0$ and $z \in \Gamma_2$ gives: $$
|y_0-H(z)| \geq - \frac 1 2 |z_0| \ln |z_0| \geq - \frac 1 2 |z| \ln
|z| > |\Psi(z)-H(z)|. $$ We are left with the contour $\Gamma_3$. If $z\in \Gamma_3$, we claim that \begin{equation}\label{arg-3}
|\Arg H(z)| \geq \alpha + \frac {\pi - \alpha} 4. \end{equation} By~\eqref{H-conj}, it suffices to prove this when $\Arg z\ge 0$. In this case, \begin{eqnarray*} \Arg H(z) &=& \Arg z +
\arctan\left(\frac{\Arg z} {\ln |z|}\right) \hskip 18mm \hbox{by \eqref{H-a}}, \\
&\geq &\left(1 +
\frac 1 {\ln {|z|}}\right) \Arg z \hskip 28mm \hbox{since } \arctan x\ge x, \\ & >& \left(\frac 1
2 + \frac \alpha {\pi + \alpha}\right) \Arg z \hskip 26mm \hbox{by \eqref{c1}}, \\ &
\geq &\alpha + \frac {\pi - \alpha} 4 \hskip 40mm \hbox{since }
\Arg z \ge \frac {\pi +\alpha} 2. \end{eqnarray*}
Hence~\eqref{arg-3} holds on $\Gamma_3$. But since $ |\Arg H(z_0)|=
|\Arg y_0|< \alpha$, we have \begin{equation}\label{arg-diff}
|\Arg H(z)-\Arg H(z_0)| > \frac {\pi - \alpha} 4. \end{equation}
We still need one more result to conclude. \begin{lem} For $\beta >0$ and complex numbers $a$ and $b$ in
$\mathbb C\setminus (-\infty,0]$, $$
|\Arg a-\Arg b|\geq \beta \ \ \Longrightarrow \ \ |a - b| \geq
|a|\sin\beta. $$ \end{lem}
\begin{proof} (1) If $|\Arg a-\Arg b| \leq \frac \pi 2$, then $$
|a-b|
= \left||a|e^{i (\Arg a-\Arg b)} - |b| \right| \geq \left|\Ima \left(|a|e^{i( \Arg a-\Arg b)}\right)\right| \geq |a|\sin\beta.$$
(2) If $|\Arg a-\Arg b| \geq \frac \pi 2$, then \begin{multline*}
|a-b|
= \left||a| - |b|e^{i( \Arg b-\Arg a)} \right| \geq
\left|\Rea \left( |a| - |b|e^{i( \Arg b-\Arg a)}
\right)\right| \\
= |a| - |b| \cos\left(\Arg b-\Arg a\right) \geq |a| \geq |a|\sin\beta. \end{multline*} \end{proof} By applying this lemma to~\eqref{arg-diff} with $\beta= (\pi-\alpha)/4$, we obtain \begin{eqnarray*}
|H(z) - y_0| &\geq & |H(z)| \sin\left(\frac{\pi-\alpha} 4\right) \\
& \geq &- \sin\left(\frac{\pi-\alpha} 4\right) |z| \ln |z| \hskip 20 mm \hbox{by \eqref{module-H}}\\
&>& |\Psi(z)-H(z)| \hskip 34mm \hbox{by \eqref{eq-delta}}. \end{eqnarray*}
We have finally proved that $|\Psi(z)-H(z)|<|H(z)-y_0|$ everywhere on the contour $\Gamma\equiv \Gamma^{(\varepsilon)}$, and we can now conclude that $\Psi-y_0$ has, like $H-y_0$, a unique root in $D_{\rho, \pi}$. \end{proof}
We are finally ready to prove the log-inversion theorem (Theorem~\ref{lninversion}).
\begin{proof}[Proof of Theorem~\ref{lninversion}] Upon writing $\Psi=c \Psi_1$ and $\Upsilon(y)=\Upsilon_1(y/c)$, we can assume without loss of generality that $c=1$. We then choose $\rho$ and $\rho'$ as in Lemma~\ref{psinj}. For $y_0 \in D_{\rho ' ,\alpha}$, we define $\Upsilon(y_0 )$ as the unique point $z_0$ of $D_{\rho, \pi}$ such that $\Psi(z_0 ) = y_0$. We now apply the analytic implicit function theorem to the equation $\Psi(\Upsilon(y)) = y$, in the neighbourhood of $(y_0 , z_0 )$. The function $\Psi$ is analytic at $z_0$ and locally injective by Lemma~\ref{psinj}. Therefore $\Psi' (z_0 ) \not = 0$, and there exists an analytic function $\bar \Upsilon$ defined in the neighbourhood of $y_0$ such that $\bar \Upsilon(y_0)= z_0$ and $\Psi (\bar \Upsilon(y)) = y$ in this neighbourhood.
This forces $\Upsilon(y)$ and $\bar \Upsilon(y)$ to coincide in a neighbourhood of $y_0$, and implies that $\Upsilon$ is analytic at $y_0$ --- and hence in the domain $D_{\rho', \alpha}$.
Let us conclude with the singular behaviour of $\Upsilon$ near $0$. The equation $\Psi(\Upsilon(y))=y$, combined with $\Psi(z)\sim -z \ln z$, implies that $\Upsilon(y) \rightarrow 0$ as $y\rightarrow 0$. Thus $$ y \sim -\Upsilon(y)\ln(\Upsilon(y)) $$
as $y\rightarrow 0$. Upon taking logarithms, and using~\eqref{lnln}, this gives $$ \ln y \sim \ln(\Upsilon(y)) +\ln (-\ln(\Upsilon(y))) \sim \ln(\Upsilon(y)) . $$ Combining the last two equations
finally gives $\Upsilon(y) \sim -y/\ln y$. \end{proof}
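As a numerical illustration (not taken from the paper) of Theorem~\ref{lninversion} in the model case $\Psi(z)=-z\ln z$ (so that $c=1$), the following Python sketch solves $\Psi(z)=y$ for a few small positive $y$ by bisection and compares the solution with the first-order approximation $-y/\ln y$; the sample values of $y$, the precision and the number of bisection steps are arbitrary.
\begin{verbatim}
# mpmath sketch (not from the paper): numerical illustration of the
# Log-Inversion theorem for Psi(z) = -z*log(z), i.e. c = 1.  For small
# y > 0 we solve Psi(z) = y on (0, 1/e), where Psi is increasing, and
# compare the solution with the first-order approximation -y/log(y).
from mpmath import mp, mpf, exp, log

mp.dps = 40

def Psi(z):
    return -z*log(z)

def solve(y, iterations=130):
    lo, hi = mpf('1e-60'), exp(-1)      # Psi(lo) < y < Psi(hi) = 1/e
    for _ in range(iterations):
        mid = (lo + hi)/2
        if Psi(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi)/2

for y in (mpf('1e-3'), mpf('1e-6'), mpf('1e-12')):
    z = solve(y)
    print(y, z, z/(-y/log(y)))          # the last ratio slowly tends to 1
\end{verbatim}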
\section{Asymptotics for 4-valent forested maps} \label{sec:asympt-4}
Let $F(z,u) = \sum_{n} f_n(u) z^n$ be the generating function\ of 4-valent forested
maps, given by Theorem~\ref{thm:equations}. That is, $f_n(u)$ counts
forested 4-valent maps with $n$ faces by the number of non-root
components. As recalled in Section~\ref{sec:tutte}, the polynomial
$f_n(\mu-1)$ has several interesting combinatorial descriptions in
terms of maps equipped with an additional structure, and we will
study the asymptotic behaviour of $f_n(u)$ for any $u\ge -1$.
Recall that $F(z,u)$ is characterized by~\eqref{system-simple} where $\theta$ and $\Phi$ are given by~\eqref{theta-phi-4V}. As discussed after Theorem~\ref{thm:equations}, $F(z,0)$ is explicit and given by~\eqref{Fz0}: $$
F(z,0)=\int \theta(z) dz= 4 \sum_{i \geq 2} \frac{(3i-3)!}{(i-2)!i!(i+1)!} z^{i+1}, $$ which makes the case $u=0$ of the following theorem a simple application of Stirling's formula.
\begin{theo}\label{tetra} Let $p=4$, and take $u\ge -1$. The radius of convergence of
$F(z,u)$ is \begin{equation}\label{rho} \rho_u =\tau -u\Phi(\tau)
\end{equation} where $\Phi$ is given by~\eqref{theta-phi-4V} and $$\left\{
\begin{array}{lll} \tau=1/27 & \hbox{if } u\le 0, \\
1-u\Phi'(\tau)=0 &\hbox{if } u>0. \end{array}\right. $$ The latter condition determines a unique $\tau \equiv \tau_u$
in $(0,1/27)$.
In particular, $\rho_u$ is an affine function of $u$ on $[-1,0]$: \begin{equation}\label{rho-affine} \rho_u= \frac 1 {27} -u\Phi\left(\frac 1 {27}\right)= \frac {1+u} {27} - u \frac{\sqrt 3}{12\pi} . \end{equation} The function $\rho_u$ is decreasing, real-analytic everywhere except at $0$, where it is still infinitely differentiable: as $u\rightarrow 0^+$, \begin{equation}\label{rho-0} \rho_u= \frac 1 {27} -u\Phi\left(\frac 1 {27}\right)+O\left(\exp\left(- \frac{2\pi}{\sqrt 3 u}\right)\right). \end{equation}
Let $f_n(u)$ be the coefficient of $z^n$ in $F(z,u)$. There exists a
positive constant $c_u$
such that $$ f_n(u) \sim \left\{\begin{array}{lll} \displaystyle c_u\, {\rho_u^{-n}}n^{-3}(\ln n)^{-2}& \hbox{ if } u\in[-1,0),\\ \displaystyle c_u\, {\rho_u^{-n}}{n^{-3} } & \hbox{ if } u = 0,\\ \displaystyle c_u \,{\rho_u^{-n}}{n^{- 5 /2}} & \hbox{ if } u > 0. \end{array}\right. $$ The constant $c_u$ is given explicitly in Propositions~\ref{prop:4v-pos} (for $u>0$) and~\ref{prop:4v-neg} (for $u<0$), and $c_0=2/(9\sqrt 3 \pi)$.
\end{theo} The exponent $-5/2$ found for $u>0$ is standard for planar maps (see for instance Tables 1 and 2 in~\cite{banderier-maps}). The behaviour for $u<0$ is much more surprising, and, to our knowledge, it is the first time that it is observed in the world of maps. A plot of $\rho_u$ is shown in Figure~\ref{fig:rhou}. Note that $\rho_{-1}=\sqrt 3/(12 \pi)$, a transcendental radius for the series counting 4-valent maps equipped with an internally inactive spanning tree.
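The value of $\rho_u$ for $u>0$ can be evaluated numerically from \eqref{rho} and the characteristic equation $u\Phi'(\tau)=1$. The following Python sketch (not part of the paper) does so by direct summation of the series \eqref{theta-phi-4V} and bisection; the sample values of $u$ and the truncation length are arbitrary, and the direct summation becomes impractical when $u$ is small, since $\tau_u$ is then very close to $1/27$.
\begin{verbatim}
# Python sketch (not from the paper): numerical evaluation of rho_u for
# u > 0, by solving u*Phi'(tau) = 1 on (0,1/27) and then applying (rho).
# Phi and Phi' are summed directly from the definition (theta-phi-4V);
# this is only practical when tau_u is not too close to 1/27, i.e. when
# u is not too small.
def phi_and_dphi(x, terms=20000):
    """Return (Phi(x), Phi'(x)) for 0 < x < 1/27 by direct summation."""
    phi, dphi = 0.0, 0.0
    c_term, d_term = 3*x**2, 6*x            # i = 2 terms of Phi and Phi'
    for i in range(2, terms):
        phi, dphi = phi + c_term, dphi + d_term
        r = (3*i)*(3*i-1)*(3*i-2)
        c_term *= x*r/(i*i*(i+1))
        d_term *= x*r/(i*i*i)
    return phi, dphi

def rho(u):
    lo, hi = 0.0, 1/27 - 1e-12              # bisection for u*Phi'(tau) = 1
    for _ in range(60):
        tau = (lo + hi)/2
        if u*phi_and_dphi(tau)[1] < 1:
            lo = tau
        else:
            hi = tau
    tau = (lo + hi)/2
    return tau - u*phi_and_dphi(tau)[0]

for u in (0.5, 1.0, 2.0):
    print(u, rho(u))                        # rho_u decreases with u
\end{verbatim}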
\begin{figure}
\caption{The radius $\rho_u$ of $F(z,u)$, as a function of $u\ge -1$.}
\label{fig:rhou}
\end{figure}
The proof of the theorem uses the \emm singularity analysis, of~\cite[Ch.~VI]{flajolet-sedgewick}. We thus need to locate the \emm dominant, singularities of the series $F'$ (that is, those of minimal modulus), and to find how $F'$ behaves in their vicinity. In order to do this, we begin with the series $R$, defined
by $R=z+u\Phi(R)$, and then move to $F'=\theta(R)$. We will find that both series have the same radius $\rho_u$. Moreover, since $F'$ and $\bar u(R-z)$ have non-negative coefficients in $z$, this radius is a singularity of each (by Pringsheim's theorem). We will prove that neither $F'$ nor $R$ have other dominant singularities, and obtain estimates of these functions near $\rho_u$ (the same estimate, up to a multiplicative factor).
Now the location of $\rho_u$, and its nature as a singularity, depend on whether $u>0$ or $u<0$ (Figure~\ref{fig:R4valent}). For $u>0$, the series $R$ will be shown to satisfy the \emm smooth implicit schema, of~\cite[Sec.~VII.4]{flajolet-sedgewick}. In brief, the dominant singularity $\rho_u$ of $R$ comes from the failure of the assumption $u\Phi'(R(z))\not =1$ in the implicit function theorem. The value $R(\rho_u)$ lies in the analyticity domain of $\Phi$ and $\theta$, and the singularities of these series play no role. Both $R$ and $F'$ will be proved to have a square root dominant singularity. If $u<0$ however, the series $R$ reaches at $\rho_u$ the dominant singularity of $\Phi$ and $\theta$, and the singular behaviours of $R$ and $F'$ at $\rho_u$ depend on the singular behaviours of $\Phi$ and $\theta$. In particular, we find that, around $\rho\equiv \rho_u$, the function $F''(z,u)$ behaves like $1/\ln(1-z/\rho)$, up to a multiplicative constant.
Since this cannot be the singular behaviour of a D-finite series~\cite[p.~520
and~582]{flajolet-sedgewick}, we have the following corollary. \begin{cor} \label{cor-nonDF} For $u\in[-1,0)$, the generating function $F(z,u)$ of $4$-valent forested
maps is not D-finite. The same holds when $u$ is an
indeterminate. \end{cor} Recall that $F(z,u)$ is, however, differentially algebraic (Theorem~\ref{Dalg}).
\begin{figure}
\caption{Plot of $R(z,u)$, for $z\in[0, \rho_u]$. \emph{Left:} when
$u=1$, and more generally $u>0$, $R$ does not reach the dominant singularity of $\Phi$ (which
is $1/27\simeq 0.037$). \emph{Right:} When $u=-1/2$, and more generally when $u
\in [-1, 0]$, we have $R(\rho_u)=1/27$.}
\label{fig:R4valent}
\end{figure}
\subsection{{The series $\Phi$ and $\theta$}} \label{ssec:prelim}
Recall the definition~\eqref{theta-phi-4V} of these series. The $i$th coefficient of $\theta$ is asymptotic to $27^i/i^2$, up to a multiplicative constant, and the same holds for $\Phi$. Hence both series have radius of convergence $ 1/ {27}$, converge at this point, but their derivatives diverge.
This is as much information as we need to obtain the asymptotic behaviour of $f_n(u)$ when $u>0$. When $u<0$, we will need to know singular expansions of $\Phi$ and $\theta$ near $1/27$. Let us first observe that \begin{equation}\label{F-hg}
\Phi(x) = x \left( _2F_1\left(\frac 1 3,\frac 2 3;2;27 x\right) - 1 \right) \end{equation}
where $_2F_1(a,b;c;x)$ denotes the standard hypergeometric function
with parameters $a$, $b$ and $c$: $$
_2F_1(a,b;c;x) = \sum_{n\geq0} \frac{(a)_n(b)_n} {(c)_n} \frac{x^n}{n!}, $$ with $(a)_n$ the rising factorial $a(a+1)\cdots(a+n-1)$. The series $_2F_1\left(\frac 1 3,\frac 2 3;2;27 x\right)$ can be analytically continued in ${\mathbb C}\setminus [1/27, +\infty)$, and its behaviour as $x$ approaches $1/27$ in this domain is given by~\cite[Eq.~(15.3.11)]{AS}. Translated in terms of $\Phi$, this gives, as $\varepsilon \rightarrow 0$,
\Phi\left(\frac{1 }{27}- \varepsilon\right) =
\frac{\sqrt{3}}{12\pi} - \frac{1}{27} +
\frac{\sqrt{3}}{2\pi}\,\varepsilon\ln{\varepsilon} + \left( 1 - \frac{\sqrt 3}{2 \pi}\right) \, \varepsilon + O(\varepsilon^2
\ln{\varepsilon}). \end{equation} One also has: \begin{equation}\label{phi-prime}
\Phi'\left(\frac{1 }{27}- \varepsilon\right)=
- \frac{\sqrt{3}}{2\pi}\,\ln{\varepsilon} -1+ O(\varepsilon \ln{\varepsilon}). \end{equation} The series $\theta$ is related to $\Phi$ by~\eqref{thetrat}. It has the same analyticity domain as $\Phi$, with local expansion at $1/27$: \begin{equation}\label{sing-theta}
\theta\left(\frac{1 }{27}- \varepsilon\right) = \frac 2 3 - \frac{7 \sqrt{3}}{6 \pi} + \frac{2\sqrt{3}}{ \pi}\varepsilon \ln{\varepsilon} + \frac{7\sqrt{3}}{ \pi}\varepsilon + O(\varepsilon^2 \ln{\varepsilon}). \end{equation} Also, \begin{equation} \label{theta-prime} \theta'\left(\frac{1 }{27}- \varepsilon\right)=
-\frac{2\sqrt{3}}{ \pi}\ln{\varepsilon} - \frac{9\sqrt{3}}{ \pi} + O(\varepsilon \ln{\varepsilon}). \end{equation}
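The expansion~\eqref{sing-phi} is easy to test numerically from the hypergeometric form~\eqref{F-hg}. The following Python/mpmath sketch (precision, test values and function names are ours and merely illustrative) prints the difference between $\Phi(1/27-\varepsilon)$ and the right-hand side of~\eqref{sing-phi} without its error term; this difference should be of order $\varepsilon^2\ln\varepsilon$.

\begin{verbatim}
# Numerical sanity check of (sing-phi), based on the hypergeometric
# form (F-hg):  Phi(x) = x*(2F1(1/3,2/3;2;27x) - 1).
from mpmath import mp, mpf, hyp2f1, sqrt, pi, log

mp.dps = 30                            # illustrative working precision

def Phi(x):                            # our helper, built on (F-hg)
    return x*(hyp2f1(mpf(1)/3, mpf(2)/3, 2, 27*x) - 1)

for eps in [mpf('1e-3'), mpf('1e-4'), mpf('1e-5')]:
    exact  = Phi(mpf(1)/27 - eps)
    approx = (sqrt(3)/(12*pi) - mpf(1)/27
              + sqrt(3)/(2*pi)*eps*log(eps)
              + (1 - sqrt(3)/(2*pi))*eps)
    print(eps, exact - approx)         # should be O(eps^2 * log eps)
\end{verbatim}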
\subsection{When $u>0$} \label{ssec:pos&tetr}
As in~\cite[Def.~VI.1, p.~389]{flajolet-sedgewick}, we call \emm $\Delta$-domain of radius $\rho$, any domain of the form $$
\{z :|z|<r, z\not = \rho \hbox{ and } |\Arg (z-\rho)|> \phi\} $$ for some $r>\rho$ and $\phi \in (0, \pi/2)$. \begin{prop}\label{prop:4v-pos}
Assume $u>0$. Then the series $R(z,u)$ is aperiodic and satisfies the
smooth implicit schema of~\cite[Def.~VII.4,
p.~467]{flajolet-sedgewick}. Its radius is given
by~\eqref{rho}, and satisfies~\eqref{rho-0}. The series $R$ is analytic in a $\Delta$-domain of radius $\rho\equiv \rho_u$, with a square root singularity at $\rho$: \begin{equation}\label{R-sing-pos} R(z,u)= \tau -\gamma \sqrt{1-z/\rho } +O(1-z/\rho), \end{equation} where $\tau$ is defined as in Theorem~\ref{tetra}, and $\gamma=\sqrt{\frac{2 \rho}{u\Phi''(\tau)}}$ with $\Phi$ given by~\eqref{theta-phi-4V}.
The series $F'(z,u)$ is also analytic in a $\Delta$-domain of radius $\rho$, with a square root singularity at $\rho$: \begin{equation}\label{Fprime-sing-pos} F'(z,u)= \theta(\tau) -\gamma \theta'(\tau)\sqrt{1-z/\rho } +O(1-z/\rho), \end{equation} where $\gamma$ is given above and $\theta$ is defined by~\eqref{theta-phi-4V}. Consequently, the $n$th coefficient of $F$ satisfies, as $n\rightarrow \infty$, $$ f_n(u) \sim \theta'(\tau) \sqrt{\frac{\rho^3}{2 \pi u \Phi''(\tau)}} \rho^{-n} n^{-5/2}. $$
\end{prop} \noindent This proposition establishes the case $u>0$ of Theorem~\ref{tetra}. \begin{proof}
The results that deal with $R$ are a straightforward application of Definition~VII.4 and
Theorem~VII.3 of~\cite[p.~467-468]{flajolet-sedgewick}. Using the
notation of this book, $G(z,w)=z+u\Phi(w)$ is analytic for $(z,w)\in
{\mathbb C}\times \{|w|<1/27\}$. The so-called \emm characteristic system, holds
at $(\rho,\tau)$ where $\tau$ is the unique element of $(0,1/27)$
such that $G_w(\rho,\tau)=u\Phi'(\tau)=1$, and
$\rho:=\tau-u\Phi(\tau)$. The existence and uniqueness of $\tau$ is guaranteed by the fact
that $\Phi'(w)$ increases (strictly) from 0 to $+\infty$ as $w$ goes from 0
to $1/27$.
The aperiodicity of $R$ is obvious from the first terms of its expansion: $ R=z+3z^2u+6u(3u+5)z^3+O(z^4). $
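These coefficients can be recovered mechanically from the hypergeometric form~\eqref{F-hg} of $\Phi$ alone, for instance by the following SymPy sketch (a plain fixed-point iteration of $R=z+u\Phi(R)$, truncated at order $z^4$; the variable names are ours).

\begin{verbatim}
# First coefficients of R from R = z + u*Phi(R), with Phi taken from (F-hg).
import sympy as sp

z, u, x = sp.symbols('z u x')
N = 4                                   # work modulo z**N

# truncation of 2F1(1/3,2/3;2;27x) - 1
F = sum(sp.rf(sp.Rational(1,3), n)*sp.rf(sp.Rational(2,3), n)
        / (sp.rf(2, n)*sp.factorial(n)) * (27*x)**n for n in range(1, N))
Phi = sp.expand(x*F)                    # Phi = 3*x**2 + 30*x**3 + ...

R = z
for _ in range(N):                      # fixed-point iteration, truncated
    R = sp.expand(z + u*Phi.subs(x, R)).series(z, 0, N).removeO()

# compare with R = z + 3u z^2 + 6u(3u+5) z^3 + O(z^4)
print(sp.collect(sp.expand(R), z))
\end{verbatim}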
We now move to $F'=\theta(R)$.
Since $R(\rho,u)=\tau<1/27$, and $R$ has non-negative coefficients,
there exists a $\Delta$-domain of radius $\rho$ in which $R$ is
analytic and strictly bounded (in modulus) by $1/27$. Since $\theta$
has radius $1/27$, the series $F'=\theta(R)$ is also analytic in
this domain, and its singular behaviour around $\rho$ follows from a Taylor expansion. One then applies the Transfer Theorem~VI.4 from~\cite[p.~393]{flajolet-sedgewick} to obtain the behaviour of the $n$th coefficient of $F'$, which is $(n+1)f_{n+1}(u)$. The estimate of $f_n(u)$ follows.
It remains to find an estimate of $\rho_u$ as $u\rightarrow 0^+$. Recall that $u\Phi'(\tau)=1$. Thus $\tau\equiv \tau_u$ approaches $1/27$ as $u\rightarrow 0$, and~\eqref{phi-prime} gives $$ \ln (1/27-\tau)= -\frac{2\pi(1+\bar u)}{\sqrt 3}+o(1) $$ with $\bar u=1/u$, so that \begin{equation}\label{tau-0} \tau -\frac 1 {27} \sim - \exp\left( -\frac{2\pi(1+\bar u)}{\sqrt
3}\right). \end{equation} Since $\rho=\tau -u \Phi(\tau)$, this gives~\eqref{rho-0} in view of the expansion~\eqref{sing-phi} of $\Phi$. \end{proof}
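For concreteness, the characteristic system can also be solved numerically from the hypergeometric form~\eqref{F-hg}. The sketch below (with illustrative precision, and a naive bisection seeded by the logarithmic estimate preceding~\eqref{tau-0}) prints $\tau_u$, $\rho_u$, and the two sides of that estimate, for $u=1$ and $u=0.1$.

\begin{verbatim}
# Illustration of the case u>0 (4-valent): tau solves u*Phi'(tau) = 1,
# and rho = tau - u*Phi(tau).
from mpmath import mp, mpf, exp, log, pi, sqrt, hyp2f1

mp.dps = 30
a, b = mpf(1)/3, mpf(2)/3

def Phi(x):
    return x*(hyp2f1(a, b, 2, 27*x) - 1)

def dPhi(x):   # exact derivative, via d/dw 2F1(a,b;c;w) = (ab/c) 2F1(a+1,b+1;c+1;w)
    return hyp2f1(a, b, 2, 27*x) - 1 + 3*x*hyp2f1(a+1, b+1, 3, 27*x)

def tau_of(u):  # bisection on a bracket around the estimate (tau-0)
    epsg = exp(-2*pi*(1 + 1/u)/sqrt(3))
    lo, hi = mpf(1)/27 - 10*epsg, mpf(1)/27 - epsg/10
    for _ in range(200):
        mid = (lo + hi)/2
        lo, hi = (mid, hi) if u*dPhi(mid) < 1 else (lo, mid)
    return (lo + hi)/2

for u in [mpf(1), mpf('0.1')]:
    tau = tau_of(u)
    print(u, tau, tau - u*Phi(tau))                        # tau_u, rho_u
    print(log(mpf(1)/27 - tau), -2*pi*(1 + 1/u)/sqrt(3))   # the two sides
\end{verbatim}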
\subsection{When $u<0$}
\begin{prop}\label{prop:4v-neg}
Let $u \in [-1,0)$. The series $R$ and $F'$ have radius $\rho\equiv
\rho_u$ given by~\eqref{rho-affine}. They are analytic in a
$\Delta$-domain of radius $\rho$, and the following estimates
hold in this domain, as $z\rightarrow \rho$:
\begin{eqnarray}
R(z) - \frac 1 {27} &\sim& - \frac{2\pi\rho }{\sqrt 3 u}\, \frac{1-z/\rho}{\ln (1-z/\rho)},
\label{Rsing-neg}
\\ F''(z)+4\bar u &\sim&
\frac{72 \sqrt 3 \pi \bar u^2 \rho}{\ln (1-z/\rho)}. \label{F-sing-neg}
\end{eqnarray} Consequently, the $n$th coefficient of $F$ satisfies, as $n\rightarrow \infty$, $$ f_n(u)\sim 72 \sqrt 3 \pi \bar u^2
\frac{\rho^{-n+3}}{n^3 \ln ^2 n}. $$ \end{prop} \noindent Since~\eqref{F-sing-neg} cannot be the singular behaviour of a D-finite series~\cite[p.~520 and~582]{flajolet-sedgewick}, this proves Corollary~\ref{cor-nonDF}. This proposition also establishes the case $u<0$ of Theorem~\ref{tetra}.
\begin{proof} We begin as before with the series $R$. The equation $R=z+u\Phi(R)$ reads $\Omega(R)=z$ with $\Omega(y)=y-u\Phi(y)$. Clearly $\Omega(0)=0$ and $\Omega'(0)=1 >0$, so that we can apply Corollary~\ref{lemmeasympt}, in which the role of $Y$ is played by $R$. Let $\omega$, $\tau$ and $\rho$ be defined as in this corollary. It follows from Section~\ref{ssec:prelim} that $\omega=1/27$. Since $u<0$, $\Omega'(y)=1-u \Phi'(y)$ does not vanish on $[0, 1/27)$. Hence $\tau=1/27$ as well.
By Property (5) of Corollary~\ref{lemmeasympt}, $$ \rho = \Omega\left(\frac 1{27}\right)=\frac 1 {27} - u \Phi\left(\frac 1 {27}\right), $$ which, combined with~\eqref{sing-phi}, gives~\eqref{rho-affine}.
Corollary~\ref{lemmeasympt} tells us that $R$ has an analytic continuation along $[0, \rho)$. Moreover, $R(z)$ increases from $0$ to
$1/27$ on $[0, \rho)$, and the equation \begin{equation}\label{eqfuncR} R=z+u \Phi(R) \end{equation} holds in the whole interval $[0, \rho)$.
By Corollary~\ref{positiv}, the series $\bar u(R-z)$ has non-negative coefficients. Like $R$ itself, it is analytic along $[0,\rho)$. By
Pringsheim's theorem, its radius is at least $\rho$, and this holds
for $R$ as well. We will now study the behaviour of $R$ in the neighbourhood
of $\rho$, and prove that it is singular at this point, so that $\rho$ is indeed the radius of $R$.
For $z \in \mathbb C\setminus\mathbb R^-$, let us define $$ \Psi(z) := \rho + \frac {z-1} {27} + u\,\Phi\left(\frac{1 -
z}{27}\right). $$ As explained above, $ 1-27\,R(\rho-y)$ increases from $0$ to $1$ as $y$ goes from $0$ to $\rho$, and the functional equation~\eqref{eqfuncR} satisfied by $R$ reads, for $y \in [0, \rho)$,
\begin{equation}\label{eq-R-minus} \Psi(1-27\,R(\rho-y)) = y. \end{equation} By~\eqref{sing-phi}, we have $\Psi(z) \sim - c z \ln z$ where $$c=-\frac{\sqrt 3 u}{54 \pi}>0.$$ Let us apply the log-inversion theorem (Theorem~\ref{lninversion}) to $\Psi$, with $\alpha=3\pi/4$ (we now denote by $r$ and $r'$ the numbers $\rho$ and $\rho'$ of Theorem~\ref{lninversion}): There exist $r>0$ and $r'>0$, and a function $\Upsilon$ analytic on $D_{r',
\alpha}=\left\{|z| < r' \textrm{ and
} |\Arg z | < 3 \pi/4 \right\}$, such that $\Psi (\Upsilon(y)) = y$. Furthermore, $\Upsilon(y)$ is the only preimage of $y$ under $\Psi$
that can be found in $D_{r, \pi}=\left\{|z| < r \textrm{ and
} |\Arg z| < \pi \right\} $. Comparing with~\eqref{eq-R-minus} shows that for $y$ small enough and positive, one has
$\Upsilon(y) = 1-27\,R(\rho-y)$. Returning to the original variables, this means that, for $z$ real and close to $\rho^-$, $$ R(z)=\frac 1 {27} \left( 1-\Upsilon(\rho-z)\right), $$
so that $R$ can be analytically continued on $\left\{|z-\rho| < r \textrm{ and }
|\Arg(z-\rho)| > \pi/4\right\}$. Moreover, the final statement of Theorem~\ref{lninversion} gives~\eqref{Rsing-neg}. This shows that $R$ is singular at $\rho$, which is thus the radius of $R$.
In order to prove that $R$ is analytic in a $\Delta$-domain of radius $\rho$, we now have to prove that it has no singularity other than $\rho$ on its circle of convergence. So let $\mu \neq \rho$ have modulus $ \rho$. Since $\mathcal{R}:=\bar u(R-z)$ has positive coefficients and
$|\mathcal{R}(\rho)| < + \infty$, the series $\mathcal{R}$ converges at $\mu$, and so does $R$. Recall that $\Phi$ is analytic in $\mathbb C\setminus[1/27,
+\infty)$. Hence~\eqref{eqfuncR}, which holds in a neighbourhood of
$0$, will hold in the closed disk of radius $\rho$ if we can prove the following lemma.
\begin{lem} For $|z|\le \rho$ and $z\not = \rho$, we have $R(z) \not
\in [1/27, +\infty)$. \end{lem}
\begin{proof} We have already seen that the property holds (since $R$
is increasing) on the interval $[0, \rho)$. On the interval $[-\rho,
0]$, the function $R$ is real (Lemma~\ref{lem:real}) and
continuous. Hence, if $R$ exits $(-\infty, 1/27)$ on this
interval, there exists $t \in [-\rho, 0]$ such that
$R(t)=1/27$. Let $t$ be maximal for this property. Then $R(z)\in
\mathbb C\setminus [1/27, +\infty)$ on a complex neighbourhood of
$(t,0]$, and~\eqref{eqfuncR} holds there. By differentiating
it, we obtain \begin{equation}\label{R-prime} R'(z)= \frac 1 {1-u\Phi'(R(z))} \not = 0. \end{equation} In particular, $R'(0)=1$. But since $R(t)=1/27> R(0)=0$, the function $R'(z)$ must vanish in $(t,0)$, which is impossible in view of its expression above.
Assume now that $z$ is
not real, and let us prove that $R(z)$ is not real either. First,
\begin{equation}\label{tata}
|\Ima R(z)|
= |\Ima (z+u\mathcal{R}(z))|
\geq |\Ima z| +u\,|\Ima \mathcal{R}(z)|. \end{equation} Then: \begin{multline} \label{toto}
|\Ima \mathcal{R}(z)|
=|\Ima \left(\mathcal{R}(z)-\mathcal{R}(\Rea z)\right)|
\leq |\mathcal{R}(z) - \mathcal{R}(\Rea z)| \\
< |z - \Rea z|\max_{y \in [\Rea z,z]}|\mathcal{R}'(y)| \leq
|\Ima z|\max_{|y| \leq \rho}|\mathcal{R}'(y)|. \end{multline}
The strict inequality comes from the fact that $\mathcal{R}'$ is not constant over $[\Rea z,z]$. But $\mathcal{R}'$ is a power series with positive coefficients, and thus for $|y| \leq \rho$, \begin{equation}\label{Rc-prime}
|\mathcal{R}'(y)| \leq \mathcal{R}'(\rho) = \bar u \left(R'(\rho) - 1\right) = \bar u \left(\lim_{t \rightarrow \rho} \frac 1 {1 - u \Phi'(R(t))}- 1\right) = -\bar u,
\end{equation} because $\Phi'(z)$ tends to $+\infty$ as $z\rightarrow 1/27$. Returning to~\eqref{toto} gives $ |\Ima \mathcal{R}(z)|
< -\bar u |\Ima z|$, and this inequality, combined with~\eqref{tata}, gives
$|\Ima R(z)| > 0$.
\end{proof} So we now know that~\eqref{eqfuncR} holds everywhere in the disk of radius $\rho$, with $R$ only reaching the critical value $1/27$ at $\rho$. By differentiation, \eqref{R-prime} holds as well. Let us return to our point $\mu\not =\rho$, of modulus $\rho$. We now want to apply the analytic implicit function theorem to~\eqref{eqfuncR} at the point $(\mu, R(\mu))$. We know that $\Phi$ is analytic around $R(\mu)$. Could it be that $u\Phi'(R(\mu))
=1$? By~\eqref{R-prime}, this would imply that $|R'(z)|$, and thus $|\mathcal{R}'(z)|$, is not bounded as $z$ approaches $\mu$ in the disk. However, $\mathcal{R}'$ has non-negative coefficients and $\mathcal{R}'(\rho)$ has been shown to converge (see~\eqref{Rc-prime}). Thus $\mathcal{R}'(z)$ remains bounded in the disk of radius $\rho$, and in particular $u\Phi'(R(\mu)) \not=1$. The analytic implicit function theorem then implies that $R$ is analytic at $\mu$.
In conclusion, we have proved that there exists a $\Delta$-domain of radius $\rho$ where $R$ is analytic and avoids the half-line $[1/27,+\infty)$.
\begin{comment}
Thus $R$ is $\Delta$-analytic and singularity analysis gives us \eqref{rnt}: \begin{equation*}
r_n(u) \sim -C \frac{\rho^{-n}}{n^2\ln(n)^2}
\end{equation} where \begin{equation}
C = -\frac{2} {3\,u} \pi \sqrt{3}\, \rho. \end{equation} \end{comment}
Let us now turn our attention to $F'=\theta(R)$. Since $\theta$ is analytic in $\mathbb C\setminus[1/ {27},+\infty)$, the
series $F'$ is analytic in the same $\Delta$-domain as $R$. The estimate~\eqref{Rsing-neg} of $R$, combined with the expansion~\eqref{sing-theta} of $\theta$, does not give immediately the singular behaviour of $F'$. Another route would be possible, but it is more direct to work with $F''$ instead. Indeed, \begin{equation}\label{Fzz} F''(z)= R'(z) \theta'(R(z)) = \frac{\theta'(R(z))}{1-u\Phi'(R(z))}. \end{equation} By~\eqref{phi-prime} and~\eqref{theta-prime}, \begin{eqnarray*}
\frac{\theta'(1/27-\varepsilon)}{1-u\Phi'(1/27-\varepsilon)}&=& -4\bar u - {2}\bar u\left( {9} -\frac{4\pi(1+\bar u)}{\sqrt 3}\right) \frac 1{\ln \varepsilon}+ O (1/{\ln^2 \varepsilon})\\ &=& -4\bar u + \frac{72 \sqrt 3 \pi \bar u^2 \rho} {\ln \varepsilon}+ O (1/{\ln^2 \varepsilon}), \end{eqnarray*} in view of~\eqref{rho-affine}. This, combined with~\eqref{Fzz} and the estimate~\eqref{Rsing-neg} of $R(z)$, gives~\eqref{F-sing-neg}. One finally applies the Transfer Theorem~VI.4 from~\cite[p.~393]{flajolet-sedgewick} to obtain the behaviour of the $n$th coefficient of $F''$, which is $(n+2)(n+1)f_{n+2}(u)$. The estimate of $f_n(u)$ follows. \end{proof}
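The two-term expansion above, and in particular the constant $72\sqrt 3\pi\bar u^2\rho$, can be double-checked symbolically. In the following SymPy sketch, $w$ stands for $1/\ln\varepsilon$, and $\rho$ is taken in the form $\rho=1/27-u\Phi(1/27)$, with $\Phi(1/27)$ read off from~\eqref{sing-phi}; the printed difference should be $0$.

\begin{verbatim}
# Symbolic check of the expansion of theta'/(1-u*Phi'), with w = 1/ln(eps).
import sympy as sp

u, w = sp.symbols('u w')
L = 1/w                                                  # L stands for ln(eps)
theta_p = -2*sp.sqrt(3)/sp.pi*L - 9*sp.sqrt(3)/sp.pi     # from (theta-prime)
phi_p   = -sp.sqrt(3)/(2*sp.pi)*L - 1                    # from (phi-prime)

ser = sp.series(theta_p/(1 - u*phi_p), w, 0, 2).removeO()

rho    = sp.Rational(1,27) - u*(sp.sqrt(3)/(12*sp.pi) - sp.Rational(1,27))
target = -4/u + 72*sp.sqrt(3)*sp.pi*rho/u**2*w

print(sp.simplify(ser - target))                         # expected: 0
\end{verbatim}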
\section{Large random maps equipped with a forest or a tree} \label{sec:random}
We still focus in this section on 4-valent maps, equipped either with a spanning forest or with a spanning tree. In each case, we define a Boltzmann probability distribution on maps of size $n$, which involves a parameter $u$ and takes into account the number of components of the spanning forest, or the number of internally active edges of the spanning tree (equivalently, the level of a recurrent sandpile configuration, as explained in Section~\ref{sec:tutte}). We observe on several random variables the effect of the phase transition found at $u=0$ in the previous section.
\subsection{Forested maps: Number and size of components}
Fix $n \in \mathbb N$ and $u \in [0,+\infty)$. Consider the following probability distribution on 4-valent forested maps $(M,F)$ having $n$ faces: $$ \mathbb P_c(M,F) = \frac {u^{c(F)-1}}{f_n(u)}, $$
where $c(F)$ is the number of components of $F$, and $f_n(u)$ counts
4-valent forested maps by the number of non-root components. Under this
distribution, let $C_n$ be the number of components of $F$, and $S_n$
the size (number of vertices) of the root component. When $u=0$, only tree-rooted maps have a positive probability, $C_n=1$ and $S_n=n-2$, the total number of vertices in the map. Let us examine how this changes when $u>0$. \begin{prop} \label{random-forest} Assume $u>0$. Under the distribution $\mathbb P_c$, we have, as $n\rightarrow \infty$: $$ \mathbb{E}_c(C_n) \sim \frac {u\Phi(\tau)}{\tau -u\Phi(\tau)}\, n, $$ where $\Phi$ is given by~\eqref{theta-phi-4V} and $\tau\equiv \tau_u$ is the unique solution in $\left(0,1/{27}\right)$ of $u\Phi'(y)=1$.
The size $S_n$ of the root component admits a discrete limit law: for $k\ge 1$, \begin{equation} \label{PSn} \lim_{n \rightarrow + \infty} \mathbb P_c(S_n=k) = \frac{4\,(3\,k)!\,}{(k-1)!\,k!\,(k+1)!}\frac{ \tau^k}{\theta'(\tau)} \end{equation} with $\theta$ defined by~\eqref{theta-phi-4V}. \end{prop}
\begin{proof} We have \begin{equation}\label{ECn} \mathbb{E}_c(C_n-1)= \sum_{(M,F)} \left(c(F)-1\right) \frac {u^{c(F)-1}}{f_n(u)}=u\frac{f'_n(u) }{f_n(u)}= u\frac{[z^{n-1}]F''_{zu}(z,u) }
{[z^{n-1}]F'_{z}(z,u) }. \end{equation} It follows from the definition~\eqref{system-simple} of $R$ and $F$ that \begin{equation}\label{Fzu}
F''_{zu}(z,u)= \frac{\Phi(R) \theta'(R)}{1-u\Phi'(R)}. \end{equation} We now use singularity analysis. The functions $\Phi$ and $\theta$ are analytic at $\tau=R(\rho,u)$, the number $\tau$ satisfies $1=u\Phi'(\tau)$, and a singular estimate of $R-\tau$ is given by~\eqref{R-sing-pos}. This gives, as $z\rightarrow \rho$, $$ F''_{zu}(z,u)\sim \frac{\Phi(\tau) \theta'(\tau)}{u\Phi''(\tau) \gamma
\sqrt{1-z/\rho}} $$ where $\gamma$ is as in Proposition~\ref{prop:4v-pos}. An estimate of $F'_z(z,u)$ is given by~\eqref{Fprime-sing-pos}. Our estimate of $\mathbb{E}_c(C_n)$ then follows from a transfer theorem, and the fact that $\rho=\tau-u\Phi(\tau)$.
To study $S_n$, we add to our generating function\ $F(z,u)$ a weight $x$ for each vertex belonging to the root component. Lemma~\ref{lem:cont} becomes $$
F (z,u,x)= \bar M (z,u;\,0,0,0,t_4,0, t_6, \ldots; 0, 0, 0, x\,t^c_4,0, x^2\,t^c_6, \ldots). $$ (Recall that $t_2 = t_{2k+1} =t^c_2 = t^c_{2k+1} = 0$ for every $k \geq 0$ when $p=4$.) Thanks to~\eqref{relM}, the first equation of~\eqref{system-simple} becomes $$
xF'_z(z,u,x) = \theta(x\,R), $$ where $R=R(z,u)$ is as before. We can express $\mathbb P_c(S_n=k)$ in terms of $F'_z$: $$ \mathbb P_c(S_n=k)= \frac{[z^{n-1}x^k] F'_z(z,u,x)}{[z^{n-1}] F'_z(z,u,1)}= \frac{[z^{n-1}x^{k+1}] \theta(x\,R)}{[z^{n-1}] \theta(R)}. $$
We can now
apply Proposition~IX.1 from~\cite[p.~629]{flajolet-sedgewick}. Proposition~\ref{prop:4v-pos}
guarantees that its hypotheses are indeed satisfied, and this
gives~\eqref{PSn}, using the expression~\eqref{theta-phi-4V} of $\theta$. \end{proof}
\subsection{Tree-rooted maps: Number of internally active edges}
Fix $n \in \mathbb N$ and $u \in [-1,+\infty)$. Consider the following probability distribution on 4-valent tree-rooted maps $(M,T)$ having $n$ faces: $$ \mathbb P_i(M,T) = \frac {(u+1)^{i(M,T)}}{f_n(u)}, $$ where $i(M,T)$ is the number of internally active edges in $(M,T)$. Eq.~\eqref{F-act} shows that this is indeed a probability distribution. Under this
distribution, let $I_n$ denote the number of internally active
edges. As shown by~\eqref{F-sandpile}, $I_n$ can also be
described as the level $\ell(C)$ of a recurrent sandpile configuration $C$
of an $n$-vertex quadrangulation $M$, drawn according to the distribution $$ \mathbb P_s(M,C) = \frac {(u+1)^{\ell(C)}}{f_n(u)}. $$ \begin{prop}
The expected number of internally active edges undergoes a (very
smooth) phase
transition at $u=0$: as $n\rightarrow \infty$, \begin{equation}\label{EIn} \mathbb E_i(I_n) \sim \kappa_u \, n, \end{equation} with $$
\kappa_u = \frac {(1+u)\Phi(\tau)}{\tau -u\Phi(\tau)}
$$ where $\Phi$ is given by~\eqref{theta-phi-4V} and $\tau\equiv \tau_u$ is defined in Proposition~\ref{tetra}.
The function $\kappa_u$ is real-analytic everywhere except at $0$, where it is still infinitely differentiable: as $u\rightarrow 0^+$, $$ \kappa_u = \frac {(1+u)\Phi(1/27)}{1/27 -u\Phi(1/27)}+ O \left(\exp\left(- \frac{2\pi}{\sqrt 3 u}\right) \right). $$ \end{prop}
\begin{proof} We have \begin{equation}\label{EIn-expr} \mathbb E_i(I_n) = \sum_{(M,T)}i(M,T) \frac {(u+1)^{i(M,T)}}{f_n(u)} = (u+1) \frac{f'_n(u)}{f_n(u)} = (u+1) \frac{[z^{n-1}] F''_{zu}(z,u)}{[z^{n-1}] F'_{z}(z,u)}. \end{equation} Comparing with~\eqref{ECn}, we see that for $u>0$, we have $ \mathbb{E}_i(I_n)=(1+\bar u) \mathbb{E}_c(C_n) $. Thus~\eqref{EIn} follows from Proposition~\ref{random-forest} when $u>0$. The expansion of $\kappa_u$ near $0^+$ follows from the estimate~\eqref{tau-0} of $\tau$ and the expansion~\eqref{sing-phi} of~$\Phi$.
Let us now take $u \in [-1,0)$. The series $F''_{zu}$ is still given
by~\eqref{Fzu}, which can also be written $\Phi(R) F''_{zz}$
(by~\eqref{Fzz}), or $\bar u(R-z)F''_{zz}$. In view of the estimates~\eqref{Rsing-neg}
and~\eqref{F-sing-neg} of $R$ and $F''_{zz}$, we find $$ [z^{n-1}]F''_{zu}(z,u) \sim \bar u(1/27-\rho) [z^{n-1}] F''_{zz}(z,u). $$ Returning to~\eqref{EIn-expr} gives~\eqref{EIn} by singularity analysis, since $\rho=1/27-u\Phi(1/27)$.
When $u=0$, we have $R=z$. Hence~\eqref{Fzu} reads $F''_{zu}(z,0)=\Phi(z) \theta'(z)$, while $F'_z(z,0)= \theta(z)$. As
above, \eqref{EIn} follows from~\eqref{EIn-expr} by singularity
analysis, using~\eqref{sing-phi},~\eqref{sing-theta} and~\eqref{theta-prime}. \end{proof}
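The smoothness of the transition can be observed numerically. The sketch below uses the same ingredients as in Section~\ref{ssec:pos&tetr} (the hypergeometric form~\eqref{F-hg} of $\Phi$, and a bisection for $\tau_u$ when $u>0$; precision and sample values are illustrative). It prints $\kappa_u$ next to the value of the formula $(1+u)\Phi(1/27)/(1/27-u\Phi(1/27))$: the two coincide for $u<0$, and differ by an exponentially small amount for small $u>0$.

\begin{verbatim}
# Illustration of the (very smooth) transition of kappa_u at u = 0.
from mpmath import mp, mpf, exp, pi, sqrt, hyp2f1

mp.dps = 60
a, b, c27 = mpf(1)/3, mpf(2)/3, mpf(1)/27

def Phi(x):
    return x*(hyp2f1(a, b, 2, 27*x) - 1)

def dPhi(x):
    return hyp2f1(a, b, 2, 27*x) - 1 + 3*x*hyp2f1(a+1, b+1, 3, 27*x)

def tau_of(u):                       # bisection, seeded by the estimate (tau-0)
    epsg = exp(-2*pi*(1 + 1/u)/sqrt(3))
    lo, hi = c27 - 10*epsg, c27 - epsg/10
    for _ in range(250):
        mid = (lo + hi)/2
        lo, hi = (mid, hi) if u*dPhi(mid) < 1 else (lo, mid)
    return (lo + hi)/2

def kappa(u):
    tau = tau_of(u) if u > 0 else c27
    return (1 + u)*Phi(tau)/(tau - u*Phi(tau))

for s in ['-0.2', '-0.1', '0.1', '0.2']:
    u = mpf(s)
    print(u, kappa(u), (1 + u)*Phi(c27)/(c27 - u*Phi(c27)))
\end{verbatim}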
\section{Asymptotics for cubic forested maps} \label{sec:asympt-3}
We study in this section the singular behaviour of the series $F(z,u)$ that counts cubic forested maps by the number of components, and the asymptotic behaviour of its $n$th coefficient $f_n(u)$. As expected, we observe a ``universality'' phenomenon: our results are qualitatively the same as for 4-valent maps (Theorem~\ref{tetra}). However, the cubic case is more difficult since we now have to deal with a pair of equations: $$ R=z+u \Phi_1(R,S), \quad S=u\Phi_2(R,S), $$ where $\Phi_1$ and $\Phi_2$ are given by~\eqref{phi1} and~\eqref{phi2}. Our results are less complete than in the 4-valent case: when $u<0$, we only determine the singular behaviour of $F'(z,u)$ as $z $ approaches the radius of $F'$ on the real axis. We do not know if $F'$ has dominant singularities other than its radius. Consequently, we have not obtained the asymptotic behaviour of $f_n(u)$ when $u<0$.
\begin{theo}\label{cubique} Let $p=3$, and take $u\ge -1$. The radius
of convergence of $F(z,u)$ reads
$$
\rho_u=\tau -u \Phi_1(\tau, \sigma) $$ where the pair $(\tau, \sigma)$ satisfies $$ \sigma= u \Phi_2(\tau, \sigma) $$ and $$\left\{ \begin{array}{lll}
\displaystyle64 \tau = (1-4\sigma)^2 & \hbox{if } u\le 0, \\ \\ \displaystyle \left(1-u\Phi_1^{x}(\tau, \sigma)\right) \left(1-u\Phi_2^{y}(\tau, \sigma)\right) = u^2 \Phi_1^{y}(\tau,\sigma)\Phi_2^{x}(\tau, \sigma) & \hbox{if } u>0. \end{array}\right. $$
The series $\Phi_1$ and $\Phi_2$ are given by~\eqref{phi1} and~\eqref{phi2}, and $\Phi_i^{x}$ (resp. $\Phi_i^{y}$) denotes the derivative of $\Phi_i$ with respect to its first (resp. second) variable. \\ In particular, $\rho_u$ is an algebraic function of $u$ on $[-1,0]$: \begin{equation}\label{rho-3} \rho_u= \frac{3(1-u^2)^2\pi^4+96u^2\pi^2(1-u^2)+512u^4+ 16u\sqrt2 \left( \pi^2(1-u^2)+8u^2\right)^{3/2}}{192\pi^4(1+u)^3}. \end{equation}
Let $f_n(u)$ be the coefficient in $z^n$ in $F(z,u)$. There exists
a positive constant $c_u$ such that $$ f_n(u) \sim \left\{\begin{array}{lll}
\displaystyle c_u\, {\rho_u^{-n}}{n^{-3} } & \hbox{ if } u = 0,\\ \displaystyle c_u \,{\rho_u^{-n}}{n^{- 5 /2}} & \hbox{ if } u > 0. \end{array}\right. $$
For $u \in [-1,0]$, the series $F'(z)\equiv F'(z,u)$ has the following singular expansion as $z\rightarrow \rho_u^-$: \begin{equation}\label{exp-3} F'(z) = F'(\rho_u) + \alpha (\rho_u-z) + \beta\, \frac{ \rho_u-z}{\ln(\rho_u-z)}\left(1+o(1)\right), \end{equation} where $$ \beta =\frac{4u-3\sqrt2 \sqrt{\pi^2(1-u^2)+8u^2}}{2u^2} <0. $$ \end{theo}
\noindent{\bf Remarks} \\ 1. As in the 4-valent case, the singular behaviour of $F'$ obtained when $u<0$ is incompatible with D-finiteness~\cite[p.~520
and~582]{flajolet-sedgewick}.
\begin{cor} For $u\in [-1,0)$, the generating function\ $F(z,u)$ of cubic forested
maps is not D-finite. The same holds when $u$ is an indeterminate. \end{cor}
\noindent 2. The series $F(z,0)$ has a simple explicit expression given by~\eqref{Fz0}: $$ F(z,0)=3 \sum_{\ell \ge 1} \frac{(4\ell)!}{(2\ell-1)! (\ell+1)!
(\ell+2)! }z^{\ell+2}. $$ The above theorem follows in this case from Stirling's formula. One has $\sigma=0$ and $\rho_0=\tau=1/64$. We will thus focus below on the cases $u>0$ and $u<0$.
\noindent 3. At $u=-1$, one finds $\rho_{-1}= \pi^2/384$, a beautiful transcendental radius of convergence for the series counting cubic maps equipped with an internally inactive spanning tree.
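The value $\rho_{-1}=\pi^2/384$ can be checked from~\eqref{rho-3}, which takes the indeterminate form $0/0$ at $u=-1$; the following SymPy sketch takes the limit, with a high-precision numerical evaluation near $u=-1$ as a cross-check.

\begin{verbatim}
# Check of rho_{-1} = pi^2/384 from the algebraic expression (rho-3).
import sympy as sp

u = sp.symbols('u')
num = (3*(1 - u**2)**2*sp.pi**4 + 96*u**2*sp.pi**2*(1 - u**2) + 512*u**4
       + 16*u*sp.sqrt(2)*(sp.pi**2*(1 - u**2) + 8*u**2)**sp.Rational(3,2))
rho = num/(192*sp.pi**4*(1 + u)**3)

print(sp.limit(rho, u, -1))                                 # expected: pi**2/384
print(rho.subs(u, -1 + sp.Rational(1, 10**8)).evalf(30))    # numerical cross-check
print((sp.pi**2/384).evalf(30))
\end{verbatim}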
\subsection{The series $\Phi_1$, $\Phi_2$, $\Psi_1$ and $\Psi_2$} \label{sec:pp}
We have performed in Section~\ref{sec:cubic-de} a useful reduction by showing that the bivariate series $\Phi_1(x,y)$ and $\Phi_2(x,y)$ can be expressed in terms of the univariate hypergeometric series $\Psi_1$ and $\Psi_2$ (see~(\ref{pp1}--\ref{pp2})).
The $i$th coefficient of $\Psi_1$ is asymptotic to $64^i/i^2$, up to a multiplicative constant, and the same holds for $\Psi_2$. Hence both series have radius of convergence $ 1/ {64}$, converge at this point, but their derivatives diverge. In fact, $$ \Psi_1(z)/z= \, _2F_1(1/4, 3/4;2;64z), $$ so that $\Psi_1$ can be analytically defined on $\mathbb C \setminus[1/64, +\infty)$. The
same holds for $\Psi_2(z)$ in view of~\eqref{ed-cubic}. It follows
from~\cite[Eq.~(15.3.11)]{AS} that, as $\vareps\rightarrow 0$ in
$\mathbb C\setminus \mathbb R^-$, \begin{equation}\label{exp-psi1} \Psi_1\left(\frac{1} {64}-\varepsilon\right) = {\frac {\sqrt {2}}{24\, \pi }} + {\frac {\sqrt {2} }{2\, \pi }}\, \varepsilon \,\ln \varepsilon - {\frac {\sqrt {2} }{2\, \pi }} \varepsilon + O \left( {\varepsilon}^{2} \ln \varepsilon \right). \end{equation} By~\eqref{ed-cubic}, we also have \begin{equation} \label{exp-psi2} \Psi_2\left(\frac{1} {64}-\varepsilon\right) = \frac 1 2 -{\frac {\sqrt {2}}{\pi }} + {\frac {4\sqrt {2} }{ \pi }}\, \varepsilon\,\ln
\varepsilon + {\frac {12\,\sqrt {2} \, }{\pi }} \varepsilon + O \left( {\varepsilon}^{2} \ln \varepsilon \right). \end{equation} Let us now return to $\Phi_1$ and $\Phi_2$. The series $\sqrt{1-4y}$ has radius $1/4$, the series $\Psi_1$ and $\Psi_2$ have radius $1/64$, and thus
$\Phi_1(x,y)$ and $\Phi_2(x,y)$ converge absolutely for $|y|<1/4
$ and $64|x|< (1-4|y|)^2$ (Figure~\ref{fig:domaine}, left). The expressions~\eqref{pp1} and~\eqref{pp2} show that $\Phi_1$ and $\Phi_2$ have an analytic continuation for $y \in \mathbb C\setminus [1/4,
+\infty)$ and $x/(1-4y)^2 \in \mathbb C\setminus [1/64, +\infty)$
(Figure~\ref{fig:domaine}, right). As $\Psi_1'(t)$ and
$\Psi_2'(t)$ tend to $+\infty$ when $t\rightarrow 1/64$, there is
no way to extend $\Phi_1$ or $\Phi_2$ analytically at a point of
the \emm critical parabola, $\{64x =(1-4y)^2\}$.
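As for the $4$-valent series $\Phi$, the expansion~\eqref{exp-psi1} can be tested numerically from the hypergeometric form of $\Psi_1$ given above; in the following Python/mpmath sketch (precision and test values are illustrative), the printed difference should be of order $\varepsilon^2\ln\varepsilon$.

\begin{verbatim}
# Numerical sanity check of (exp-psi1), using Psi_1(z) = z*2F1(1/4,3/4;2;64z).
from mpmath import mp, mpf, hyp2f1, sqrt, pi, log

mp.dps = 30

def Psi1(z):
    return z*hyp2f1(mpf(1)/4, mpf(3)/4, 2, 64*z)

for eps in [mpf('1e-3'), mpf('1e-4'), mpf('1e-5')]:
    exact  = Psi1(mpf(1)/64 - eps)
    approx = (sqrt(2)/(24*pi) + sqrt(2)/(2*pi)*eps*log(eps) - sqrt(2)/(2*pi)*eps)
    print(eps, exact - approx)          # should be O(eps^2 * log eps)
\end{verbatim}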
\begin{figure}
\caption{\emph{Left:} The domain of absolute convergence of the series $\Phi_1$ and
$\Phi_2$, in the real plane. \emph{Right:} A domain where an
analytic continuation exists. No analytic continuation exists at
a point of the parabola.}
\label{fig:domaine}
\end{figure}
\subsection{When $u>0$}
\begin{prop}
Assume $u>0$. The series $R$, $S$ and $F'$ have the same radius of
convergence, denoted $\rho_u$, which satisfies the conditions stated in
Theorem~{\rm\ref{cubique}}. The three series are analytic in a
$\Delta$-domain of radius $\rho_u$, with a square root singularity at
$\rho_u$. In particular, $$ f_n(u) \sim c_u \rho_u^{-n} n^{-5/2} $$ for some positive constant $c_u$. \end{prop} \begin{proof}
Recall that these three series are defined by the system~\eqref{F-cubic},~\eqref{phi1enpsi},~\eqref{phi2enpsi}. The analysis of systems of functional equations can be a tricky exercise, even in the \emm positive case,\footnote{By this, we mean a system
given by equations of the form $R_i= F_i(R_1, \ldots, R_m)$ where
the series $F_i$ have non-negative coefficients.} and with 2 equations only. In particular, the connection between the location of the radius and the solution(s) of the so-called \emm characteristic system, is subtle (see~\cite{drmota-systems,burris}). In our case, however, the equation that defines $S$ does not involve the variable $z$ explicitly, and this allows us to proceed safely in two steps. As in Section~\ref{sec:enriched}, we first define $\tilde S\equiv \tilde S(z,u)$ as the unique power series in $z$ satisfying $\tilde S (0,u)=0$ and \begin{eqnarray}
\tilde S &=& u\,\Phi_2(z,\tilde S) \label{S-t-phi}\\ & =& u \sqrt{1-4\tilde S}\ \Psi_2\left( \frac z {(1-4\tilde S)^2}\right)+ \frac u 4 \left( 1-\sqrt{1-4\tilde
S}\right)^2. \label{S-t} \end{eqnarray} We will first study $\tilde S$, and then move to $R$, which is now defined by the following equation: \begin{eqnarray} R&=&z+u \Phi_1(R, \tilde S(R)) \label{R-t}\\ &=&z+u (1-4\tilde S(R))^{3/2} \Psi_1 \left( \frac{R}{(1-4\tilde
S(R))^2}\right) -uR, \label{R-t-Psi} \end{eqnarray} where we have denoted for short $\tilde S(z)=\tilde S(z,u)$. Of course, $S=\tilde S(R)$.
So let us begin with $\tilde S$. One can prove that~\eqref{S-t-phi} fits in the smooth implicit function schema of~\cite[Def.~VII.4]{flajolet-sedgewick}, but we can actually content ourselves with an application of Proposition~\ref{lemmeasympt2}, where $\tilde S$ plays the role of $Y$. The series $H(x,y)= y-u\Phi_2(x,y)$ satisfies the assumptions of this proposition. Define $\tilde \rho$ as in the proposition. Since $\tilde S$ has non-negative coefficients, the points $(z, \tilde S(z)) $ form, as $z$ goes from $0$ to $\tilde \rho$, an increasing curve starting from $(0,0)$ in the plane $\mathbb R^ 2$. Condition (b), together with the properties of $\Phi_2$ described in Section~\ref{sec:pp}, implies that this curve cannot go beyond the parabola $64x=(1-4y) ^ 2$.
This rules out the possibilities (i) and (iv). Now $H'_ y(x,y)=1-u\Phi'_2(x,y)$ approaches $-\infty$ as $(x,y)$ approach the parabola, and thus Condition (d) rules out the possibility (iii). The curve $(z, \tilde S(z))$ thus ends (at $z=\tilde \rho$) before reaching the parabola. Moreover (ii) holds: $H'_y(\tilde \rho, \tilde S(\tilde \rho))=0$, or equivalently, \begin{equation}\label{char-Stilde-positif} 1= u\pd{ \Phi_2} y (\tilde \rho,\tilde S(\tilde \rho)). \end{equation} (The $\liminf$ of (ii) becomes here a true limit because of the positivity of the coefficients of $\Phi_2$ and $\tilde S$.) By (a), the radius of $\tilde S$ is at least $\tilde \rho$. Finally, it follows from~\eqref{S-t-phi} that for $z \in [0, \tilde \rho)$,
\begin{equation}\label{Sprime} \tilde S'(z) = u \, \frac {\pd {\Phi_2} x (z,\tilde S(z))} {1- u\, \pd
{\Phi_2} y (z,\tilde S(z))}. \end{equation} By~\eqref{char-Stilde-positif}, this derivative tends to $ + \infty$ as $z\rightarrow \tilde \rho$. Hence $\tilde S$ has radius $\tilde \rho$. Figure~\ref{fig:Stilde-pos} (left) illustrates the behaviour of $\tilde S$ on $[0, \tilde \rho]$.
\begin{figure}
\caption{\emph{Left:} Plot of $\tilde S(z)$ for $u=1$ and $z\in[0,
\tilde \rho]$. The points $(z, \tilde S(z))$ remain below the parabola
$64z=(1-4\tilde S)^2$.
The plot was obtained using
the expansion of $\tilde S(z)$ at order 80 (this is why the divergence of $\tilde
S'$ at $\tilde \rho$ is not very clear on the picture), and the
estimate $\tilde \rho\simeq 0.01032$. \emph{Right:} Plot of $(R(z), S(z))$ for $u=1$ and $z\in[0, \rho]$, with $\rho\simeq 0.0098$. This curve follows the plot of $\tilde S$, but stops at the point $(R(\rho),S(\rho))$, for which $R(\rho)<\tilde \rho$.}
\label{fig:Stilde-pos}
\end{figure}
\begin{comment}
So let us begin with $\tilde S$. We claim that~\eqref{S-t} fits in the smooth implicit function schema of~\cite[Def.~VII.4,
p.~467-468]{flajolet-sedgewick}. Using the
notation of this book, $G(z,w)=u\Phi_2(z,w)$ is analytic for
$64|z|<|1-4w|^2$ (we will describe later a \emm rectangular, domain
of analyticity
containing the solution of the characteristic system, to comply with
the conditions of Definition~VII.4). Using~\eqref{S-t}, write the
characteristic system $\{s=G(r,s), 1=G'_w(r,s)\}$ as a system in $t:=r/(1-4s)^2$ and
$\delta:=\sqrt{1-4s}$. By eliminating $\delta$ between the two
equations, one finds that $t$ must satisfy: \begin{equation}\label{car1} 1=u^2\left( 4\Psi_2(t)-4\Psi_2^2(t)+64t^2 \Psi'^2_2(t)\right). \end{equation} The right-hand side of this equation, seen as a function of $t$, increases from $0$ to $+\infty$ as $t$ goes from $0$ to $1/4$. Indeed, the derivative can be written as (using~\eqref{ed-cubic}): $$ 4u ^2\frac {1-2\Psi_2(t)}{1-64 t} \Psi'_2(t), $$ and $\Psi(t) \le \Psi(1/64)=1/2-\sqrt 2/\pi<1/2$. This guarantees the existence of a unique $t\equiv t\in [0, 1/64)$
satisfying~\eqref{car1}. Now the equation $G'_w(r,s)=1$ reads \begin{equation}\label{car2} \sqrt{1-4s}= \frac u{1+u} \left( 1-2\Psi_2(t)+8r\Psi'_2(t)\right), \end{equation} If we can prove that the right-hand side belongs to $(0,1]$, then we are done:~\eqref{car2} defines a unique value $s$ in $[0,1/4)$, the equation $s=G(r,s)$ is a consequence of~\eqref{car1}
and~\eqref{car2} and we have found a solution, namely $(t(1-4s)^2,s)$, to the
characteristic system. Clearly, since $|s|<1/4$ and $|t|<1/64$, we
can find a rectangle containing $(0,0)$ and the point
$(t(1-4s)^2,s)$ that is included in the domain of convergence of
$\Phi_2$. All properties of~\cite[Def.~VII.4,
p.~467-468]{flajolet-sedgewick} are now satisfied. It then follows
from Theorem VII.3 of~\cite{flajolet-sedgewick} that $\tilde S$ has radius $r= t(1-4s)^2$. Moreover, $\tilde S(r)=s<1/4$, and for
$z \in [0, r]$, $$ \frac z{(1-4\tilde S(z))^2} \le \frac {r}{(1-4\tilde
S(r))^2}= t <\frac 1{64}.$$ Finally, it follows from~\eqref{S-t-phi} that
$$ \tilde S'(z) = u \, \frac {\pd {\Phi_2} x (z,\tilde S(z))} {1- u\, \pd
{\Phi_2} x (z,\tilde S(z))}. $$ This quantity tends to $ + \infty$ as $z\rightarrow r$, because the equation $1=G'_w(r,s)$ reads $1= u \pd
{\Phi_2} x (r,s)$.
We do not need to know the nature of the singularity of $\tilde S$ at $r$. \end{comment}
Let us now consider the equation~\eqref{R-t} that defines $R$, and prove that it fits in the smooth implicit function schema of~\cite[Def.~VII.4,
p.~467-468]{flajolet-sedgewick}. With the notation of this definition, $G(z,w)= z+u
\Phi_1(w, \tilde S(w))$. The properties of $\tilde S$ established above show that $G$ is analytic in $\mathbb C\times \{w: |w|<\tilde \rho\}$.
The characteristic equation $1=G'_w(\rho,\tau)$ does not involve $\rho$ and reads
\begin{equation}\label{cubic-char}
1= u\, \left(\pd {\Phi_1} x (\tau ,\tilde S(\tau )) + \,\tilde S'(\tau )\, \pd
{\Phi_1} y (\tau ,\tilde S(\tau ))\right). \end{equation} The right-hand side of this equation increases from 0 to $+\infty$ as $\tau$ goes from $0$ to $\tilde \rho$ (because, as observed above, $\tilde S'(\tilde \rho)=+\infty$). Hence~\eqref{cubic-char} determines a unique value of $\tau$ in $(0, \tilde \rho)$. The equation $\tau = G(\rho, \tau)$ gives the value of $\rho$: \begin{equation}\label{rho-val} \rho=\tau -u \Phi_1(\tau,\tilde S(\tau)). \end{equation} Let $\sigma=\tilde S(\tau)$. The combination of~\eqref{rho-val}, \eqref{S-t-phi}, \eqref{cubic-char} and~\eqref{Sprime} proves the properties of $\rho, \tau$ and $\sigma$ stated in Theorem~\ref{cubique}.
The rest of the argument is analogous to the end of the proof of Proposition~\ref{prop:4v-pos}. First, $R$ is aperiodic as shown by the first terms of its expansion at $0$: $$ R=z+2u(2u+3)z^2+4u(42u^2+63u+10u^3+35)z^3+ O(z^4). $$
By Theorem~VII.3 of~\cite[p.~468]{flajolet-sedgewick}, it has radius $\rho$, and is analytic in a $\Delta$-domain of radius $\rho$. It takes the value $\tau$ at $\rho$, with a square root singularity there. By composition with the series $\tilde S$, which has radius $\tilde \rho >\tau$, the same properties hold for $S=\tilde S(R)$, and finally for the series $F'$ given by~\eqref{frs} (since $\theta(x,y)$ is analytic in $\mathbb R^2$ for $64x<(1-4y)^2$).
The behaviour of $R$ and $S$ is illustrated in Figure~\ref{fig:Stilde-pos} (right). \end{proof}
\subsection{When $u<0$}
\begin{prop}
Let $u\in [-1,0)$. The series $R$, $S$ and $F'$ have radius
$\rho\equiv \rho_u$ given by~\eqref{rho-3}. As $z\rightarrow
\rho_u^-$, these three series admit an expansion of the
form~\eqref{exp-3}, with $\beta>0$ for $R$ and $S$, and $\beta<0$ for $F'$ (as in Theorem~\ref{cubique}). \end{prop} \begin{proof} As a preliminary remark, recall that $F(z,u)$ is $(u+1)$-positive, with several combinatorial interpretations described in Section~\ref{sec:tutte}. By Pringsheim's theorem, the radius of $F$ is also its smallest real positive singularity. By Corollary~\ref{positiv}, the same holds for $R$, $S$ and $\tilde S$. This will be used frequently in the proof, without further reference to Pringsheim's theorem.
As in the case $u>0$, we proceed in two steps, and study first the series $\tilde S$ defined by~\eqref{S-t-phi}, and then the series $R$ defined by~\eqref{R-t}.
Let us begin with $\tilde S$, and apply Proposition~\ref{lemmeasympt2} with $H(x,y)=y-u\Phi_2(x,y)$. Let us rule out the possibilities (i), (ii) and (iv).
\noindent \textbf{(i)} Could $\tilde S\equiv \tilde S(z,u)$ have an analytic continuation on $(0, +\infty)$? That is, an infinite radius of convergence? Corollary~\ref{positiv} implies that the radius of $ \tilde S$ is at most the radius of $\tilde S(z,-1)$, which counts (by the number of leaves) enriched \~S-trees with no flippable edge. Since these trees can have arbitrarily large size (Figure~\ref{fig:peigne}), $ \tilde S(z,-1)$ is not a polynomial. Its coefficients are non-negative integers, and hence its radius is at most 1. The same thus holds for $\tilde S(z,u)$.
\begin{figure}
\caption{A cubic enriched \~S-tree with no flippable edge.}
\label{fig:peigne}
\end{figure}
\noindent \textbf{(ii)} By Lemma~\ref{positiv-bis}, the series $\pd {\Phi_2} y (z,\tilde S(z))$ has non-negative coefficients. Since its constant term is $0$, the function $1-u \pd {\Phi_2} y (z,\tilde S(z))$ is increasing on $[0, \tilde \rho)$, with initial value 1: this rules out~(ii).
\noindent \textbf{(iv)} By Corollary~\ref{positiv}, $\tilde S$ is negative and decreases on $[0, \tilde \rho)$. Assume that it tends to $ - \infty$. Since $\tilde \rho$ is finite, this implies that $$ \lim_{z\rightarrow \tilde \rho^-}\Psi_2 \left( \frac{z} {(1 -4\, \tilde S(z))^2} \right) = \Psi_2(0)=0. $$ But then~\eqref{S-t} gives $$ (1+\bar u)\,{\tilde S} = - \sqrt{1-4 \tilde S}/2 + o\left(\sqrt{1-4
\tilde S} \right) ,$$ which is impossible if $\tilde S \rightarrow -\infty$.
We conclude that (iii) holds, so that $\Phi_2$ has no analytic continuation at $(\tilde \rho, \tilde S(\tilde \rho))$. Given the properties of $\Phi_2$ described in Section~\ref{sec:pp}, this means that $$ 64 \tilde \rho=(1-4\tilde S(\tilde \rho))^2. $$ The radius of $\tilde S$ is at least $\tilde \rho$, the value of which we will determine explicitly later.
Figure~\ref{fig:Stilde-neg} shows a plot of $\tilde S$ for
$u=-1/2$. One can in fact prove that $\tilde \rho$ \emm is, the radius of $\tilde
S$, but we will not use that.
\begin{figure}
\caption{A plot of $\tilde S(z)$ for $u=-1/2$ and $z\in[0,
\tilde \rho]$. The curve reaches the parabola
$64z=(1-4\tilde S)^2$ at $\tilde \rho$. The plot was obtained using
the expansion of $\tilde S(z)$ up to order 25. Plotting the pairs
$(R(z), S(z))$ for $z\in [0, \rho)$ gives the same curve.}
\label{fig:Stilde-neg}
\end{figure}
Let us now consider the equation~\eqref{R-t} that defines $R$, and apply Corollary~\ref{lemmeasympt} with
$\Omega(y)=y-u\Phi_1(y, \tilde S(y))$. We have just seen that, as $y$ goes from $0$ to $\tilde \rho$, the pair $(y, \tilde S(y))$ reaches for the
first time the critical parabola at $\tilde \rho$. Hence, with the
notation of Corollary~\ref{lemmeasympt}, the first
singularity of $\Omega$ on the positive real axis satisfies
$\omega\ge \tilde \rho$. Let us define $\tau$ and $\rho$ as in Corollary~\ref{lemmeasympt}.
Could it be that $\Omega'(\tau)=0$? By Corollary~\ref{lemmeasympt}, $R(z)$ increases on $(0, \rho)$ and $\Omega'(R(z))= 1/R'(z)\ge 0$. So could it be that $R'(z)$ tends to $+\infty$ as $z$ tends to $\rho$? No: by Corollary~\ref{positiv}, $\bar u(R'(z)-1)$ has non-negative coefficients, and thus is always larger than its value at $z=0$, which is $0$. Since $u<0$, this gives $R'(z)\le 1$ on $(0, \rho)$, and we conclude that $\Omega'(\tau)>0$. Hence $\tau=\omega\ge \tilde \rho$.
Since $R(z)$ increases from $0$ to $\omega$ on $[0, \rho]$, there
exists a unique $\hat \rho$ such that $R(\hat \rho)=\tilde \rho$.
Since $\tilde S$ has radius at least $\tilde \rho$, the series $S =
\tilde S( R)$ has also radius at least $\hat \rho$.
The plot of the pairs $(R(z), S(z))$ for $z\in [0, \hat \rho]$ coincides with the plot of $(z, \tilde S(z))$ for $z\in [0, \tilde \rho]$ shown in Figure~\ref{fig:Stilde-neg}.
We will now use the system~(\ref{phi1enpsi}--\ref{phi2enpsi}) defining $R$ and $S$ to obtain expansions of
$R$ and $S$ near $\hat \rho$. These expansions will be found to be singular
at $\hat \rho$: this implies that $\hat \rho=\rho$ is the radius of $R$ and $S$.
We adopt the following notation: $z=\hat \rho-x$, $R(z)=\tilde \rho-r$, $S(z)=S(\hat \rho)-s$ {and} \begin{equation}\label{TRS} \frac{R(z)}{(1-4S(z))^2}=\frac 1{64}-\vareps. \end{equation} The quantities $x$, $r$, $s$ and $ \vareps$ tend to $0$ as $z$ tends to $\hat \rho$.
Let us begin by expanding~\eqref{phi2enpsi} for $z$ close to $\hat \rho$. Using the expansion~\eqref{exp-psi2} of $\Psi_2$ near $1/64$, we obtain \begin{equation}\label{exp1} a_1+b_1s+c_1\vareps \ln \vareps + d_1\vareps = O(\vareps^2 \ln \vareps ) + O(s^2)+ O(s\,\vareps \ln \vareps), \end{equation} with $$ a_1=\frac{1+u}4 \delta^2 - \frac{u\sqrt 2} \pi \delta+ \frac{u-1}4, $$ $$ b_1= - \frac{2u \sqrt 2}{\pi \delta}+1+u, \quad c_1= \frac{4\sqrt 2}\pi u \delta , \quad d_1=3c_1, $$ and $\delta=\sqrt{1-4\tilde S(\tilde \rho)}$. In particular, $a_1$ must vanish, which gives the value of $\delta$: \begin{equation}\label{delta-c} \delta= \sqrt{1-4\tilde S(\tilde \rho)}= \frac{2\sqrt 2 u +
\sqrt{\pi^2(1-u^2)+8u^2}}{\pi(1+u)}. \end{equation} (The choice of a minus sign before a square root would give a negative value, which is impossible for $\delta=\sqrt{1-4\tilde S(\tilde \rho)}$.) Note that $u$ has a rational expression in terms of $\delta$: $$ u= - \frac{\pi(\delta^2-1)}{\pi \delta^2-4 \sqrt 2 \delta+\pi}. $$ We will replace all occurrences of $u$ by this expression, to avoid handling algebraic coefficients.
Let us now return to the expansion~\eqref{exp1}. Given that $\delta>0$, we have $b_1>0$ for $u \in[-1,0)$. Hence $s=O(\vareps \ln \vareps)$, and~\eqref{exp1} can be rewritten as \begin{equation}\label{exp2} b_1s+c_1\vareps \ln \vareps + d_1\vareps = O(\vareps^2 \ln^2 \vareps ). \end{equation} Let us now move to~\eqref{TRS}. Using $\tilde \rho= \delta^4/64$, it gives \begin{equation}\label{exp3}
b_2 s + d_2\vareps + e_2 r = O( \vareps^2 \ln^2 \vareps ) \end{equation} with $ b_2=-8 \delta^2, d_2= 64 \delta^4 , e_2=-64$.
Finally, the equation~\eqref{phi1enpsi} that defines $R$ gives \begin{equation}\label{exp4} a_3+b_3s + c_3 \vareps \ln \vareps + d_3 \vareps + e_3 r +f_3 x =O(\vareps^2 \ln ^2 \vareps ), \end{equation} where, in particular, $$ a_3=96(\pi \delta^2 -4\sqrt 2 \delta+\pi)\hat \rho+\delta^3(2\sqrt2\delta^2-3\pi\delta +4\sqrt 2). $$ Since $a_3$ must vanish, we obtain a rational expression of $\hat \rho$ in terms of $\delta$, and then, using~\eqref{delta-c}, an explicit expression which coincides with~\eqref{rho-3}. We do not give here the expressions of $b_3, c_3, d_3, e_3$ and $f_3$, which are rational in $\delta$. They are easy to compute. Let us just mention that $f_3\not =0$.
Now, using~\eqref{exp2},~\eqref{exp3} and~\eqref{exp4} in this order, we obtain for $s$, $r$ and finally $x$ expansions in $\vareps$ of the form \begin{eqnarray} s&= & c_4\,\vareps \ln \vareps + d_4\,\vareps + O(\vareps^2 \ln ^2 \vareps ), \label{s-eps} \\ r&=& c_5\,\vareps \ln \vareps + d_5\,\vareps + O(\vareps^2 \ln ^2 \vareps ),
\label{r-eps}\\ x&=& c_6\,\vareps \ln \vareps + d_6\,\vareps+ O(\vareps^2 \ln ^2\vareps ). \label{x-eps} \end{eqnarray} In particular, $c_6\not = 0$ for $u \in [-1,0)$ and the latter equation gives $ x\sim c_6 \,\vareps \ln \vareps, $ so that $\ln x \sim \ln \vareps$ and thus \begin{equation} \label{eps-x} \vareps = \frac x{c_6\ln x} \left(1+ o(1)\right). \end{equation} To conclude, we use~\eqref{x-eps} to express $\vareps \ln \vareps $ as a linear combination of $x$ and $\vareps$ (plus $O()$ terms), and~ \eqref{eps-x}
to express $\vareps $ in terms of $x$. This replaces~\eqref{s-eps}
and~\eqref{r-eps} by \begin{eqnarray*} s&= & \frac{c_4}{c_6} x + \frac{d_4c_6-c_4d_6}{c_6^2} \frac x{\ln x} (1+o(1)), \\ r&=& \frac{c_5}{c_6} x + \frac{d_5c_6-c_5d_6}{c_6^2} \frac x{\ln x} (1+o(1)) . \end{eqnarray*} These equations, written explicitly, read \begin{eqnarray*} S(z)&=& \frac {1-\delta^2}4+ \frac{4\pi}{\delta\sqrt{\pi^2(1-u^2)+8u^2}} (\hat \rho -z) - \frac{2\sqrt 2\pi}{u\delta}\frac{
\hat \rho -z}{\ln (\hat \rho -z)}\left(1+o(1)\right), \\ R(z)&=& \tilde \rho - \frac{\pi \delta}{2\sqrt{\pi^2(1-u^2)+8u^2}}(\hat \rho-z) - \frac{\sqrt 2 \pi \delta}{4u} \frac{
\hat \rho -z}{\ln(\hat \rho-z)}\left(1+o(1)\right), \end{eqnarray*} as $z\rightarrow \hat \rho$. In particular, $R$ and $S$ are singular at $\hat \rho$, so that $\hat \rho=\rho$ is their common radius.
Using~\eqref{F-cubic}, we finally compute an expansion of $F'(z)$ near $\rho$, which gives~\eqref{exp-3}. The coefficient $\beta$ of $(\rho-z)/\ln(\rho-z)$ does not vanish on $[-1,0)$,
and $F'$ has radius $\rho$ as well. \end{proof}
\section{Final comments} \label{sec:final}
\subsection{Universality}
Our asymptotic results remain incomplete when $p=3$, as we have not been able to obtain the asymptotic behaviour of $f_n(u)$ for negative values of $u$ (but only the singular behaviour of $F'(z,u)$). We still expect $f_n(u)$ to behave like $c_u \rho_u^{-n} n^{-3} (\ln n)^{-2}$, as in the 4-valent case.
We have also examined general even values of $p$. As explained below Theorem~\ref{thm:equations}, the series $S$ vanishes, so that we only deal with one equation (in $R$). When $p=6$ for instance, it reads: $$ R= z+u\Phi(R)= z+u \sum_{\ell \ge 1} \frac{(5\ell)!}{\ell! (4\ell+1)!} R^{2\ell+1}. $$ New difficulties arise from the periodicity of $\Phi$ and $R$, but we still expect the same behaviour for the numbers $f_n(u)$, even though $R$ and $F$ will have multiple singularities on their circle of convergence.
We also plan to study general (non-regular) forested maps.
\subsection{A differential equation involving $F$, rather than $F'$?}
The two differential equations (DEs) obtained for the series $F$ in Section~\ref{sec:de}, for the 4-valent, and then for the cubic case, are in fact equations of order 2 satisfied by $F'$. It is natural to ask if $F$ itself satisfies a DE of order 2. Let us examine in detail the case $p=4$.
Returning to Lemma~\ref{lem:cont}, we first need an expression of $\bar M$. Since $t_{2i+1}=t^c_{2i+1}=0$ when $p=4$, we can content ourselves with an expression of $\bar M$ valid when $g_{2i+1}=0$ for all $i$. Such an expression is easily obtained from the expression~\eqref{relM} of $\bar M'_z$. Indeed, $S=0$ in the even case, and the equations~\eqref{relM} and~\eqref{relS}, written as $$ \bar M'_z= \bar \theta(R), \quad R=z+u\bar \Phi(R), $$ imply at once $$ \bar M= \bar \Psi(R) $$ where \begin{eqnarray*} \bar \Psi(x)&=& \int \bar \theta(x) \left( 1-u \bar \Phi'(x)\right) dx \\
&=& \sum_{i\ge 1} h_{2i} {2i \choose i} \frac{x^{i+1}}{i+1} -u \sum_{i\ge 1, j \ge 0} h_{2i} g_{2j+2} ( {2j+1} ){2i \choose i}{2j\choose j} \frac {x^{i+j+1}}{i+j+1}. \end{eqnarray*} This should be compared to Eq.~(1.4) in~\cite{bg-continuedfractions}, which reads, in the even case, $$
\bar M= \sum_{n\ge 1} h_{2n} {2n \choose n}\frac{ R^{n+1}}{n+1} -u
\sum_{n\ge 1, q\ge 0, k>q}h_{2n} g_{2k} {2n+2q \choose n+q} {2k-2q-2 \choose k-q-1}\frac{ R^{n+k}}{n+q+1}. $$ Our (simpler) expression is obtained by summing over $q$.
Hence for $p=4$, Lemma~\ref{lem:cont} gives \begin{equation}\label{Fzu-expl} F(z,u)= 4\sum_{i\ge 2}\frac{(3i-3)!}{(i-2)! i!^2} \frac{R^{i+1}}{i+1}
-u \sum_{i\ge 2, j \ge 1}\frac{(3i-3)!}{(i-2)!
i!^2}\frac{(3j)!}{j!^3} \frac {R^{i+j+1}}{i+j+1} =\Psi(R), \end{equation} where $\Psi(x)= \Psi_1(x)- u \Psi_2(x)$, $$ \Psi_1(x) =\int \theta(x)\, dx, \quad \quad \Psi_2(x) =\int \theta(x) \Phi'(x) dx, $$ and now $R$ is defined by $R=z+u\Phi(R)$, where $\Phi$ is given by~\eqref{theta-phi-4V}.
Now assume that $F$ is differentially algebraic of order 2: there exists a non-zero polynomial $P$ such that $$ P( F, F', F'', z, u)=0. $$ Equivalently, $$ P(\Psi(R), \theta(R), R'\theta'(R),z,u)=0. $$ Using $z=R-u\Phi(R)$, $R'=(1-u\Phi'(R))^{-1}$ and the equations~\eqref{phi-second} and~\eqref{thetrat} that relate $\theta$, $\Phi$, and their derivatives, we conclude that either $\Psi(x)$ is algebraic over $\mathbb Q(x, u, \Phi(x), \Phi'(x))$, or $\Phi$ and $\Phi'$ are algebraically related over $\mathbb Q(x)$. Let us examine these two possibilities.
\noindent 1. Can $\Psi(x)$ be algebraic over $\mathbb Q(x, u, \Phi(x), \Phi'(x))$? Given that $$ 15 \Psi_1(x)= 15 \int \theta(x)dx= 54 x^2-2(1+81x) \Phi(x) +8x(27x-1) \Phi'(x) $$ and $$ 3\Psi_2(x)= 3\int \theta(x) \Phi'(x) dx= 12 x \Phi(x) -2(1-27x)\Phi(x) \Phi'(x) -48\Phi^2(x) + 12 \int \frac{\Phi^2(x)}{x} dx , $$ this is equivalent to saying that $\int \Phi(x)^2/x dx$ is algebraic over $\mathbb Q(x,\Phi(x), \Phi'(x))$. Or, using~\eqref{F-hg}, that the hypergeometric function $$ f(x)=\ _2F_1\left(\frac 1 3,\frac 2 3;2;x\right) $$ is such that $g(x):= \int xf^2(x)dx$ is algebraic over $\mathbb Q(x, f(x) , f'(x))$ (here, we use the fact that $$ \left. 20 \int xf(x) dx= 9 x^2 f(x) +9x^2(1-x) f'(x).\right) $$ A related question is whether $g$ is a linear combination of $
f^2, ff', (f')^2$. Given that $$ 2f(x)+18(x-1)f'(x)+9x(x-1)f''(x)=0, $$ the vector space spanned over $\mathbb Q(x)$ by these 3 series contains all products $f^{(i)} f^{(j)}$ and is closed under differentiation. This would imply that $g$ satisfies a linear DE of order 4 with coefficients in $\mathbb Q(x)$. Starting from the order 4 DE satisfied by $g'$, $$ -4g'(x)+8x(x-1)g''(x) +27x(x-1)^2g^{(3)}(x) +9x^2(x-1)^2g^{(4)}(x)=0, $$ the Maple command {\tt ode\_int\_y} tells us that $g$ satisfies no linear DE of order 4. Following discussions with Alin Bostan and Bruno Salvy, this actually seems to imply that $g$ is not algebraic over $\mathbb Q(x,f,f')$.
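Both the differential equation satisfied by $f$ and the integral identity used above can be verified order by order on truncated series, as in the following SymPy sketch (the truncation order is arbitrary).

\begin{verbatim}
# Order-by-order check of the ODE for f = 2F1(1/3,2/3;2;x) and of the
# identity 20*int(x f) = 9x^2 f + 9x^2(1-x) f', on truncated series.
import sympy as sp

x = sp.symbols('x')
N = 12
f = sum(sp.rf(sp.Rational(1,3), n)*sp.rf(sp.Rational(2,3), n)
        / (sp.rf(2, n)*sp.factorial(n)) * x**n for n in range(N))

ode = 2*f + 18*(x - 1)*sp.diff(f, x) + 9*x*(x - 1)*sp.diff(f, x, 2)
print(sp.series(sp.expand(ode), x, 0, N - 1))              # expect O(x**11)

lhs = sp.diff(9*x**2*f + 9*x**2*(1 - x)*sp.diff(f, x), x)
print(sp.series(sp.expand(lhs - 20*x*f), x, 0, N - 1))      # expect O(x**11)
\end{verbatim}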
\noindent 2. Now could it be that $F'$ satisfies a DE of order 1? This would imply that $\Phi$ and $\Phi'$ are algebraically linked over $\mathbb Q(x)$, or, equivalently, that $f$ and $f'$ are algebraically linked over $\mathbb Q(x)$. One can prove that this is not the case by combining the fact that $f'(x)$ diverges like $\ln(1-x)$ as $x\rightarrow 1$ with the fact that $f(1)=9\sqrt 3 /(4 \pi)$ is finite and transcendental.
These considerations lead us to believe that we have found in Section~\ref{sec:de4} the equation of minimal order satisfied by $F$.
\noindent{\bf Acknowledgements.}
We are grateful to Yvan Le Borgne and Andrea Sportiello for interesting discussions on this problem, and to Alin Bostan and Bruno Salvy for their help with hypergeometric series.
\end{document}
Numerical Simulation of the Impact of Urban Non-uniformity on Precipitation
Yuqiang SONG (1,2), Hongnian LIU (1), Xueyuan WANG (1), Ning ZHANG (1), Jianning SUN (1)
1. School of Atmospheric Sciences, Nanjing University, Nanjing 210093
2. Dalian Meteorological Station, Dalian 116000
To evaluate the influence of urban non-uniformity on precipitation, the area of a city was divided into three categories (commercial, high-density residential, and low-density residential) according to the building density data from Landsat satellites. Numerical simulations of three corresponding scenarios (urban non-uniformity, urban uniformity, and non-urban) were performed in Nanjing using the WRF model. The results demonstrate that the existence of the city results in more precipitation, and that urban heterogeneity enhances this phenomenon. For the urban non-uniformity, uniformity, and non-urban experiments, the mean cumulative summer precipitation was 423.09 mm, 407.40 mm, and 389.67 mm, respectively. Urban non-uniformity has a significant effect on the amount of heavy rainfall in summer. The cumulative precipitation from heavy rain in the summer for the three numerical experiments was 278.2 mm, 250.6 mm, and 236.5 mm, respectively. In the non-uniformity experiments, the amount of precipitation between 1500 and 2200 (LST) increased significantly. Furthermore, the adoption of urban non-uniformity into the WRF model could improve the numerical simulation of summer rain and its daily variation.
Keywords: urban non-uniformity, urban precipitation, WRF model
Rosenfeld D., 2000: Suppression of rain and snow by urban and industrial air pollution. Science, 287( 5459), 1793- 1796.10.1016/j.ijfoodmicro.2014.11.023107103021af0481b2ecbe261f403a76bf5bd1c8ehttp%3A%2F%2Fwww.ncbi.nlm.nih.gov%2Fpubmed%2F10710302%3Fdopt%3DAbstracthttp://www.ncbi.nlm.nih.gov/pubmed/10710302?dopt=AbstractOur method for the analysis of quantitative microbial data shows a good performance in the estimation of true prevalence and the parameters of the distribution of concentrations, which indicates that it is a useful data analysis tool in the field of QMRA.
Shepherd J. M., H. Pierce, and A. J. Negri, 2002: Rainfall modification by major urban areas: Observations from spaceborne rain radar on the TRMM satellite. J. Appl. Meteor., 41( 7), 689- 701.10.1175/1520-0450(2002)041<0689:RMBMUA>2.0.CO;25607a620-c68e-4cb9-acd3-6041f87f23b60b54b7b2e4dc8fcc409170accd48091fhttp%3A%2F%2Fci.nii.ac.jp%2Fnaid%2F10013125690%2Frefpaperuri:(16e55b4179ade8c71ec65990efcaf437)http://ci.nii.ac.jp/naid/10013125690/Data from the Tropical Rainfall Measuring Mission (TRMM) satellite's precipitation radar (PR) were employed to identify warm-season rainfall (1998-2000) patterns around Atlanta, Georgia; Montgomery, Alabama; Nashville, Tennessee; and San Antonio, Waco, and Dallas, Texas. Results reveal an average increase of about 28% in monthly rainfall rates within 30-60 km downwind of the metropolis, with a modest increase of 5.6% over the metropolis. Portions of the downwind area exhibit increases as high as 51%. The percentage changes are relative to an upwind control area. It was also found that maximum rainfall rates in the downwind impact area exceeded the mean value in the upwind control area by 48%-116%. The maximum value was generally found at an average distance of 39 km from the edge of the urban center or 64 km from the center of the city. Results are consistent with the Metropolitan Meteorological Experiment (METROMEX) studies of St. Louis, Missouri, almost two decades ago and with more recent studies near Atlanta. The study establishes the possibility of utilizing satellite-based rainfall estimates for examining rainfall modification by urban areas on global scales and over longer time periods. Such research has implications for weather forecasting, urban planning, water resource management, and understanding human impact on the environment and climate.
Skamarock W.C., Coruthors, 2008: A description of the advanced research WRF version 3. NCAR Tech. Note NCAR/ TN-475+STR,88 pp.10.5065/D68S4MVH6e1e8ed5238484bf7e6021f9957054e6http%3A%2F%2Fwww.researchgate.net%2Fpublication%2F244955031_A_Description_of_the_Advanced_Research_WRF_Version_2http://www.researchgate.net/publication/244955031_A_Description_of_the_Advanced_Research_WRF_Version_2The development of the Weather Research and Forecasting (WRF) modeling system is a multiagency effort intended to provide a next-generation mesoscale forecast model and data assimilation system that will advance both the understanding and prediction of mesoscale weather and accelerate the transfer of research advances into operations. The model is being developed as a collaborative effort ort among the NCAR Mesoscale and Microscale Meteorology (MMM) Division, the National Oceanic and Atmospheric Administration's (NOAA) National Centers for Environmental Prediction (NCEP) and Forecast System Laboratory (FSL), the Department of Defense's Air Force Weather Agency (AFWA) and Naval Research Laboratory (NRL), the Center for Analysis and Prediction of Storms (CAPS) at the University of Oklahoma, and the Federal Aviation Administration (FAA), along with the participation of a number of university scientists. The WRF model is designed to be a flexible, state-of-the-art, portable code that is an efficient in a massively parallel computing environment. A modular single-source code is maintained that can be configured for both research and operations. It offers numerous physics options, thus tapping into the experience of the broad modeling community. Advanced data assimilation systems are being developed and tested in tandem with the model. WRF is maintained and supported as a community model to facilitate wide use, particularly for research and teaching, in the university community. It is suitable for use in a broad spectrum of applications across scales ranging from meters to thousands of kilometers. Such applications include research and operational numerical weather prediction (NWP), data assimilation and parameterized-physics research, downscaling climate simulations, driving air quality models, atmosphere-ocean coupling, and idealized simulations (e.g boundary-layer eddies, convection, baroclinic waves).*WEATHER FORECASTING
Song J., J. P. Tang, and J. N. Sun, 2009: Simulation study of the effects of urban canopy on the local meteorological field in the Nanjing area. Journal of Nanjing University (Natural Sciences), 45( 6), 779- 789. (in Chinese)10.1360/972008-2143a6c0f887befc66dd8aab329f074ec80ehttp%3A%2F%2Fen.cnki.com.cn%2FArticle_en%2FCJFDTOTAL-NJDZ200906008.htmhttp://en.cnki.com.cn/Article_en/CJFDTOTAL-NJDZ200906008.htmNumerical experiments were made to investigate the effects of urban canopy on local meteorological field in Nanjing area,from July 17th to July 18th,2005,by using the Weather Research and Forecasting Model(WRF).Three types of surface conditions were employed in the simulations: the wrf-no urban case,in which the natural surface was chosen;the wrf-ucm case,in which the urban surface was chosen but the Urban Canopy Model(UCM) was not used;the wrf+ucm case,in which the urban surface was chosen,and the UCM was employed.The results of numerical experiments were compared with the data of the field observations.This study shows that the temperature at 2 m in the wrf+ucm case is slightly lower than that in the wrf-ucm case,and both are higher than that in the wrf-no urban case.Meanwhile,the sensible heat flux of the urban surface with canopy is similar to that of urban surface without canopy in the daytime,while the former is slightly higher than the latter during the night.The values in the two cases are obviously higher than in the case with natural surface in the area.However,the latent heat flux in the wrf+ucm case is lower than that in the wrf-ucm case,and both are much lower than that in the wrf-no urban case.That is to say,urban ground makes the urban fields drier than the natural ones.On the other hand,the effects of urban canopy influence the air flow in the area,which makes the horizontal wind be reduced over the city,and consequently the vertical motion is enhanced.It seems that this influence is more obvious during the night than in the daytime.
Song Y. Q., H. N. Liu, X. Y. Wang, N. Zhang, and J. N. Sun, 2014a: The influence of urban heterogeneity on the surface energy balance and characters of temperature and wind. Journal of Nanjing University (Natural Sciences), 50( 6), 810- 819. (in Chinese)
Song Y. Q., H. N. Liu, Y. Zhu, and X. Y. Wang, 2014b: Numerical simulation of urban heterogeneity's influence on urban meteorological characteristic. Plateau Meteorology, 33( 6), 1579- 1588. (in Chinese)10.7522/j.issn.1000-0534.2013.00080b831a731-2894-4551-8b54-079b61be6255mag4842620143361579The Nanjing city was divided into three types according the building density: Commercial, Hi-dens Res(High intensity residential), Low-dens Res(Low intensity residential). The influence of urban heterogeneity on urban meteorological characteristic in Nanjing was researched by WRF model. The results show that: After considering the effect of the urban heterogeneity, the spatial distribution of temperature, the urban heat island, the relative humidity and the winds exhibit are more complex in the urban region. In the simulations of urban canopy, urban heterogeneity has obvious effects on the heat island and other meteorological characters.The simulated mean heat island intensity, dry island intensity and decrease of wind of city will decrease 0.02℃, 0.2% and 0.11 m·s<sup>-1</sup>. But the maximum heat island intensity and dry island intensity of city will increase 0.28℃ and 1.51%. In the city of considering heterogeneity, the spaial distribution variances of urban heat island, dry island and decrease of wind will increase 0.06, 2.08 and 0.28.
Sun J. S., B. Yang, 2008: Meso- scale torrential rain affected by topography and the urban circulation. Chinese Journal of Atmospheric Sciences, 32( 6), 1352- 1364. (in Chinese)10.3878/j.issn.1006-9895.2008.06.105208fcfa-da3b-481c-8734-ddc4e916ae8c4825320083269Some theoretical features of meso-β scale torrential rain, which are caused by joint action of topography and the urban heat island, are gained by mesoscale dynamic meteorology theory and scale analysis. Using observation datasets with high spatial-temporal resolution based on auto-weather station network and wind profile data from two profilers which are located at different positions, most of the theoretical features are confirmed by three cases which occurred in Beijing in the summer of 2006. The results indicate that (1) the temperature gradient in front of mountains, mainly caused by the urban heat island, is able to engender a relatively isolated vertical wind shear near the windward slope, and the shear is much more important to grow, develop and maintain the mesoscale convective system. The closer the mountain is to urban areas, the stronger the temperature gradient in front of mountains is, and the local stronger vertical wind shear is easy to be at the position. On the other hand, the response time of strong vertical wind shear depends on the intensity of temperature gradient. (2) Once stronger convective precipitation begins on the windward slope, the positive feedback between rainfall intensity and horizontal wind velocity toward the windward slope will appear, and the process is an essential condition to form meso-β scale torrential rain. (3) The stronger the terrain grade is, the stronger ascending motion will be forced and the smaller horizontal-scale mesoscale weather system will be stirred; in front of smoother topography, however, the mesoscale system at a relatively larger horizontal scale is easy to be formed. (4) generally, most of the mesoscale torrential rain processes, which are caused by joint influence of topography and thermodynamic urban circulation, should occur in front of mountains in the evening or the early morning.
Sun J. S., H. Wang, L. Wang, F. Liang, Y. X. Kang, and X. Y. Jiang, 2006: The role of urban boundary layer in local convective torrential rain happening in Beijing on 10 July 2004. Chinese J. Atmos. Sci., 30( 2), 221- 234. (in Chinese)98bef774-50b0-4007-a50b-7def9d36e94e76505b89b766b97d046d5acb398cfb99http%3A%2F%2Fen.cnki.com.cn%2Farticle_en%2Fcjfdtotal-dqxk200602004.htmrefpaperuri:(3750b9c219257f7d898babd9e9205148)http://en.cnki.com.cn/article_en/cjfdtotal-dqxk200602004.htmAn isolated mesoscale convective torrential rain which happened in Beijing urban on 10 July 2004("7.10") made a great traffic trouble because of serious inundation cross the urban areas and caught various social attention.The triggering mechanism of the convective torrential rain and the reason that the downpour occurred only in urban center are studied by analyzing a large number of observation data sets,such as observational data with high spatial-temporal resolution based on auto-weather station network,Doppler radar observational products,available vertical distribution of wind detected by a boundary wind profiler,TBB data from GOES and conventional weather observational data sets.Based on a simple mesoscale theoretical analysis and detail observational investigation,the spatial structure of the weather system is proposed.The research results indicate that(1) the local vapor condition and the large-scale vapor transportation are favorable during the torrential rain.However,the large-scale descending area has been keeping inhibition during the weather event,and this is a great difference between the isolated meso-scale convective storm system(MCSS) and other mesoscale torrential rain events which happen in regional precipitation;(2) The convective activities in Beijing area are closely related to gravity wave.At the initial stage of "7.10" local torrential rainfall,the local convective instable energy is possible to be triggered by gravity wave which is motivated by the stronger convective activities in Laishui and Yixian counties of Hebei Province to the southwest of Beijing and a series of relatively isolated meso- scale convection cells (MCCs),which appear to be linear,develop in Beijing.Finally,a meso- scale convective storm system is organized by urban mesoscale convergence line.The MCSS not only causes the heaviest rain intensity in the urban center,but also excites gravity wave and brings forth the similar meso- scale convection cells again.When the meso- scale convection cells are reorganized,a meso- scale convective storm system reappears in the urban areas.However,the second heaviest rain intensity is debilitated obviously compared with the former MCSS because the local instable energy have been released partly during the first precipitation period;(3) prior to the torrential rain,a mesoscale convergence line can be observed not only in urban surface but also in total boundary layer above,which plays a key role in organizing the isolated meso- scale convection cells(MCCs).The research confirms that the thermodynamic forcing caused by the temperature difference between urban areas and suburbs,is a fundamental factor in the development of the convergence line.On the other hand,because of the thermodynamic difference between urban areas and suburbs,the vertical wind shear is strengthened in the urban center,and the horizontal flow in lower layer is accelerated in suburbs,in other words,the thermodynamic forcing is advantageous to keeping the stronger convergence motion toward central convection area and 
providing enough compensated moisture current around a relative large field.
Wu X., X. Y. Wang, X. N. Zeng, and L. Xu, 2000: The effect of urbanization on short duration precipitation in Beijing. Journal of Nanjing Institute of Meteorology, 23( 1), 68- 72. (in Chinese)10.1142/S175882511200130076967d4b5c54e92cbd0ef69d08739b67http%3A%2F%2Fen.cnki.com.cn%2FArticle_en%2FCJFDTOTAL-NJQX200001010.htmhttp://en.cnki.com.cn/Article_en/CJFDTOTAL-NJQX200001010.htmHour precipitation from AWS in urban and suburb of Beijing is analysed to study the effects of urbanization on short duration precipitation.Results show that hour precipitation can be fitted to logarithm Weibull distribution and that the enhancement of rainfall due to urbanization is remarkable in downwind under moderate/heavy short duration precipitation process,and that the increase of probability and intensity of torrential rain is significant in urban center.
Zhang C. L., S. G. Miao, Q. C. Li, and F. Chen, 2007: Impacts of fine-resolution land use information for Beijing on a summer, severe rainfall simulation. Chinese Journal of Geophysics, 50( 5), 1373- 1382. (in Chinese)10.1002/cjg2.1136ba5b0ff1-3750-42dc-8a93-6f603c25e717f7a22f1b6f1dfd036655e5c9ec5d3af6http%3A%2F%2Fonlinelibrary.wiley.com%2Fdoi%2F10.1002%2Fcjg2.1136%2Fpdfrefpaperuri:(ab8770c39e864ce201c7e9b5c7406362)http://en.cnki.com.cn/Article_en/CJFDTotal-DQWX200705011.htmUsing the land use data around Beijing in 2000 with the resolution of 500 m,we updated the U.S.Geological Survey global land use classification data for numerical weather model,in which there are 25 types with 30 s lat-lon equidistant grids(1 km resolution).And then by 24-hour numerical experiments with the MM5V3.6 coupled with Noah LSM system,two domain two-way nested with the resolution of 10-3.3 km,we investigated the impact of fine-resolution land use information incorporation on a summer severe rainfall in Beijing.Analyses show that,the new land use data can not only represent better the real characteristic of underlying surface around Beijing area,especially the rapid expanding of urban/built-up areas since 1990s',but also help to correct the unreasonable classification Savanna in the original USGS data for the middle-latitudes of Asia data as the deciduous broadleaf.Furthermore,numerical experiments prove that incorporation of the fine-resolution land use information has a significant impact on the short-range severe rainfall weather event.For the intensity and location of major rainfall centers,their difference ranges of 12 h rainfall amount are beyond 30 km,and the relative difference of the maximum rainfall amount reaches up to 30%.One important interaction mechanism between urban underlying surface and atmosphere is also revealed,that is,the urban expansion reduces natural vegetation cover,and then it can help to decrease ground evaporation and local water vapor supply,enlarge surface sensible heat flux,deepen PBL height and enhance the mixing of water vapor.Hence it is not conducive to the occurrence of the rainfall.
Manuscript received: 21 April 2015
Manuscript revised: 29 December 2015
1. School of Atmospheric Sciences, Nanjing University, Nanjing 210093
2. Dalian Meteorological Station, Dalian 116000
Abstract: To evaluate the influence of urban non-uniformity on precipitation, the area of a city was divided into three categories (commercial, high-density residential, and low-density residential) according to the building density data from Landsat satellites. Numerical simulations of three corresponding scenarios (urban non-uniformity, urban uniformity, and non-urban) were performed in Nanjing using the WRF model. The results demonstrate that the existence of the city results in more precipitation, and that urban heterogeneity enhances this phenomenon. For the urban non-uniformity, uniformity, and non-urban experiments, the mean cumulative summer precipitation was 423.09 mm, 407.40 mm, and 389.67 mm, respectively. Urban non-uniformity has a significant effect on the amount of heavy rainfall in summer. The cumulative precipitation from heavy rain in the summer for the three numerical experiments was 278.2 mm, 250.6 mm, and 236.5 mm, respectively. In the non-uniformity experiments, the amount of precipitation between 1500 and 2200 (LST) increased significantly. Furthermore, the adoption of urban non-uniformity into the WRF model could improve the numerical simulation of summer rain and its daily variation.
Urbanization takes place at an exceptionally rapid rate in China. The areas of cities in China, particularly in the Beijing-Tianjin-Hebei, Yangtze River Delta, and Pearl River Delta urban agglomeration regions, have continued to grow. The process of urbanization has altered the natural surface and resulted in increases in anthropogenic heat and pollutant emissions, which inevitably have impacts on urban meteorological environments. Problems such as "urban heat islands" (UHIs), "dry islands", "rain islands" and "turbid islands" have arisen in association with urbanization.
Many studies have been conducted on these phenomena, and important progress has already been accomplished. However, the urban effect on precipitation is relatively complicated. Potential mechanisms of the effects of cities on precipitation include the dynamic actions of urban buildings and thermodynamic actions of UHIs altering the flow field characteristics, the impervious surfaces of urban areas affecting the evaporation process and influencing land-surface water vapor transfer, and urban air pollution (e.g., the direct and indirect effects of aerosols) affecting radiation and cloud microphysical processes. Studying the urban effect on precipitation has become a research focus in the atmospheric sciences, with a large number of studies devoted to the subject (Jauregui and Romales, 1996; Lowry, 1998; Li et al., 2008). Liao et al. (2011) analyzed the pattern of variation in precipitation in Guangzhou using daily observed precipitation data from 1959 to 2009 and found that the number of heavy rain days is increasing. Zhang et al. (2007) determined that urban expansion reduces the coverage of natural vegetation, which further reduces surface evaporation and the local water vapor supply; meanwhile, it increases the boundary layer height and enhances the mixing of atmospheric water vapor, ultimately decreasing the amount of precipitation. Sun et al. (2006) analyzed the formation mechanism and the role of the urban boundary layer in a relatively independent meso-β-scale convective rainstorm system in Beijing. The observational study conducted by Chen et al. (2000) determined that precipitation in the Yangtze River Delta region increased in response to urbanization, while the temperature in surrounding regions decreased. Based on mesoscale weather dynamics theory and a scale analysis method, Sun and Yang (2008) found that a meso-β-scale rainstorm was affected by the interaction of the terrain and UHIs. Shepherd et al. (2002) analyzed the distribution of summer precipitation in several cities in the U.S. from 1998 to 2000 using TRMM satellite data. They reported a 28% increase in the mean monthly precipitation at locations 30-60 km from cities in the downwind direction, and an approximate 5.6% increase in urban areas. Rosenfeld (2000) suggested that urban and industrial air pollution can suppress precipitation and snowfall in downstream locations. Finally, Wu et al. (2000) demonstrated that the most significant urban effect attributed to the thermal and dynamic effects of cities was the increase in short-term precipitation.
A high-resolution numerical simulation is a common and effective tool for studying the effects of cities on precipitation. As a result of its exceptional performance, the WRF model has been widely applied in the field of urban meteorology. Details on the WRF model can be found in Skamarock et al. (2008). The WRF model classifies cities into three categories: commercial, high-density residential (hi-dens res), and low-density residential (low-dens res). The buildings in commercial areas are the tallest, whereas the buildings in low-dens res areas are the shortest. In most cases, when using WRF (version 3.3.1 in this study) to simulate urban meteorology, a city is assigned to one of the aforementioned categories based on its overall building density (Song et al., 2009). However, because urban density is non-uniform, all three types of urban area exist within a single city, and it is therefore difficult to conclude that a city belongs solely to one of these categories. Hu et al. (2013) considered urban non-uniformity in a study on low-level jets. For the city of Nanjing, the downtown area (i.e., Gulou and Xinjiekou) contains numerous skyscrapers and is commercial; the southern part of the city, close to the Qinhuai River, belongs to the hi-dens res category; and the eastern part (i.e., Xianlin) belongs to the low-dens res category. Categorizing the city into only one classification may therefore cause relatively large deviation in the model. Some studies (Song et al., 2014a, b) show that the surface energy balance, temperature and wind are significantly influenced by urban non-uniformity, but there have been few studies on the link between urban non-uniformity and precipitation. Building dynamics, UHIs and aerosols all influence precipitation, and the former two affect each other. When urban non-uniformity is considered, the urban dynamic parameters change, which also leads to a change in the UHI effect. Therefore, urban non-uniformity is an important factor and should be considered in precipitation simulation. In the present study, Nanjing was recategorized based on the characteristics of different areas of the city, and the potential effect of urban non-uniformity on precipitation was investigated.
2.1. Model description
The WRF model system is composed of a new-generation mesoscale forecasting model and assimilation system, jointly established in 1997 by the Mesoscale & Microscale Meteorology Division of the NCAR, the Environmental Modeling Center at the NCEP, the Forecast Research Division of the Forecast Systems Laboratory, and the Center for Analysis and Prediction of Storms at the University of Oklahoma. The WRF model system has been applied extensively.
The urban canopy model (UCM) in the WRF model was used in the present study. The UCM includes the following features: (1) streets with 2D structures are parameterized to calculate their thermal characteristics; (2) the shadows and reflections of buildings are considered; (3) the courses of streets and the daily variation of the solar elevation angle are considered; (4) the thermal effects of road surfaces, wall surfaces and roofs are differentiated.
2.2. Non-uniform distribution of Nanjing
Three experiments (A, B and C) were designed for the present study. In experiment A, the non-uniformity of the city was considered; different areas of the city were classified into three categories: commercial, hi-dens res, and low-dens res. In experiment B, the city was considered to be uniform (hi-dens res). And in experiment C, the effect of a non-urban environment was considered; the original land surface types of the city were replaced by irrigated cropland and pasture.
The city classification was mainly based on building density (Song et al., 2014a, Fig. 1). The building densities were obtained by statistically analyzing the 25-m resolution land surface type data derived from Landsat satellites. When the building density was less than 0.3, the area was defined as low-dens res; when it was greater than or equal to 0.45, the area was defined as commercial; areas in between were defined as hi-dens res. The distributions of the land surface types in the third (innermost) model domain for experiments A and B can be found in Song et al. (2014a, Fig. 2). In the non-uniform experiment, the commercial, hi-dens res and low-dens res categories accounted for 23.7%, 41.2% and 35.1% of the urban area, respectively, whereas in the uniform experiment the city area was entirely hi-dens res. The land types of the central and peripheral areas of Nanjing change when this spatial variation is considered: although the mean building height of the city decreases, the non-uniformity of the urban distribution increases.
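As an illustration of the classification rule just described, the following minimal Python sketch assigns the three WRF urban categories from a gridded building-density field. The array, the handling of densities between 0.3 and 0.45, and the category codes are assumptions made for the sketch; this is not the authors' actual preprocessing code.

```python
import numpy as np

# Hypothetical building-density field (fraction, 0-1) on the 1-km inner grid.
building_density = np.random.rand(101, 101)

# Arbitrary category codes used only in this sketch.
LOW_DENS_RES, HI_DENS_RES, COMMERCIAL = 31, 32, 33

urban_category = np.full(building_density.shape, HI_DENS_RES)
urban_category[building_density < 0.30] = LOW_DENS_RES   # low-intensity residential
urban_category[building_density >= 0.45] = COMMERCIAL    # commercial / high-rise

# Fraction of the (hypothetical) urban area in each category
for name, code in [("low-dens res", LOW_DENS_RES),
                   ("hi-dens res", HI_DENS_RES),
                   ("commercial", COMMERCIAL)]:
    print(f"{name}: {np.mean(urban_category == code):.1%}")
```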
Compared with a uniform city, the mean values of sensible heat flux, UHI intensity and friction velocity are lower in a non-uniform city, but the extreme values are larger (Song et al., 2014a). This indicates that a non-uniform city provides weaker mean land-surface forcing while enhancing the turbulence associated with that forcing relative to a uniform city. The enhancement of precipitation is therefore attributable to the urban non-uniformity itself rather than to stronger mean land-surface forcing.
Figure 1. Observed and simulated monthly mean precipitation at Nanjing station.
Figure 2. Spatial distribution of summer accumulated precipitation (mm) in the Nanjing region: (a) non-uniform experiment; (b) uniform experiment; (c) non-urban experiment; (d) non-uniform-uniform comparison. The wind fields in the figure are the mean wind fields at 10 m for rainy summer days (m s$^{-1}$).
2.3. Experimental design
A three-level, two-way nested grid was used for the simulation. The domain was centered at 32.1004°N, 118.8986°E. The outer domain was 900 km × 900 km, with 9-km horizontal grid spacing; the middle domain was 303 km × 303 km, with 3-km spacing; and the innermost domain was 101 km × 101 km, with 1-km spacing. The model top was set at 100 hPa and there were 27 vertical layers. The simulation period was from 0000 UTC 1 January 2011 to 1800 UTC 31 December 2011, and the model was run month by month. The 1° × 1° resolution NCEP data were used as the boundary conditions, updated every 6 h, and the results were output every hour. The model parameterizations can be found in Song et al. (2014a, Table 1); the Multi-layer Building Environment Model, the Monin-Obukhov surface-layer scheme and the unified Noah land-surface scheme were chosen in this study.
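The nested-domain setup can be summarized with a few lines of bookkeeping; the Python sketch below simply derives the number of grid cells in each domain from the sizes and resolutions stated above. It is an illustrative check, not the WRF namelist actually used by the authors.

```python
# Nested domains as described in the text: (domain size in km, grid spacing in km)
domains = {
    "d01 (outer)":  (900, 9),
    "d02 (middle)": (303, 3),
    "d03 (inner)":  (101, 1),
}

for name, (size_km, dx_km) in domains.items():
    n_cells = size_km // dx_km  # grid cells along one side of the domain
    print(f"{name}: {size_km} km at {dx_km}-km spacing -> {n_cells} x {n_cells} cells")

# Lateral boundaries: 1-degree NCEP analyses every 6 h; model output every hour.
```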
Figure 3. Frequency of summer (June, July and August) precipitation in the Nanjing region (%): (a) non-uniform experiment; (b) uniform experiment; (c) non-urban experiment; (d) non-uniform-uniform comparison.
3.1. Total summer precipitation
The precipitation for Jiangsu province in the summer of 2011 ranged from 247.1 mm (Guanyun) to 1243.6 mm (Jiangyin), mainly concentrated in the southern part of Jiangsu and over the Yangtze River. The mean was 731.2 mm, which was approximately 50% of the annual value. Figure 1 compares the simulated and observed total monthly precipitation (mm) in the three experiments. The surface data from a representative station in Nanjing (58238; coordinates: 31.93°N, 118.90°E) were selected as the observation. The observed annual precipitation amount in 2011 was 989.2 mm. The simulated precipitation amount for the station in Nanjing, based on the three experiments, was 810.4 mm, 723.2 mm and 625.7 mm, respectively. In winter, spring and fall, the precipitation simulated in the non-uniform, uniform and non-urban experiments was relatively close to the observation. However, in summer, the accumulated precipitation simulated in the uniform and non-urban experiments was significantly lower than observed, whereas that simulated in the non-uniform experiment was closest to the observation. The error of the monthly mean accumulated precipitation for 2011 was 14.9 mm in the non-uniform experiment, which was lower than that of the uniform experiment. Thus, the simulation accuracy regarding urban precipitation can be effectively increased when urban non-uniformity is considered in the WRF model.
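To make the station comparison concrete, the sketch below computes the annual total and the mean monthly error from a pair of monthly precipitation series. The monthly breakdown of the observations is invented for the example (only its annual total, 989.2 mm, matches the paper), and the simulated series is likewise a placeholder.

```python
import numpy as np

# Hypothetical monthly precipitation (mm) at station 58238 in 2011 (obs sums to 989.2 mm).
obs = np.array([30, 25, 60, 70, 55, 180, 220, 160, 90, 45, 34, 20.2])
sim = np.array([28, 27, 55, 65, 60, 150, 190, 140, 80, 40, 30, 18.0])

print(f"annual obs: {obs.sum():.1f} mm, annual sim: {sim.sum():.1f} mm")
print(f"mean monthly error: {np.mean(np.abs(sim - obs)):.1f} mm")
```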
Compared with the uniform experiment, the accumulated precipitation in the non-uniform experiment for the 12 months increased by 0.68%, -0.33%, 0.50%, -2.40%, -1.13%, 14.51%, -6.81%, 2.42%, -0.31%, -0.45%, -2.41% and -0.52%, respectively; for summer (June, July and August) the values were 14.51%, -6.81% and 2.42%. The differences among the three experiments were greatest in summer because the amount of precipitation is largest in this season. Therefore, the effect of urban non-uniformity on summer precipitation (June, July and August) is mainly discussed hereafter. Figure 2 presents the spatial distribution of the simulated accumulated precipitation in summer in the Nanjing region, together with a mean northeasterly wind field on rainy days. The wind speed decreased significantly in the urban area, especially in the city center in the non-uniform experiment, and, compared with the uniform experiment, the airflow also converged toward the center in the non-uniform experiment (Fig. 2d). The simulated summer mean accumulated precipitation for the entire region was 423.09 mm, 407.40 mm and 389.67 mm in experiments A, B and C, respectively. There was a significant difference among the simulations, and the non-uniform experiment produced the largest amount. Compared with the non-urban experiment, the existence of the city resulted in a significant increase in precipitation in the urban area and downstream locations. The distribution patterns of the summer precipitation were also significantly different between the non-uniform and uniform experiments: precipitation increased more markedly in the urban area and to the southeast in the non-uniform experiment, whereas it decreased in downstream areas, and the maximum precipitation amount was greater in the non-uniform experiment.
Figure 4. Intensity of summer precipitation in the Nanjing region (mm h$^{-1}$): (a) non-uniform experiment; (b) uniform experiment; (c) non-urban experiment; (d) non-uniform-uniform comparison. The wind fields in the figure are the mean wind fields at 10 m for rainy summer days (m s$^{-1}$).
Figure 5. Daily variation of summer precipitation in the Nanjing region: (a) observation data from Nanjing station; (b) simulation results.
Figure 6. Spatial distribution of accumulated precipitation on 20 July 2011, in Nanjing (mm): (a) non-uniform experiment; (b) uniform experiment; (c) non-urban experiment; (d) non-uniform-uniform comparison. The wind fields in the figure are the mean wind fields at 10 m on 20 July 2011 (m s$^{-1}$).
Figure 3 presents the spatial distributions of the simulated summer precipitation frequency (the fraction of summer hours with precipitation) in the Nanjing region. Compared with the non-urban experiment (experiment C), the precipitation frequency was higher in the urban areas of experiments A and B. In general, the presence of the city increased the precipitation frequency in the urban area; the frequency also increased to the southeast of the city but decreased somewhat to the northwest.
Figure 4 presents the spatial distributions of the simulated summer precipitation intensity (the ratio of accumulated summer precipitation to the number of precipitation hours) in the Nanjing region. The distributions were broadly similar to those of the accumulated summer precipitation. The mean precipitation intensity simulated in the non-uniform, uniform and non-urban experiments was 1.37 mm h$^{-1}$, 1.33 mm h$^{-1}$ and 1.26 mm h$^{-1}$, respectively.
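Both diagnostics can be computed directly from hourly model output. The Python sketch below shows one plausible implementation, assuming an hourly rain-rate series and a wet-hour threshold of 0.1 mm; the threshold, the input array and the exact definitions are assumptions, since the paper does not spell out its formulas.

```python
import numpy as np

# Hypothetical hourly summer precipitation (mm) at one grid point: 92 days x 24 h
hourly_precip = np.random.gamma(shape=0.1, scale=5.0, size=92 * 24)

WET_THRESHOLD = 0.1  # mm per hour; assumed cutoff for a "precipitation hour"

wet = hourly_precip >= WET_THRESHOLD
frequency = wet.mean()                                      # fraction of hours with rain
intensity = hourly_precip[wet].sum() / max(wet.sum(), 1)    # mm per precipitation hour

print(f"precipitation frequency: {frequency:.1%}")
print(f"precipitation intensity: {intensity:.2f} mm/h")
```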
Figure 7. Daily mean friction velocity on 20 July 2011 across the Nanjing region (m s$^{-1}$): (a) non-uniform experiment; (b) uniform experiment; (c) non-urban experiment; (d) non-uniform-uniform comparison.
Figure 5a presents the observed accumulated precipitation in 6-h intervals at Nanjing station. The data indicate that the occurrence of precipitation was greatest between 0200 and 0800 (LST), slightly lower between 0800 and 1400 (LST) and between 1400 and 2000 (LST), and lowest between 2000 and 0200 (LST). Figure 5b presents the daily variation of the summer mean accumulated precipitation for the entire Nanjing region. In all three experiments, in contrast to the observations, the simulated precipitation was lowest at night and highest in the morning. In the morning, the precipitation was lowest in the non-uniform experiment; in the afternoon, however, the precipitation in the uniform and non-urban experiments was much lower than in the morning, while the non-uniform experiment produced much higher amounts, comparable to its morning values. The results of the non-uniform experiment were clearly better. Therefore, urban non-uniformity can significantly increase convective precipitation on summer afternoons and improve the model's performance in reproducing the daily cycle of precipitation.
In the present study, light, moderate and heavy rain were defined as daily precipitation from 0.02 to 10 mm, from 10 to 20 mm, and greater than 20 mm, respectively. The effects of the three experiments on these three grades of rain were also compared. There was no significant difference between the spatial distribution of accumulated precipitation simulated in the three experiments for light and moderate rain (data not shown). The mean accumulated precipitation for light rain simulated in experiments A, B and C was 73.5 mm, 79.9 mm and 75.3 mm, respectively. And for moderate rain, the values were 71.5 mm, 76.9 mm and 77.9 mm, respectively. However, there were significant differences among the accumulated precipitation amounts for heavy rain in the three experiments: 278.2 mm (experiment A); 250.6 mm (experiment B), and 236.5 mm (experiment C). Hence, the effect of urban non-uniformity on precipitation was primarily manifested during heavy precipitation events in summer.
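As a concrete illustration of this rain-grade partition, the sketch below splits a daily precipitation series into the three classes and accumulates each. The thresholds follow the text; the input series and the treatment of the boundary values are assumptions for the example.

```python
import numpy as np

# Hypothetical daily summer precipitation (mm) at one grid point (92 days)
daily_precip = np.random.gamma(shape=0.3, scale=12.0, size=92)

light    = daily_precip[(daily_precip >= 0.02) & (daily_precip < 10.0)]
moderate = daily_precip[(daily_precip >= 10.0) & (daily_precip < 20.0)]
heavy    = daily_precip[daily_precip >= 20.0]

for name, grade in [("light", light), ("moderate", moderate), ("heavy", heavy)]:
    print(f"{name} rain: {grade.sum():.1f} mm accumulated over {grade.size} days")
```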
3.2. Analysis of a precipitation event
Figure 6 shows the distribution of simulated precipitation for an event that occurred on 20 July 2011 in the Nanjing region. The precipitation and its intensity were smallest in the non-urban experiment and largest in the non-uniform experiment. In addition, the majority of precipitation was distributed in and around the urban area. For both the precipitation range and its intensity, the results of the uniform experiment (experiment B) were between those of the non-urban and non-uniform experiments. The mean accumulated precipitation simulated in the three experiments was 1.24 mm, 0.93 mm and 0.23 mm, respectively.
Figure 7 presents the spatial distributions of mean daily friction velocity on 20 July 2011. The friction velocity in the urban area was significantly greater than that in the suburban area. When urban non-uniformity was considered, the friction velocity exhibited a more complicated spatial distribution: it increased markedly in the central urban area and also increased over downstream locations, even where the friction velocity was otherwise relatively low. Although the overall urban building height decreased in the non-uniform city, the non-uniformity of the urban distribution increased. Overall, the roughness increased, resulting in significant disturbances in the flow field across the urban area.
Figure 8. Daily mean vertical velocity profile on 20 July 2011 across the Nanjing region (cm s$^{-1}$) [vertical wind vector (w) $\times 25$; the blue lines at the bottom of (a, b, d) represent the region of the city]: (a) non-uniform experiment; (b) uniform experiment; (c) non-urban experiment; (d) non-uniform-uniform comparison. The blue boxes reflect the regions that have significant updrafts. The wind fields in the figure are the mean wind fields in the vertical direction on 20 July 2011 (m s$^{-1}$).
Figure 9. Daily mean water vapor flux divergence at 850 hPa on 20 July 2011 across the Nanjing region (10$^{-6}$ kg m$^{-2}$ s$^{-2}$): (a) non-uniform experiment; (b) uniform experiment; (c) non-urban experiment; (d) non-uniform-uniform comparison. The red box marks the region of significant water vapor flux convergence.
Figure 10. Daily mean vertical velocity at 700 hPa on 20 July 2011 across the Nanjing region (m s$^{-1}$): (a) non-uniform experiment; (b) uniform experiment; (c) non-urban experiment; (d) non-uniform-uniform comparison.
Figure 8 presents the mean daily vertical velocity profiles in the y direction that passed through the center of the domain on 20 July 2011. Compared to experiments B and C, there was a significant updraft in the north of the city in experiment A (see the areas within the blue rectangles in Figs. 8a and d), which corresponded very well to the location of the precipitation.
Figure 9 presents the spatial distributions of daily mean water vapor flux divergence at 850 hPa on 20 July 2011. For experiment A, there was significant water vapor flux convergence (negative flux divergence) in the precipitation area (the area within the red rectangle in Fig. 9a). The mean water vapor flux divergence in the precipitation area was $-0.268\times 10^{-6}$, $-0.055\times 10^{-6}$ and $-0.045\times 10^{-6}$ kg m$^{-2}$ s$^{-2}$ in experiments A, B and C, respectively. The area of water vapor convergence within the precipitation region for the three experiments was 608 km$^2$, 554 km$^2$ and 543 km$^2$, respectively. Hence, the water vapor convergence was strongest and covered the largest area in the non-uniform experiment.
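For reference, the plotted quantity can be diagnosed from model output as the horizontal divergence of the moisture flux at 850 hPa. The sketch below uses centered differences on the uniform 1-km grid; the field names, magnitudes and units are assumptions rather than the authors' post-processing code (the paper's kg m$^{-2}$ s$^{-2}$ units presumably include a density or pressure-thickness weighting that is omitted here).

```python
import numpy as np

dx = dy = 1000.0  # grid spacing (m), matching the 1-km inner domain

# Hypothetical 850-hPa fields: specific humidity (kg/kg) and horizontal wind (m/s)
q = np.random.uniform(0.010, 0.018, size=(101, 101))
u = np.random.uniform(-5.0, 5.0, size=(101, 101))
v = np.random.uniform(-5.0, 5.0, size=(101, 101))

# Horizontal divergence of the moisture flux q*V (units of 1/s in this simplified form)
qu, qv = q * u, q * v
div_qv = np.gradient(qu, dx, axis=1) + np.gradient(qv, dy, axis=0)

print("mean moisture flux divergence:", div_qv.mean(), "s^-1")
print("fraction of grid points with convergence:", (div_qv < 0).mean())
```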
Figure 10 presents the distributions of daily mean vertical velocity at 700 hPa on 20 July 2011. Compared with the other two experiments, there was significant upward motion in the precipitation area in experiment A, and the areas of positive vertical velocity were more concentrated. The mean vertical velocity within the precipitation area at 700 hPa was 1.1 cm s$^{-1}$, 0.1 cm s$^{-1}$ and $-0.5$ cm s$^{-1}$ for experiments A, B and C, respectively, and the corresponding areas of upward motion were 553 km$^2$, 427 km$^2$ and 268 km$^2$. Hence, the upward velocity at 700 hPa was largest and the area of upward motion widest in experiment A, with experiment B second in both respects. In experiment C, the mean vertical motion was downward, and the area of upward motion was the smallest among the three experiments.
Mechanistically, the urban impact on precipitation involves dynamic, thermodynamic and chemical effects. The dynamic effects involve increases in surface roughness and enhancements to the drag and lift effects on the airflow. Thermodynamic effects encompass changes in the surface energy balance and the impact of the UHI on the structure of the urban boundary layer. Chemical effects mainly relate to anthropogenic increases in aerosols, which influence the microstructure of clouds (the "aerosol indirect effects"). In this study, the setup of the WRF model did not include chemical processes. Therefore, the impact of urban non-uniformity on precipitation was restricted to the other two aspects: thermodynamic and dynamic effects. Compared with the uniform city, the mean UHI intensity and heat flux of the non-uniform city were lower. Figure 11 shows the diurnal variation of the UHI intensity in the non-uniform and uniform experiments, indicating that the diurnal variation of the UHI cannot explain the difference in the diurnal variation of precipitation between these two urban experimental setups (Fig. 5b). We therefore believe that, in this simulation, the influence of the UHI was not the main reason for the increased precipitation in the non-uniform city. The UHI may, however, still play an important role in the increase of precipitation relative to the non-urban experiment (Miao et al., 2011).
In the experiments carried out in this work, the total volume of buildings in the non-uniform and uniform setups was 6.41 km$^3$ and 6.07 km$^3$, respectively. The volume in the non-uniform experiment was 5.6% higher, which was close to the percentage increase of summer precipitation (3.85%). However, heavy rain in summer increased by 11%, much more than the increase in building volume in the non-uniform experiment. This shows that the increase in summer precipitation was due to two factors, the increase in building volume and the urban non-uniformity, and that the effect of urban non-uniformity on convective precipitation was much greater than that on non-convective precipitation.
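As a quick check (added for the reader's convenience), the percentages quoted above follow from the figures reported earlier:

$\frac{6.41-6.07}{6.07}\approx 5.6\%,\qquad \frac{423.09-407.40}{407.40}\approx 3.85\%,\qquad \frac{278.2-250.6}{250.6}\approx 11.0\%.$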
Figure 11. Daily variation of the summer UHI in the Nanjing region.
In the present study, sensitivity simulations using the WRF model were conducted to investigate the effect of urban non-uniformity on precipitation in Nanjing in 2011. The main findings can be summarized as follows:
(1) The effect of urban non-uniformity on precipitation was relatively small in winter, spring and fall, but relatively large in summer. The precipitation simulated in the non-uniform experiment was the most comparable to observations, implying that consideration of urban non-uniformity can significantly improve model performance in terms of urban summer precipitation.
(2) Urbanization will result in increases of total accumulated precipitation, precipitation intensity and precipitation frequency in urban areas, and this effect is further increased when urban non-uniformity is considered. The accumulated summer precipitation was 423.1 mm, 407.4 mm and 389.7 mm in the non-uniform, uniform and non-urban experiments, respectively. Therefore, the amount of precipitation simulated in the non-uniform experiment was largest.
(3) The simulated contribution of heavy rain (daily accumulated precipitation >20 mm) to total precipitation was significantly higher in the non-uniform experiment. The summer mean accumulated precipitation from heavy rain was 278.19 mm, 250.61 mm and 236.54 mm in the three experiments, respectively. The effect on light and moderate rain was relatively small.
(4) When urban non-uniformity was considered, the precipitation in the morning decreased, but the precipitation between 1500 and 2200 (LST) increased significantly. The pattern of the daily variation was closest to observations in the non-uniform experiment.
(5) The effect of urban non-uniformity on precipitation is realized mainly through increased land surface roughness and surface friction velocity, which in turn enhance low-level water vapor convergence and the mean upward velocity, promoting an increase in heavy precipitation in the afternoon.
It is important to note that in investigating the effect of urban non-uniformity on precipitation in this study, the urban non-uniformity was represented by only three categories. Furthermore, the dynamic and thermodynamic effects relating to urban non-uniformity were not separated. This will be the next step in our continuing research. | CommonCrawl |
Method of steepest descent
In mathematics, the method of steepest descent or saddle-point method is an extension of Laplace's method for approximating an integral, where one deforms a contour integral in the complex plane to pass near a stationary point (saddle point), in roughly the direction of steepest descent or stationary phase. The saddle-point approximation is used with integrals in the complex plane, whereas Laplace’s method is used with real integrals.
For the optimization algorithm, see Gradient descent.
The integral to be estimated is often of the form
$\int _{C}f(z)e^{\lambda g(z)}\,dz,$
where C is a contour, and λ is large. One version of the method of steepest descent deforms the contour of integration C into a new path of integration C′ so that the following conditions hold:
1. C′ passes through one or more zeros of the derivative g′(z),
2. the imaginary part of g(z) is constant on C′.
The method of steepest descent was first published by Debye (1909), who used it to estimate Bessel functions and pointed out that it occurred in the unpublished note by Riemann (1863) about hypergeometric functions. The contour of steepest descent has a minimax property; see Fedoryuk (2001). Siegel (1932) described some other unpublished notes of Riemann, where he used this method to derive the Riemann–Siegel formula.
Basic idea
The method of steepest descent is a method to approximate a complex integral of the form
$I(\lambda )=\int _{C}f(z)e^{\lambda g(z)}\,\mathrm {d} z$
for large $\lambda \rightarrow \infty $, where $f(z)$ and $g(z)$ are analytic functions of $z$. Because the integrand is analytic, the contour $C$ can be deformed into a new contour $C'$ without changing the integral. In particular, one seeks a new contour on which the imaginary part of $g(z)={\text{Re}}[g(z)]+i\,{\text{Im}}[g(z)]$ is constant. Then
$I(\lambda )=e^{i\lambda {\text{Im}}\{g(z)\}}\int _{C'}f(z)e^{\lambda {\text{Re}}\{g(z)\}}\,\mathrm {d} z,$
and the remaining integral can be approximated with other methods like Laplace's method.[1]
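As a concrete illustration, consider the large-x behaviour of the Airy function, a classical application of the saddle-point idea: the leading-order approximation is $\mathrm{Ai}(x)\approx e^{-{\frac {2}{3}}x^{3/2}}/(2{\sqrt {\pi }}\,x^{1/4})$. The following minimal sketch (assuming NumPy and SciPy are available; the choice of example is arbitrary and not part of the method itself) compares this asymptotic with a reference implementation:

```python
# Minimal sketch: leading-order saddle-point asymptotic of the Airy function
# Ai(x) for large x, compared against SciPy's reference implementation.
import numpy as np
from scipy.special import airy

def airy_leading_order(x):
    """Leading-order asymptotic Ai(x) ~ exp(-2/3 x^{3/2}) / (2 sqrt(pi) x^{1/4})."""
    zeta = (2.0 / 3.0) * x ** 1.5
    return np.exp(-zeta) / (2.0 * np.sqrt(np.pi) * x ** 0.25)

for x in (2.0, 5.0, 10.0, 20.0):
    exact = airy(x)[0]                      # airy() returns (Ai, Ai', Bi, Bi')
    approx = airy_leading_order(x)
    print(f"x={x:5.1f}  Ai={exact:.6e}  asymptotic={approx:.6e}  "
          f"rel.err={abs(approx - exact) / exact:.2e}")
```

The relative error decreases like x^(-3/2), as expected for a leading-order saddle-point approximation.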
Etymology
The method is called the method of steepest descent because for analytic $g(z)$, constant phase contours are equivalent to steepest descent contours.
If $g(z)=X(z)+iY(z)$ is an analytic function of $z=x+iy$, it satisfies the Cauchy–Riemann equations
${\frac {\partial X}{\partial x}}={\frac {\partial Y}{\partial y}}\qquad {\text{and}}\qquad {\frac {\partial X}{\partial y}}=-{\frac {\partial Y}{\partial x}}.$
Then
${\frac {\partial X}{\partial x}}{\frac {\partial Y}{\partial x}}+{\frac {\partial X}{\partial y}}{\frac {\partial Y}{\partial y}}=\nabla X\cdot \nabla Y=0,$
so contours of constant phase are also contours of steepest descent.
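This orthogonality is easy to verify symbolically for any particular analytic function; the following sketch (using SymPy, with g(z) = z³ + 2z as an arbitrary example) checks that $\nabla X\cdot \nabla Y=0$:

```python
# Sketch: verify grad(Re g) . grad(Im g) = 0 for a sample analytic g(z).
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x + sp.I * y
g = z**3 + 2*z                      # any analytic function will do
X = sp.re(sp.expand(g))             # X(x, y) = Re g
Y = sp.im(sp.expand(g))             # Y(x, y) = Im g

dot = sp.diff(X, x) * sp.diff(Y, x) + sp.diff(X, y) * sp.diff(Y, y)
print(sp.simplify(dot))             # prints 0
```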
A simple estimate
Let f, S : Cn → C and C ⊂ Cn. If
$M=\sup _{x\in C}\Re (S(x))<\infty ,$
where $\Re (\cdot )$ denotes the real part, and there exists a positive real number λ0 such that
$\int _{C}\left|f(x)e^{\lambda _{0}S(x)}\right|dx<\infty ,$
then the following estimate holds:[2]
$\left|\int _{C}f(x)e^{\lambda S(x)}dx\right|\leqslant {\text{const}}\cdot e^{\lambda M},\qquad \forall \lambda \in \mathbb {R} ,\quad \lambda \geqslant \lambda _{0}.$
Proof of the simple estimate:
${\begin{aligned}\left|\int _{C}f(x)e^{\lambda S(x)}dx\right|&\leqslant \int _{C}|f(x)|\left|e^{\lambda S(x)}\right|dx\\&\equiv \int _{C}|f(x)|e^{\lambda M}\left|e^{\lambda _{0}(S(x)-M)}e^{(\lambda -\lambda _{0})(S(x)-M)}\right|dx\\&\leqslant \int _{C}|f(x)|e^{\lambda M}\left|e^{\lambda _{0}(S(x)-M)}\right|dx&&\left|e^{(\lambda -\lambda _{0})(S(x)-M)}\right|\leqslant 1\\&=\underbrace {e^{-\lambda _{0}M}\int _{C}\left|f(x)e^{\lambda _{0}S(x)}\right|dx} _{\text{const}}\cdot e^{\lambda M}.\end{aligned}}$
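A one-dimensional numerical illustration of the estimate (with the arbitrary choices C = [0, 1], S(x) = −x² + ix³, f ≡ 1 and λ0 = 1, so that M = 0) might look as follows:

```python
# Sketch: check |int_C f e^{lam S} dx| <= const * e^{lam M} on a toy 1-D example.
import numpy as np
from scipy.integrate import quad

S = lambda x: -x**2 + 1j * x**3        # Re S attains its supremum M = 0 at x = 0
f = lambda x: 1.0
lam0, M = 1.0, 0.0

# const = e^{-lam0 M} * integral of |f e^{lam0 S}| over C
const = np.exp(-lam0 * M) * quad(lambda x: abs(f(x) * np.exp(lam0 * S(x))), 0, 1)[0]

for lam in (1.0, 5.0, 20.0):
    re = quad(lambda x: (f(x) * np.exp(lam * S(x))).real, 0, 1)[0]
    im = quad(lambda x: (f(x) * np.exp(lam * S(x))).imag, 0, 1)[0]
    print(f"lam={lam:5.1f}   |I(lam)| = {abs(re + 1j*im):.4e}  <=  "
          f"bound = {const * np.exp(lam * M):.4e}")
```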
The case of a single non-degenerate saddle point
Basic notions and notation
Let x be a complex n-dimensional vector, and
$S''_{xx}(x)\equiv \left({\frac {\partial ^{2}S(x)}{\partial x_{i}\partial x_{j}}}\right),\qquad 1\leqslant i,\,j\leqslant n,$
denote the Hessian matrix for a function S(x). If
${\boldsymbol {\varphi }}(x)=(\varphi _{1}(x),\varphi _{2}(x),\ldots ,\varphi _{k}(x))$
is a vector function, then its Jacobian matrix is defined as
${\boldsymbol {\varphi }}_{x}'(x)\equiv \left({\frac {\partial \varphi _{i}(x)}{\partial x_{j}}}\right),\qquad 1\leqslant i\leqslant k,\quad 1\leqslant j\leqslant n.$
A non-degenerate saddle point, z0 ∈ Cn, of a holomorphic function S(z) is a critical point of the function (i.e., ∇S(z0) = 0) where the function's Hessian matrix has a non-vanishing determinant (i.e., $\det S''_{zz}(z^{0})\neq 0$).
The following is the main tool for constructing the asymptotics of integrals in the case of a non-degenerate saddle point:
Complex Morse lemma
The Morse lemma for real-valued functions generalizes as follows[3] for holomorphic functions: near a non-degenerate saddle point z0 of a holomorphic function S(z), there exist coordinates in terms of which S(z) − S(z0) is exactly quadratic. To make this precise, let S be a holomorphic function with domain W ⊂ Cn, and let z0 in W be a non-degenerate saddle point of S, that is, ∇S(z0) = 0 and $\det S''_{zz}(z^{0})\neq 0$. Then there exist neighborhoods U ⊂ W of z0 and V ⊂ Cn of w = 0, and a bijective holomorphic function φ : V → U with φ(0) = z0 such that
$\forall w\in V:\qquad S({\boldsymbol {\varphi }}(w))=S(z^{0})+{\frac {1}{2}}\sum _{j=1}^{n}\mu _{j}w_{j}^{2},\quad \det {\boldsymbol {\varphi }}_{w}'(0)=1,$
Here, the μj are the eigenvalues of the matrix $S_{zz}''(z^{0})$.
Proof of complex Morse lemma
The following proof is a straightforward generalization of the proof of the real Morse lemma, which can be found in [4]. We begin by demonstrating
Auxiliary statement. Let f : Cn → C be holomorphic in a neighborhood of the origin and f (0) = 0. Then in some neighborhood, there exist functions gi : Cn → C such that
$f(z)=\sum _{i=1}^{n}z_{i}g_{i}(z),$
where each gi is holomorphic and
$g_{i}(0)=\left.{\tfrac {\partial f(z)}{\partial z_{i}}}\right|_{z=0}.$
From the identity
$f(z)=\int _{0}^{1}{\frac {d}{dt}}f\left(tz_{1},\cdots ,tz_{n}\right)dt=\sum _{i=1}^{n}z_{i}\int _{0}^{1}\left.{\frac {\partial f(z)}{\partial z_{i}}}\right|_{z=(tz_{1},\ldots ,tz_{n})}dt,$
we conclude that
$g_{i}(z)=\int _{0}^{1}\left.{\frac {\partial f(z)}{\partial z_{i}}}\right|_{z=(tz_{1},\ldots ,tz_{n})}dt$
and
$g_{i}(0)=\left.{\frac {\partial f(z)}{\partial z_{i}}}\right|_{z=0}.$
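The integral formula for the gi can be checked symbolically on a concrete example (the polynomial f below is an arbitrary choice with f(0) = 0):

```python
# Sketch: check f(z) = z1*g1(z) + z2*g2(z) with
# g_i(z) = integral over t in [0,1] of (df/dz_i)(t z), for a sample f with f(0) = 0.
import sympy as sp

z1, z2, t = sp.symbols('z1 z2 t')
f = z1**2 * z2 + 3*z1 + z2**2          # arbitrary example, f(0, 0) = 0

def g(var):
    df = sp.diff(f, var)
    return sp.integrate(df.subs([(z1, t*z1), (z2, t*z2)]), (t, 0, 1))

g1, g2 = g(z1), g(z2)
print(sp.expand(z1*g1 + z2*g2 - f))                          # 0: the identity holds
print(g1.subs([(z1, 0), (z2, 0)]),
      sp.diff(f, z1).subs([(z1, 0), (z2, 0)]))               # 3 3: g_1(0) = df/dz_1(0)
```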
Without loss of generality, we translate the origin to z0, such that z0 = 0 and S(0) = 0. Using the Auxiliary Statement, we have
$S(z)=\sum _{i=1}^{n}z_{i}g_{i}(z).$
Since the origin is a saddle point,
$\left.{\frac {\partial S(z)}{\partial z_{i}}}\right|_{z=0}=g_{i}(0)=0,$
we can also apply the Auxiliary Statement to the functions gi(z) and obtain
$S(z)=\sum _{i,j=1}^{n}z_{i}z_{j}h_{ij}(z).$
(1)
Recall that an arbitrary matrix A can be represented as a sum of symmetric A(s) and anti-symmetric A(a) matrices,
$A_{ij}=A_{ij}^{(s)}+A_{ij}^{(a)},\qquad A_{ij}^{(s)}={\tfrac {1}{2}}\left(A_{ij}+A_{ji}\right),\qquad A_{ij}^{(a)}={\tfrac {1}{2}}\left(A_{ij}-A_{ji}\right).$
The contraction of any symmetric matrix B with an arbitrary matrix A is
$\sum _{i,j}B_{ij}A_{ij}=\sum _{i,j}B_{ij}A_{ij}^{(s)},$
(2)
i.e., the anti-symmetric component of A does not contribute because
$\sum _{i,j}B_{ij}C_{ij}=\sum _{i,j}B_{ji}C_{ji}=-\sum _{i,j}B_{ij}C_{ij}=0.$
Thus, hij(z) in equation (1) can be assumed to be symmetric with respect to the interchange of the indices i and j. Note that
$\left.{\frac {\partial ^{2}S(z)}{\partial z_{i}\partial z_{j}}}\right|_{z=0}=2h_{ij}(0);$
hence, det(hij(0)) ≠ 0 because the origin is a non-degenerate saddle point.
Let us show by induction that there are local coordinates u = (u1, ... un), z = ψ(u), 0 = ψ(0), such that
$S({\boldsymbol {\psi }}(u))=\sum _{i=1}^{n}u_{i}^{2}.$
(3)
First, assume that there exist local coordinates y = (y1, ... yn), z = φ(y), 0 = φ(0), such that
$S({\boldsymbol {\phi }}(y))=y_{1}^{2}+\cdots +y_{r-1}^{2}+\sum _{i,j=r}^{n}y_{i}y_{j}H_{ij}(y),$
(4)
where Hij is symmetric due to equation (2). By a linear change of the variables (yr, ... yn), we can assure that Hrr(0) ≠ 0. From the chain rule, we have
${\frac {\partial ^{2}S({\boldsymbol {\phi }}(y))}{\partial y_{i}\partial y_{j}}}=\sum _{l,k=1}^{n}\left.{\frac {\partial ^{2}S(z)}{\partial z_{k}\partial z_{l}}}\right|_{z={\boldsymbol {\phi }}(y)}{\frac {\partial \phi _{k}}{\partial y_{i}}}{\frac {\partial \phi _{l}}{\partial y_{j}}}+\sum _{k=1}^{n}\left.{\frac {\partial S(z)}{\partial z_{k}}}\right|_{z={\boldsymbol {\phi }}(y)}{\frac {\partial ^{2}\phi _{k}}{\partial y_{i}\partial y_{j}}}$
Therefore:
$S''_{yy}({\boldsymbol {\phi }}(0))={\boldsymbol {\phi }}'_{y}(0)^{T}S''_{zz}(0){\boldsymbol {\phi }}'_{y}(0),\qquad \det {\boldsymbol {\phi }}'_{y}(0)\neq 0;$
whence,
$0\neq \det S''_{yy}({\boldsymbol {\phi }}(0))=2^{r-1}\det \left(2H_{ij}(0)\right).$
The matrix (Hij(0)) can be recast in the Jordan normal form: (Hij(0)) = LJL−1, where L gives the desired non-singular linear transformation and the diagonal of J contains non-zero eigenvalues of (Hij(0)). Since Hrr(0) ≠ 0, by continuity of Hrr(y) it must also be non-vanishing in some neighborhood of the origin. Having introduced ${\tilde {H}}_{ij}(y)=H_{ij}(y)/H_{rr}(y)$, we write
${\begin{aligned}S({\boldsymbol {\varphi }}(y))=&y_{1}^{2}+\cdots +y_{r-1}^{2}+H_{rr}(y)\sum _{i,j=r}^{n}y_{i}y_{j}{\tilde {H}}_{ij}(y)\\=&y_{1}^{2}+\cdots +y_{r-1}^{2}+H_{rr}(y)\left[y_{r}^{2}+2y_{r}\sum _{j=r+1}^{n}y_{j}{\tilde {H}}_{rj}(y)+\sum _{i,j=r+1}^{n}y_{i}y_{j}{\tilde {H}}_{ij}(y)\right]\\=&y_{1}^{2}+\cdots +y_{r-1}^{2}+H_{rr}(y)\left[\left(y_{r}+\sum _{j=r+1}^{n}y_{j}{\tilde {H}}_{rj}(y)\right)^{2}-\left(\sum _{j=r+1}^{n}y_{j}{\tilde {H}}_{rj}(y)\right)^{2}\right]+H_{rr}(y)\sum _{i,j=r+1}^{n}y_{i}y_{j}{\tilde {H}}_{ij}(y)\end{aligned}}$
Motivated by the last expression, we introduce new coordinates z = η(x), 0 = η(0),
$x_{r}={\sqrt {H_{rr}(y)}}\left(y_{r}+\sum _{j=r+1}^{n}y_{j}{\tilde {H}}_{rj}(y)\right),\qquad x_{j}=y_{j},\quad \forall j\neq r.$
The change of the variables y ↔ x is locally invertible since the corresponding Jacobian is non-zero,
$\left.{\frac {\partial x_{r}}{\partial y_{k}}}\right|_{y=0}={\sqrt {H_{rr}(0)}}\left[\delta _{r,\,k}+\sum _{j=r+1}^{n}\delta _{j,\,k}{\tilde {H}}_{jr}(0)\right].$
Therefore,
$S({\boldsymbol {\eta }}(x))={x}_{1}^{2}+\cdots +{x}_{r}^{2}+\sum _{i,j=r+1}^{n}{x}_{i}{x}_{j}W_{ij}(x).$
(5)
Comparing equations (4) and (5), we conclude that equation (3) is verified. Denoting the eigenvalues of $S''_{zz}(0)$ by μj, equation (3) can be rewritten as
$S({\boldsymbol {\varphi }}(w))={\frac {1}{2}}\sum _{j=1}^{n}\mu _{j}w_{j}^{2}.$
(6)
Therefore,
$S''_{ww}({\boldsymbol {\varphi }}(0))={\boldsymbol {\varphi }}'_{w}(0)^{T}S''_{zz}(0){\boldsymbol {\varphi }}'_{w}(0),$
(7)
From equation (6), it follows that $\det S''_{ww}({\boldsymbol {\varphi }}(0))=\mu _{1}\cdots \mu _{n}$. The Jordan normal form of $S''_{zz}(0)$ reads $S''_{zz}(0)=PJ_{z}P^{-1}$, where Jz is an upper diagonal matrix containing the eigenvalues and det P ≠ 0; hence, $\det S''_{zz}(0)=\mu _{1}\cdots \mu _{n}$. We obtain from equation (7)
$\det S''_{ww}({\boldsymbol {\varphi }}(0))=\left[\det {\boldsymbol {\varphi }}'_{w}(0)\right]^{2}\det S''_{zz}(0)\Longrightarrow \det {\boldsymbol {\varphi }}'_{w}(0)=\pm 1.$
If $\det {\boldsymbol {\varphi }}'_{w}(0)=-1$, then interchanging two variables assures that $\det {\boldsymbol {\varphi }}'_{w}(0)=+1$.
The asymptotic expansion in the case of a single non-degenerate saddle point
Assume
1. f (z) and S(z) are holomorphic functions in an open, bounded, and simply connected set Ωx ⊂ Cn such that Ix = Ωx ∩ Rn is connected;
2. $\Re (S(z))$ has a single maximum: $\max _{z\in I_{x}}\Re (S(z))=\Re (S(x^{0}))$ for exactly one point x0 ∈ Ix;
3. x0 is a non-degenerate saddle point (i.e., ∇S(x0) = 0 and $\det S''_{xx}(x^{0})\neq 0$).
Then, the following asymptotic holds
$I(\lambda )\equiv \int _{I_{x}}f(x)e^{\lambda S(x)}dx=\left({\frac {2\pi }{\lambda }}\right)^{\frac {n}{2}}e^{\lambda S(x^{0})}\left(f(x^{0})+O\left(\lambda ^{-1}\right)\right)\prod _{j=1}^{n}(-\mu _{j})^{-{\frac {1}{2}}},\qquad \lambda \to \infty ,$
(8)
where μj are eigenvalues of the Hessian $S''_{xx}(x^{0})$ and $(-\mu _{j})^{-{\frac {1}{2}}}$ are defined with arguments
$\left|\arg {\sqrt {-\mu _{j}}}\right|<{\tfrac {\pi }{4}}.$
(9)
This statement is a special case of more general results presented in Fedoryuk (1987).[5]
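Before turning to the derivation, the simplest one-dimensional, real-valued instance of (8) (Laplace's method) can be checked on the Gamma function: writing $\Gamma (\lambda +1)=\lambda ^{\lambda +1}\int _{0}^{\infty }e^{\lambda (\ln s-s)}\,ds$, the saddle point s = 1 with S(1) = −1 and S″(1) = −1 inserted into (8) yields Stirling's approximation $\Gamma (\lambda +1)\approx {\sqrt {2\pi \lambda }}\,\lambda ^{\lambda }e^{-\lambda }$. A minimal sketch (comparing logarithms to avoid overflow):

```python
# Sketch: 1-D real case of equation (8) applied to the Gamma function,
# Gamma(lam+1) = lam^{lam+1} * int_0^inf e^{lam(ln s - s)} ds, saddle at s = 1.
import math

for lam in (5.0, 20.0, 100.0):
    log_exact  = math.lgamma(lam + 1.0)                  # log Gamma(lam+1)
    log_saddle = 0.5 * math.log(2.0 * math.pi * lam) + lam * math.log(lam) - lam
    print(f"lam={lam:6.1f}  log Gamma={log_exact:.6f}  "
          f"saddle={log_saddle:.6f}  diff={log_exact - log_saddle:.2e}")
```

The difference of the logarithms behaves like 1/(12λ), consistent with the O(λ⁻¹) correction in (8).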
Derivation of equation (8)
First, we deform the contour Ix into a new contour $I'_{x}\subset \Omega _{x}$ passing through the saddle point x0 and sharing the boundary with Ix. This deformation does not change the value of the integral I(λ). We employ the Complex Morse Lemma to change the variables of integration. According to the lemma, the function φ(w) maps a neighborhood x0 ∈ U ⊂ Ωx onto a neighborhood Ωw containing the origin. The integral I(λ) can be split into two: I(λ) = I0(λ) + I1(λ), where I0(λ) is the integral over $U\cap I'_{x}$, while I1(λ) is over $I'_{x}\setminus (U\cap I'_{x})$ (i.e., the remaining part of the contour I′x). Since the latter region does not contain the saddle point x0, the value of I1(λ) is exponentially smaller than I0(λ) as λ → ∞;[6] thus, I1(λ) is ignored. Introducing the contour Iw such that $U\cap I'_{x}={\boldsymbol {\varphi }}(I_{w})$, we have
$I_{0}(\lambda )=e^{\lambda S(x^{0})}\int _{I_{w}}f[{\boldsymbol {\varphi }}(w)]\exp \left(\lambda \sum _{j=1}^{n}{\tfrac {\mu _{j}}{2}}w_{j}^{2}\right)\left|\det {\boldsymbol {\varphi }}_{w}'(w)\right|dw.$
(10)
Recalling that x0 = φ(0) as well as $\det {\boldsymbol {\varphi }}_{w}'(0)=1$, we expand the pre-exponential function $f[{\boldsymbol {\varphi }}(w)]$ into a Taylor series and keep just the leading zero-order term
$I_{0}(\lambda )\approx f(x^{0})e^{\lambda S(x^{0})}\int _{\mathbf {R} ^{n}}\exp \left(\lambda \sum _{j=1}^{n}{\tfrac {\mu _{j}}{2}}w_{j}^{2}\right)dw=f(x^{0})e^{\lambda S(x^{0})}\prod _{j=1}^{n}\int _{-\infty }^{\infty }e^{{\frac {1}{2}}\lambda \mu _{j}y^{2}}dy.$
(11)
Here, we have substituted the integration region Iw by Rn because both contain the origin, which is a saddle point, hence they are equal up to an exponentially small term.[7] The integrals in the r.h.s. of equation (11) can be expressed as
${\mathcal {I}}_{j}=\int _{-\infty }^{\infty }e^{{\frac {1}{2}}\lambda \mu _{j}y^{2}}dy=2\int _{0}^{\infty }e^{-{\frac {1}{2}}\lambda \left({\sqrt {-\mu _{j}}}y\right)^{2}}dy=2\int _{0}^{\infty }e^{-{\frac {1}{2}}\lambda \left|{\sqrt {-\mu _{j}}}\right|^{2}y^{2}\exp \left(2i\arg {\sqrt {-\mu _{j}}}\right)}dy.$
(12)
From this representation, we conclude that condition (9) must be satisfied in order for the r.h.s. and l.h.s. of equation (12) to coincide. According to assumption 2, $\Re \left(S_{xx}''(x^{0})\right)$ is a negative-definite quadratic form (viz., $\Re (\mu _{j})<0$), implying the existence of the integral ${\mathcal {I}}_{j}$, which is readily calculated
${\mathcal {I}}_{j}={\frac {2}{{\sqrt {-\mu _{j}}}{\sqrt {\lambda }}}}\int _{0}^{\infty }e^{-{\frac {\xi ^{2}}{2}}}d\xi ={\sqrt {\frac {2\pi }{\lambda }}}(-\mu _{j})^{-{\frac {1}{2}}}.$
Equation (8) can also be written as
$I(\lambda )=\left({\frac {2\pi }{\lambda }}\right)^{\frac {n}{2}}e^{\lambda S(x^{0})}\left(\det(-S_{xx}''(x^{0}))\right)^{-{\frac {1}{2}}}\left(f(x^{0})+O\left(\lambda ^{-1}\right)\right),$
(13)
where the branch of
${\sqrt {\det \left(-S_{xx}''(x^{0})\right)}}$
is selected as follows
${\begin{aligned}\left(\det \left(-S_{xx}''(x^{0})\right)\right)^{-{\frac {1}{2}}}&=\exp \left(-i{\text{ Ind}}\left(-S_{xx}''(x^{0})\right)\right)\prod _{j=1}^{n}\left|\mu _{j}\right|^{-{\frac {1}{2}}},\\{\text{Ind}}\left(-S_{xx}''(x^{0})\right)&={\tfrac {1}{2}}\sum _{j=1}^{n}\arg(-\mu _{j}),&&|\arg(-\mu _{j})|<{\tfrac {\pi }{2}}.\end{aligned}}$
Consider important special cases:
• If S(x) is real valued for real x and x0 in Rn (aka, the multidimensional Laplace method), then[8]
${\text{Ind}}\left(-S_{xx}''(x^{0})\right)=0.$
• If S(x) is purely imaginary for real x (i.e., $\Re (S(x))=0$ for all x in Rn) and x0 in Rn (aka, the multidimensional stationary phase method),[9] then[10]
${\text{Ind}}\left(-S_{xx}''(x^{0})\right)={\frac {\pi }{4}}{\text{sign }}S_{xx}''(x_{0}),$
where ${\text{sign }}S_{xx}''(x_{0})$ denotes the signature of the matrix $S_{xx}''(x_{0})$, which equals the number of negative eigenvalues minus the number of positive ones. It is noteworthy that in applications of the stationary phase method to the multidimensional WKB approximation in quantum mechanics (as well as in optics), Ind is related to the Maslov index; see, e.g., Chaichian & Demichev (2001) and Schulman (2005).
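A one-dimensional instance of the purely imaginary case can be checked in closed form: for f(x) = e^{−x²} and S(x) = ix²/2 the integral is Gaussian, $\int _{-\infty }^{\infty }f(x)e^{\lambda S(x)}dx={\sqrt {\pi /(1-i\lambda /2)}}$, while the stationary point x0 = 0 inserted into (13) gives ${\sqrt {2\pi /\lambda }}\,e^{i\pi /4}$ (the factor e^{iπ/4} is the usual one-dimensional stationary-phase phase for a positive second derivative of the real phase). A minimal numerical sketch:

```python
# Sketch: 1-D stationary-phase example, f(x) = exp(-x^2), S(x) = i x^2 / 2.
# Exact:      sqrt(pi / (1 - i*lam/2))
# Asymptotic: sqrt(2*pi/lam) * exp(i*pi/4)
import numpy as np

for lam in (10.0, 100.0, 1000.0):
    exact = np.sqrt(np.pi / (1.0 - 0.5j * lam))
    asym  = np.sqrt(2.0 * np.pi / lam) * np.exp(0.25j * np.pi)
    print(f"lam={lam:7.1f}  |I(lam)|={abs(exact):.6e}  "
          f"rel.err={abs(asym - exact) / abs(exact):.2e}")
```

The relative error decays like 1/λ, in line with the O(λ⁻¹) term in (13).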
The case of multiple non-degenerate saddle points
If the function S(x) has multiple isolated non-degenerate saddle points, i.e.,
$\nabla S\left(x^{(k)}\right)=0,\quad \det S''_{xx}\left(x^{(k)}\right)\neq 0,\quad x^{(k)}\in \Omega _{x}^{(k)},$
where
$\left\{\Omega _{x}^{(k)}\right\}_{k=1}^{K}$
is an open cover of Ωx, then the calculation of the integral asymptotic is reduced to the case of a single saddle point by employing the partition of unity. The partition of unity allows us to construct a set of continuous functions ρk(x) : Ωx → [0, 1], 1 ≤ k ≤ K, such that
${\begin{aligned}\sum _{k=1}^{K}\rho _{k}(x)&=1,&&\forall x\in \Omega _{x},\\\rho _{k}(x)&=0&&\forall x\in \Omega _{x}\setminus \Omega _{x}^{(k)}.\end{aligned}}$
Whence,
$\int _{I_{x}\subset \Omega _{x}}f(x)e^{\lambda S(x)}dx\equiv \sum _{k=1}^{K}\int _{I_{x}\subset \Omega _{x}}\rho _{k}(x)f(x)e^{\lambda S(x)}dx.$
Therefore as λ → ∞ we have:
$\sum _{k=1}^{K}\int _{{\text{a neighborhood of }}x^{(k)}}f(x)e^{\lambda S(x)}dx=\left({\frac {2\pi }{\lambda }}\right)^{\frac {n}{2}}\sum _{k=1}^{K}e^{\lambda S\left(x^{(k)}\right)}\left(\det \left(-S_{xx}''\left(x^{(k)}\right)\right)\right)^{-{\frac {1}{2}}}f\left(x^{(k)}\right),$
where equation (13) was utilized at the last stage, and the pre-exponential function f (x) must at least be continuous.
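A simple one-dimensional illustration with two contributing saddle points is $\int _{-\infty }^{\infty }e^{-\lambda (x^{2}-1)^{2}}dx$: the points x = ±1 satisfy S′ = 0 with S(±1) = 0 and S″(±1) = −8, so each contributes ${\sqrt {2\pi /(8\lambda )}}$ and the sum over the two saddles is ${\sqrt {\pi /\lambda }}$ (the third critical point x = 0 is exponentially subdominant since S(0) = −1). A minimal sketch comparing this with direct quadrature:

```python
# Sketch: two non-degenerate saddle points, S(x) = -(x^2 - 1)^2.
import numpy as np
from scipy.integrate import quad

S = lambda x: -(x**2 - 1.0)**2

for lam in (10.0, 50.0, 200.0):
    numeric = quad(lambda x: np.exp(lam * S(x)), -5.0, 5.0, points=(-1.0, 1.0))[0]
    saddle_sum = 2.0 * np.sqrt(2.0 * np.pi / (8.0 * lam))   # = sqrt(pi/lam)
    print(f"lam={lam:6.1f}  quad={numeric:.6e}  saddle sum={saddle_sum:.6e}  "
          f"rel.err={abs(saddle_sum - numeric) / numeric:.2e}")
```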
The other cases
When ∇S(z0) = 0 and $\det S''_{zz}(z^{0})=0$, the point z0 ∈ Cn is called a degenerate saddle point of a function S(z).
Calculating the asymptotic of
$\int f(x)e^{\lambda S(x)}dx,$
when λ → ∞, f (x) is continuous, and S(z) has a degenerate saddle point, is a very rich problem, whose solution relies heavily on catastrophe theory. Here, catastrophe theory replaces the Morse lemma, which is valid only in the non-degenerate case, and transforms the function S(z) into one of a multitude of canonical representations. For further details see, e.g., Poston & Stewart (1978) and Fedoryuk (1987).
Integrals with degenerate saddle points naturally appear in many applications including optical caustics and the multidimensional WKB approximation in quantum mechanics.
The other cases such as, e.g., f (x) and/or S(x) are discontinuous or when an extremum of S(x) lies at the integration region's boundary, require special care (see, e.g., Fedoryuk (1987) and Wong (1989)).
Extensions and generalizations
An extension of the steepest descent method is the so-called nonlinear stationary phase/steepest descent method. Here, instead of integrals, one needs to evaluate asymptotically solutions of Riemann–Hilbert factorization problems.
Given a contour C in the complex sphere, a function f defined on that contour and a special point, say infinity, one seeks a function M holomorphic away from the contour C, with prescribed jump across C, and with a given normalization at infinity. If f and hence M are matrices rather than scalars this is a problem that in general does not admit an explicit solution.
An asymptotic evaluation is then possible along the lines of the linear stationary phase/steepest descent method. The idea is to reduce asymptotically the solution of the given Riemann–Hilbert problem to that of a simpler, explicitly solvable, Riemann–Hilbert problem. Cauchy's theorem is used to justify deformations of the jump contour.
The nonlinear stationary phase was introduced by Deift and Zhou in 1993, based on earlier work of the Russian mathematician Alexander Its. A (properly speaking) nonlinear steepest descent method was introduced by Kamvissis, K. McLaughlin and P. Miller in 2003, based on previous work of Lax, Levermore, Deift, Venakides and Zhou. As in the linear case, steepest descent contours solve a min-max problem. In the nonlinear case they turn out to be "S-curves" (defined in a different context back in the 80s by Stahl, Gonchar and Rakhmanov).
The nonlinear stationary phase/steepest descent method has applications to the theory of soliton equations and integrable models, random matrices and combinatorics.
Another extension is the Method of Chester–Friedman–Ursell for coalescing saddle points and uniform asymptotic extensions.
See also
• Pearcey integral
• Stationary phase approximation
• Laplace's method
Notes
1. Bender, Carl M.; Orszag, Steven A. (1999). Advanced Mathematical Methods for Scientists and Engineers I. New York, NY: Springer New York. doi:10.1007/978-1-4757-3069-2. ISBN 978-1-4419-3187-0.
2. A modified version of Lemma 2.1.1 on page 56 in Fedoryuk (1987).
3. Lemma 3.3.2 on page 113 in Fedoryuk (1987)
4. Poston & Stewart (1978), page 54; see also the comment on page 479 in Wong (1989).
5. Fedoryuk (1987), pages 417-420.
6. This conclusion follows from a comparison between the final asymptotic for I0(λ), given by equation (8), and a simple estimate for the discarded integral I1(λ).
7. This is justified by comparing the integral asymptotic over Rn [see equation (8)] with a simple estimate for the altered part.
8. See equation (4.4.9) on page 125 in Fedoryuk (1987)
9. Rigorously speaking, this case cannot be inferred from equation (8) because the second assumption, utilized in the derivation, is violated. To include the discussed case of a purely imaginary phase function, condition (9) should be replaced by $\left|\arg {\sqrt {-\mu _{j}}}\right|\leqslant {\tfrac {\pi }{4}}.$
10. See equation (2.2.6') on page 186 in Fedoryuk (1987)
References
• Chaichian, M.; Demichev, A. (2001), Path Integrals in Physics Volume 1: Stochastic Process and Quantum Mechanics, Taylor & Francis, p. 174, ISBN 075030801X
• Debye, P. (1909), "Näherungsformeln für die Zylinderfunktionen für große Werte des Arguments und unbeschränkt veränderliche Werte des Index", Mathematische Annalen, 67 (4): 535–558, doi:10.1007/BF01450097, S2CID 122219667 English translation in Debye, Peter J. W. (1954), The collected papers of Peter J. W. Debye, Interscience Publishers, Inc., New York, ISBN 978-0-918024-58-9, MR 0063975
• Deift, P.; Zhou, X. (1993), "A steepest descent method for oscillatory Riemann-Hilbert problems. Asymptotics for the MKdV equation", Ann. of Math., The Annals of Mathematics, Vol. 137, No. 2, vol. 137, no. 2, pp. 295–368, arXiv:math/9201261, doi:10.2307/2946540, JSTOR 2946540, S2CID 12699956.
• Erdelyi, A. (1956), Asymptotic Expansions, Dover.
• Fedoryuk, M. V. (2001) [1994], "Saddle point method", Encyclopedia of Mathematics, EMS Press.
• Fedoryuk, M. V. (1987), Asymptotic: Integrals and Series, Nauka, Moscow [in Russian].
• Kamvissis, S.; McLaughlin, K. T.-R.; Miller, P. (2003), "Semiclassical Soliton Ensembles for the Focusing Nonlinear Schrödinger Equation", Annals of Mathematics Studies, Princeton University Press, vol. 154.
• Riemann, B. (1863), Sullo svolgimento del quoziente di due serie ipergeometriche in frazione continua infinita (Unpublished note, reproduced in Riemann's collected papers.)
• Siegel, C. L. (1932), "Über Riemanns Nachlaß zur analytischen Zahlentheorie", Quellen und Studien zur Geschichte der Mathematik, Astronomie und Physik, 2: 45–80 Reprinted in Gesammelte Abhandlungen, Vol. 1. Berlin: Springer-Verlag, 1966.
• Translated in Deift, Percy; Zhou, Xin (2018), "On Riemanns Nachlass for Analytic Number Theory: A translation of Siegel's Uber", arXiv:1810.05198 [math.HO].
• Poston, T.; Stewart, I. (1978), Catastrophe Theory and Its Applications, Pitman.
• Schulman, L. S. (2005), "Ch. 17: The Phase of the Semiclassical Amplitude", Techniques and Applications of Path Integration, Dover, ISBN 0486445283
• Wong, R. (1989), Asymptotic approximations of integrals, Academic Press.
\begin{document}
\title{An Entropy Stable $h/p$ Non-Conforming Discontinuous Galerkin Method with the Summation-by-Parts Property}
\titlerunning{An Entropy Stable $h/p$ Non-Conforming DG Method with the SBP Property}
\author{Lucas Friedrich \and Andrew R. Winters \and David C.~Del Rey Fern\'{a}ndez \and Gregor J. Gassner \and Matteo Parsani \and Mark H. Carpenter}
\institute{Lucas Friedrich (\email{[email protected]}) \and Andrew R. Winters \and Gregor J. Gassner \at Mathematical Institute, University of Cologne, Cologne, Germany \\ David C.~Del Rey Fern\'{a}ndez \at National Institute of Aerospace and Computational AeroSciences Branch, NASA Langley Research Center, Hampton, VA, USA \\ Mark H. Carpenter \at Computational AeroSciences Branch, NASA Langley Research Center, Hampton, VA, USA \\ Matteo Parsani \at King Abdullah University of Science and Technology (KAUST), Computer Electrical and Mathematical Science and Engineering Division (CEMSE), Extreme Computing Research Center (ECRC), Thuwal, Saudi Arabia}
\date{Received: date / Accepted: date}
\maketitle
\begin{abstract} This work presents an entropy stable discontinuous Galerkin (DG) spectral element approximation for systems of non-linear conservation laws with general geometric $(h)$ and polynomial order $(p)$ non-conforming rectangular meshes. The crux of the proofs presented is that the nodal DG method is constructed with the collocated Legendre-Gauss-Lobatto nodes. This choice ensures that the derivative/mass matrix pair is a summation-by-parts (SBP) operator such that entropy stability proofs from the continuous analysis are discretely mimicked. Special attention is given to the coupling between non-conforming elements as we demonstrate that the standard mortar approach for DG methods does not guarantee entropy stability for non-linear problems, which can lead to instabilities. As such, we describe a precise procedure and modify the mortar method to guarantee entropy stability for general non-linear hyperbolic systems on $h/p$ non-conforming meshes. We verify the high-order accuracy and the entropy conservation/stability of fully non-conforming approximation with numerical examples. \end{abstract}
\keywords{Summation-by-Parts \and Discontinuous Galerkin \and Entropy Conservation \and Entropy Stability \and $h/p$ Non-Conforming Mesh \and Non-Linear Hyperbolic Conservation Laws}
\section{Introduction}
The non-conforming discontinuous Galerkin spectral element method (DGSEM), with respect to either mesh refinement introducing hanging nodes ($h$), varying the polynomial order ($p$) across elements or both ($h/p$), is attractive for problems with strong varying feature sizes across the computational domain because the number of degrees of freedom can be significantly reduced. Past work has demonstrated that the mortar method \cite{Kopriva1996b,Kopriva2002} is a projection based approach to construct the numerical flux at non-conforming element interfaces. The mortar approach retains high-order accuracy as well as the desirable excellent parallel computing properties of the DGSEM \cite{Tan2012,Hindenlang201286}. However, we are in particular interested in building a high order DG scheme with the aforementioned positive properties that is provably entropy stable for general non-linear problems. That is, the non-conforming DGSEM should satisfy the second law of thermodynamics discretely. Our interest is twofold: \begin{enumerate} \item The numerical approximation will obey one of the most fundamental physical laws. \item For under-resolved flow configurations, like turbulence, entropy stable approximations have been shown to be robust, e.g. \cite{Bohm2017,Gassner2016,Yee2017,Winters2017}. \end{enumerate} The subject of non-conforming approximations is natural in the context of applications that contain a wide variety of spatial scales. This is because non-conforming methods can focus the degrees of freedom in a discretization where they are needed. There is some work available for entropy stable $p$ non-conforming DG methods applied to the compressible Navier-Stokes equations, e.g. Parsani et al. \cite{Parsani2016,Parsani2015b} or Carpenter et al. \cite{Carpenter2016}.
This work presents an extension of entropy stable non-conforming DG methods to include the hanging nodes ($h$) and the combination of varying polynomials and hanging mesh nodes ($h/p$) for general non-linear systems of conservation laws. We demonstrate that the derivative matrix in the DG context must satisfy the summation-by-parts (SBP) property as well as how to modify the mortar method \cite{Kopriva1996b} to guarantee high-order accuracy and entropy stability on rectangular meshes. As the algorithm of the method is still similar to the mortar approach, parallel scaling efficiency is not influenced by the modifications.
We begin with a short overview of the different DG approaches on rectangular meshes. First, we provide a background of the non-linear entropy stable DGSEM on conforming quadrilateral meshes. We then introduce the popular mortar approach in the nodal DG context. However, we demonstrate that this well-known non-conforming coupling is insufficient to guarantee entropy stability for non-linear partial differential equations (PDEs). The main result of this work is to marry these two powerful approaches, i.e., entropy stability of conforming DG methods and non-conforming coupling, to create a novel, entropy stable, high-order, non-conforming DGSEM for non-linear systems of conservation laws.
\subsection{Entropy Stable Conforming DGSEM}
We consider systems of non-linear hyperbolic conservation laws in a two dimensional spatial domain $\Omega\subset\mathbb{R}^2$ with $t\in\mathbb{R}^+$ \begin{equation}\label{eq:2DconsLaw} \bm{u}_t + \bm{f}_x\!\left(\bm{u}\right) + \bm{g}_y\!\left(\bm{u}\right) = \bm{0}, \end{equation} with suitable initial and boundary conditions. The extension to a three dimensional spatial domain follows immediately. Here, $\bm{u}$ is the vector of conserved variables and $\bm{f},\bm{g}$ are the non-linear flux vectors. Examples of \eqref{eq:2DconsLaw} are numerous, including, e.g., the shallow water equations and the compressible Euler equations. The entropy of a non-linear hyperbolic system is an auxiliary conservation law for smooth solutions (and an inequality for discontinuous solutions); see \cite{Tadmor1987_2,Tadmor2003} for details. Given a strongly convex entropy function, $S=S(\bm u)$, there exists a set of entropy variables defined as \begin{equation}\label{eq:entVars} \bm{v} = \frac{\partial S}{\partial \bm{u}}. \end{equation} Contracting the system of conservation laws \eqref{eq:2DconsLaw} from the left by the new set of variables \eqref{eq:entVars} yields a scalar conservation law for smooth solutions \begin{equation}\label{eq:newSys} \bm{v}^T\left(\bm{u}_t+ \bm{f}_x(\bm{u})+ \bm{g}_y(\bm{u})\right)= S_t + F_x + G_y= 0, \end{equation} provided certain compatibility conditions are satisfied between the physical fluxes $\bm{f}, \bm{g}$ and the entropy fluxes $F,G$ \cite{Tadmor1987_2,Tadmor2003}. In the presence of discontinuities the mathematical entropy decays \cite{Tadmor1987_2,Tadmor2003} and satisfies the inequality \begin{equation}\label{eq:newIneq} S_t + F_x + G_y \leq 0, \end{equation} in the sense of weak solutions to the non-linear PDE \cite{evans2010,Tadmor2003}. The final goal in this subsection is to determine a high-order DGSEM that is entropy stable on conforming meshes.
We first provide a brief overview for the derivation of the standard nodal DGSEM on rectangular grids. Complete details can be found in the book of Kopriva \cite{Kopriva:2009nx}. The DGSEM is derived from the weak form of the conservation laws \eqref{eq:2DconsLaw}. Thus, we multiply by an arbitrary ${L}_2(\Omega)$ test function $\varphi$ and integrate over the domain \begin{equation}\label{eq:weakForm} \int\limits_{\Omega}\left(\bm{u}_t+\bm{f}_x+\bm{g}_y\right)\varphi\,\mathrm{d}x\mathrm{d}y = \bm{0}, \end{equation} where, for convenience, we suppress the $\bm{u}$ dependence of the non-linear flux vectors. We subdivide the domain $\Omega$ into $K$ non-overlapping, geometrically conforming rectangular elements \begin{equation} E_k = \left[x_{k,1},x_{k,2}\right]\times[y_{k,1},y_{k,2}],\quad k = 1,\ldots,K. \end{equation} This divides the integral over the whole domain into the sum of the integrals over the elements. So, each element contributes \begin{equation}\label{eq:weakForm2} \int\limits_{E_k}\left(\bm{u}_t+\bm{f}_x+\bm{g}_y\right)\varphi\,\mathrm{d}x\mathrm{d}y = \bm{0},\quad k = 1,\ldots K, \end{equation} to the total integral. Next, we create a scaling transformation between the reference element $E_0=[-1,1]^2$ and each element, $E_k$. For rectangular meshes we create mappings $(X_k,Y_k):E_0 \rightarrow E_k$ such that $\left(X_k(\xi),Y_k(\eta)\right) = (x,y)$ are defined as \begin{equation}
X_k(\xi) = x_{k,1} + \frac{\xi +1}{2}\Delta x_k,\quad Y_k(\eta) = y_{k,1} + \frac{\eta +1}{2}\Delta y_k, \label{eq:mapping} \end{equation} for $k=1,\ldots,K$ where $\Delta x_k=\left(x_{k,2}-x_{k,1}\right)$ and $\Delta y_k = \left(y_{k,2}-y_{k,1}\right)$. Under the transformation \eqref{eq:mapping} the conservation law in physical coordinates \eqref{eq:2DconsLaw} becomes a conservation law in reference coordinates \cite{Kopriva:2009nx} \begin{equation} \bm{u}_t + \frac{1}{J}\left[\tilde{\bm{f}}_{\xi} + \tilde{\bm{g}}_{\eta}\right] = \bm{0}, \end{equation} where \begin{equation} J=\frac{\Delta x_k\Delta y_k}{4},\quad\tilde{\bm{f}} = \frac{\Delta y_k}{2}\bm{f},\quad\tilde{\bm{g}} = \frac{\Delta x_k}{2}\bm{g}, \end{equation} and $k=1,\ldots,K$.
We select the test function $\varphi$ to be a piecewise polynomial of degree $N$ in each spatial direction \begin{equation}\label{eq:testFunction} \varphi^k = \sum_{i=0}^N\sum_{j=0}^N\varphi_{ij}^k\ell_i(\xi)\ell_j(\eta), \end{equation} on each spectral element $E_k$, but do not enforce continuity at the element boundaries. The interpolating Lagrange basis functions are defined by \begin{equation} \ell_i(\xi) = \prod_{\stackrel{j = 0}{j \neq i}}^N \frac{\xi-\xi_j}{\xi_i-\xi_j} \quad\text{for}\quad i=0,\ldots,N, \label{eq:Lagrange} \end{equation} with a similar definition in the $\eta$ direction. The values of $\varphi^k_{ij}$ on each element $E_k$ are arbitrary and linearly independent, therefore the formulation \eqref{eq:weakForm2} is \begin{equation} \int\limits_{E_0}\left(J\bm{u}_t+\tilde{\bm{f}}_{\xi}+\tilde{\bm{g}}_{\eta}\right)\ell_i(\xi)\ell_j(\eta)\,\mathrm{d}\xi\mathrm{d}\eta = \bm{0}, \end{equation} where $i,j=0,\ldots,N$.
We approximate the conservative vector $\bm{u}$ and the contravariant fluxes $\tilde{\bm{f}}$, $\tilde{\bm{g}}$ with the same polynomial interpolants of degree $N$ in each spatial direction written in Lagrange form, e.g., \begin{equation} \begin{aligned}
\bm{u}(x,y,t)|_{E_k} &= \bm{u}(\xi,\eta,t) \approx \sum_{i,j = 0}^N \bm{U}_{ij} \ell_i(\xi) \ell_j(\eta)\equiv \bm{U}, \\
\tilde{\bm{f}}\left(\bm{u}(x,y,t)\right)|_{E_k} &= \tilde{\bm{f}}(\xi,\eta,t) \approx \sum_{i,j = 0}^N \tilde{\bm{F}}_{ij} \ell_i(\xi) \ell_j(\eta)\equiv \tilde{\bm{F}}. \label{eq:DG-approx} \end{aligned} \end{equation} Any integrals present in the DG approximation are approximated with a high-order Legendre-Gauss-Lobatto (LGL) quadrature rule, e.g., \begin{equation}\label{eq:quadrature} \int\limits_{E_0} J\bm{U}_t\ell_i(\xi)\ell_j(\eta)\,\mathrm{d}\xi\mathrm{d}\eta\approx J\!\!\!\sum_{n,m=0}^N\left(\sum_{p,q=0}^N\left(\bm{U}_t\right)_{pq}\ell_p(\xi_n)\ell_q(\eta_m)\right)\ell_i(\xi_n)\ell_j(\eta_m)\omega_n \omega_m = J\left(\vec{U}_t\right)_{ij}\omega_i \omega_j, \end{equation} where $\left\{\xi_i\right\}_{i=0}^N, \left\{\eta_j\right\}_{j=0}^N$ are the LGL quadrature nodes and $\left\{\omega_i\right\}_{i=0}^N,\left\{\omega_j\right\}_{j=0}^N$ are the LGL quadrature weights. Further, we \textit{collocate} the interpolation and quadrature nodes which enables us to exploit that the Lagrange basis functions \eqref{eq:Lagrange} are discretely orthogonal and satisfy the Kronecker delta property, i.e., $\ell_j(\xi_i) = \delta_{ij}$ with $\delta_{ij} = 1$ for $i=j$ and $\delta_{ij}=0$ for $i\neq j$ to simplify \eqref{eq:quadrature}.
For spectral element methods where the nodes include the boundary of the reference space ($\xi_0=\eta_0=-1$ and $\xi_N=\eta_N=1$), the discrete derivative matrix $\mat D$ and the discrete mass matrix $\mat M$ satisfy the summation-by-parts (SBP) property \cite{Carpenter1996} \begin{equation} \mat M \mat D + (\mat M \mat D)^T = \mat Q+\mat Q^T = \mat B := \text{diag} (-1,0,\ldots,0,1). \label{eq:SBP} \end{equation} By considering LGL quadrature, we obtain a diagonal mass matrix \begin{equation} \mat M=\text{diag} (\omega_0, \ldots , \omega_N), \label{Mmatrix} \end{equation} with positive weights for any polynomial order \cite{Gassner2013}. Note, that the mass matrix is constructed by performing mass lumping. We also define the SBP matrix $\mat Q$ and the boundary matrix $\mat B$ in \eqref{eq:SBP}. The SBP property \eqref{eq:SBP} gives the relation \begin{equation}\label{eq:otherSBP} \mat D = \mat M^{-1}\mat B - \mat M^{-1}\mat D^T\mat M, \end{equation} where we use the fact that the mass matrix $\mat M$ is positive definite and invertible.
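The SBP property \eqref{eq:SBP} is straightforward to verify numerically. The following minimal sketch (assuming Python with NumPy; the derivative matrix is assembled with the standard barycentric formula and is included for illustration only) checks that $\mat Q+\mat Q^T=\mat B$ holds to machine precision for the LGL operators with $N=4$:
\begin{verbatim}
# Sketch: build LGL nodes/weights and the collocation derivative matrix D
# for polynomial degree N, then check the SBP property M D + (M D)^T = B.
import numpy as np

def lgl(N):
    # Interior LGL nodes are the roots of P_N'(x); the endpoints are -1 and 1.
    PN = np.polynomial.legendre.Legendre.basis(N)
    x = np.sort(np.concatenate(([-1.0, 1.0], PN.deriv().roots())))
    w = 2.0 / (N * (N + 1) * PN(x) ** 2)        # LGL quadrature weights
    return x, w

def diff_matrix(x):
    # Barycentric form of D_ij = l_j'(x_i) for the Lagrange basis on nodes x.
    n = len(x)
    lam = np.array([1.0 / np.prod(x[i] - np.delete(x, i)) for i in range(n)])
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                D[i, j] = (lam[j] / lam[i]) / (x[i] - x[j])
        D[i, i] = -np.sum(D[i, :])              # negative sum trick
    return D

N = 4
x, w = lgl(N)
M, D = np.diag(w), diff_matrix(x)
Q = M @ D
B = np.zeros((N + 1, N + 1)); B[0, 0], B[-1, -1] = -1.0, 1.0
print(np.max(np.abs(Q + Q.T - B)))              # ~1e-16: SBP property holds
\end{verbatim}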
By rewriting the polynomial derivative matrix as \eqref{eq:otherSBP} we can move discrete derivatives off the contravariant fluxes and onto the test function. This generates surface and volume contributions in the approximation. To resolve the discontinuities that naturally occur at element interfaces in DG methods we introduce the numerical flux functions $\tilde{\bm{F}}^*,\tilde{\bm{G}}^*$. We apply the SBP property \eqref{eq:otherSBP} again to move derivatives off the test function back onto the contravariant fluxes. This produces the strong form of the nodal DGSEM \begin{equation}\label{eq:standardDG} \begin{aligned} J\left(\bm{U}_t\right)_{ij} &+ \frac{1}{\mat M_{ii}}\left(\delta_{iN}\left[\tilde{\bm{F}}^{*}(1,\eta_j;\hat{n}) - \tilde{\bm{F}}_{Nj}\right] - \delta_{i0}\left[\tilde{\bm{F}}^{*}(-1,\eta_j;\hat{n}) - \tilde{\bm{F}}_{0j}\right]\right)+\sum_{m=0}^N \mat D_{im}\tilde{\bm{F}}_{mj}\\ &+ \frac{1}{\mat M_{jj}}\left(\delta_{jN}\left[\tilde{\bm{G}}^{*}(\xi_i,1;\hat{n}) - \tilde{\bm{G}}_{iN}\right] - \delta_{j0}\left[\tilde{\bm{G}}^{*}(\xi_i,-1;\hat{n}) - \tilde{\bm{G}}_{i0}\right]\right)+\sum_{m=0}^N \mat D_{jm}\tilde{\bm{G}}_{im}=\bm{0},\\ \end{aligned} \end{equation} for each LGL node with $i,j=0,\ldots,N$. We introduce notation in \eqref{eq:standardDG} for the evaluation of the contravariant numerical flux functions in the normal direction along each edge of the reference element at the relevant LGL nodes, e.g. $\tilde{\bm{F}}^{*}(1,\eta_j;\hat{n})$ for $j=0,\ldots,N$. Note that selecting the test function to be the tensor product basis \eqref{eq:testFunction} decouples the derivatives in each spatial direction.
Next, we extend the standard strong form DGSEM \eqref{eq:standardDG} into a split form DGSEM \cite{Carpenter2014,Gassner2016} framework. Split formulations of the DG approximation offer increased robustness, e.g. \cite{Gassner2016,Yee2017}, as well as increased flexibility in the DGSEM to satisfy auxiliary properties such as entropy conservation or entropy stability \cite{Carpenter2014,Gassner2016,Ray2017}. To create a split form DGSEM we rewrite the contributions of the volume integral, for example in the $\xi-$direction, by \begin{equation}\label{eq:newVolInt} \sum_{m=0}^N \mat D_{im}\tilde{\bm{F}}_{mj} \approx 2\sum_{m=0}^N \mat D_{im}\tilde{\bm{F}}^{\#}\left(\bm{U}_{ij},\bm{U}_{mj}\right), \end{equation} for $i,j=0,\ldots,N$ where we introduce a two-point, symmetric numerical volume flux $\tilde{\bm{F}}^{\#}$ \cite{Gassner2016}. This step creates a baseline split form DGSEM \begin{equation}\label{eq:splitDG} \resizebox{\hsize}{!}{$ \begin{aligned} J\left(\bm{U}_t\right)_{ij} &+ \frac{1}{\mat M_{ii}}\left(\delta_{iN}\left[\tilde{\bm{F}}^{*}(1,\eta_j;\hat{n}) - \tilde{\bm{F}}_{Nj}\right] - \delta_{i0}\left[\tilde{\bm{F}}^{*}(-1,\eta_j;\hat{n}) - \tilde{\bm{F}}_{0j}\right]\right)+2\sum_{m=0}^N \mat D_{im}\tilde{\bm{F}}^{\#}\left(\bm{U}_{ij},\bm{U}_{mj}\right)\\ &+ \frac{1}{\mat M_{jj}}\left(\delta_{jN}\left[\tilde{\bm{G}}^{*}(\xi_i,1;\hat{n}) - \tilde{\bm{G}}_{iN}\right] - \delta_{j0}\left[\tilde{\bm{G}}^{*}(\xi_i,-1;\hat{n}) - \tilde{\bm{G}}_{i0}\right]\right)+2\sum_{m=0}^N \mat D_{jm}\tilde{\bm{G}}^{\#}\left(\bm{U}_{ij},\bm{U}_{im}\right)=\bm{0},\\ \end{aligned}$} \end{equation} that can be used to create an entropy conservative/stable approximation. All that remains is the precise definition of the numerical surface and volume flux functions.
The construction of a high-order entropy conserving/stable DGSEM relies on the fundamental finite volume framework developed by Tadmor \cite{tadmor:1984,Tadmor1987_2}. An entropy conservative (EC) numerical flux function in the $\xi-$direction, $f^{\mathrm{EC}}$, is derived by satisfying the condition \cite{Tadmor2003} \begin{equation}\label{eq:entCondition} \jump{\bm{v}}^T\bm{f}^{\mathrm{EC}} = \jump{\Psi^f}, \end{equation} where $\bm{v}$ are the entropy variables \eqref{eq:entVars}, $\Psi^{f}$ is the entropy flux potential \begin{equation}\label{eq:entPotential} \Psi^f = \bm{v}\cdot\bm{f} - F, \end{equation} and \begin{equation}\label{jump} \jump{\cdot} = (\cdot)_R - (\cdot)_L, \end{equation} is the jump operator between a left and right state. Note that \eqref{eq:entCondition} is a single condition on the numerical flux vector $\bm{f}^{\mathrm{EC}}$, so there are many potential solutions for the entropy conserving flux vector. However, we reduce the number of possible solutions with the additional requirement that the numerical flux must be consistent, i.e. $\bm{f}^{\mathrm{EC}}(\bm{u},\bm{u}) = \bm{f}(\bm{u})$. Many such entropy conservative numerical flux functions are available for systems of hyperbolic conservation laws, e.g. the Euler equations \cite{Chandra2013,Ismail2009}. The entropy conservative flux function creates a baseline scheme to which dissipation can be added and guarantee discrete satisfaction of the entropy inequality (entropy stability), e.g. \cite{Chandra2013,Fjordholm2011,Wintermeyer2016}.
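As a concrete scalar instance of \eqref{eq:entCondition}, consider Burgers' equation with the entropy $S=u^2/2$, entropy variable $v=u$, flux $f=u^2/2$, entropy flux $F=u^3/3$ and entropy flux potential $\Psi^f=u^3/6$. The flux $f^{\mathrm{EC}}=\left(u_L^2+u_Lu_R+u_R^2\right)/6$, which reappears in the discussion of the mortar method below, satisfies both the jump condition \eqref{eq:entCondition} and consistency. This can be verified with a short symbolic sketch (assuming Python with SymPy; for illustration only):
\begin{verbatim}
# Sketch: verify that the Burgers entropy conservative flux
#   fEC = (uL**2 + uL*uR + uR**2)/6
# satisfies [[v]] fEC = [[Psi]] and is consistent with f(u) = u**2/2.
import sympy as sp

uL, uR = sp.symbols('uL uR')
f   = lambda u: u**2 / 2                 # physical flux
v   = lambda u: u                        # entropy variable for S = u**2/2
Psi = lambda u: v(u)*f(u) - u**3/3       # entropy flux potential (F = u**3/3)

fEC = (uL**2 + uL*uR + uR**2) / 6
print(sp.simplify((v(uR) - v(uL))*fEC - (Psi(uR) - Psi(uL))))   # 0: jump condition
print(sp.simplify(fEC.subs(uR, uL) - f(uL)))                    # 0: consistency
\end{verbatim}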
Remarkably, Fisher et al. \cite{Fisher2013} and Fisher and Carpenter \cite{Fisher2013b} demonstrated that selecting an entropy conservative finite volume flux for the numerical surface and volume fluxes in a high-order SBP discretization is enough to guarantee that the property of entropy conservation remains. As mentioned earlier, the DGSEM constructed on the LGL nodes is an SBP method. Entropy stability of the high-order DG approximation is guaranteed by adding proper numerical dissipation in the numerical surface fluxes, similar to the finite volume case. Thus, the final form of the entropy conservative DGSEM on conforming meshes is \begin{equation}\label{eq:ECDG} \resizebox{\hsize}{!}{$ \begin{aligned} J\left(\bm{U}_t\right)_{ij} &+ \frac{1}{\mat M_{ii}}\left(\delta_{iN}\left[\tilde{\bm{F}}^{\mathrm{EC}}(1,\eta_j;\hat{n}) - \tilde{\bm{F}}_{Nj}\right] - \delta_{i0}\left[\tilde{\bm{F}}^{\mathrm{EC}}(-1,\eta_j;\hat{n}) - \tilde{\bm{F}}_{0j}\right]\right)+2\sum_{m=0}^N \mat D_{im}\tilde{\bm{F}}^{\mathrm{EC}}\left(\bm{U}_{ij},\bm{U}_{mj}\right)\\ &+ \frac{1}{\mat M_{jj}}\left(\delta_{jN}\left[\tilde{\bm{G}}^{\mathrm{EC}}(\xi_i,1;\hat{n}) - \tilde{\bm{G}}_{iN}\right] - \delta_{j0}\left[\tilde{\bm{G}}^{\mathrm{EC}}(\xi_i,-1;\hat{n}) - \tilde{\bm{G}}_{i0}\right]\right)+2\sum_{m=0}^N \mat D_{jm}\tilde{\bm{G}}^{\mathrm{EC}}\left(\bm{U}_{ij},\bm{U}_{im}\right)=\bm{0},\\ \end{aligned}$} \end{equation} where we have made the replacement of the numerical surface and volume fluxes to be a two-point, symmetric EC flux that satisfies \eqref{eq:entCondition}. \\
\begin{rem} We note that the entropy conservative DGSEM \eqref{eq:ECDG} is equivalent to an SBP finite difference method with boundary coupling through simultaneous approximation terms (SATs), e.g. \cite{Fisher2013b,Fisher2013}. \end{rem}
In summary, we demonstrated that special attention was required for the volume contribution in the nodal DGSEM to create a split form entropy conservative method. Additionally, the SBP property was necessary to apply previous results from Fisher et al. \cite{Fisher2013} and guarantee entropy conservation at high-order. For the conforming mesh case the surface contributions required little attention. We simply replaced the numerical surface flux with an appropriate EC flux from the finite volume community. However, we next consider non-conforming DG methods with the flexibility to have differing polynomial order or hanging nodes at element interfaces.
\subsection{Non-Conforming DGSEM}
We consider the standard DGSEM in strong form \eqref{eq:standardDG} to discuss the commonly used \textit{mortar method} for non-conforming high-order DG approximations \cite{Tan2012,Kopriva1996b}. The mortar method allows for the polynomial order to differ between elements (Fig. \ref{fig:NonConfMesh}(a)), sometimes called $p$ refinement or algebraic non-conforming, as well as meshes that contain hanging nodes (Fig. \ref{fig:NonConfMesh}(b)), sometimes called $h$ refinement or geometric non-conforming, or both for a fully $h/p$ non-conforming approach (Fig. \ref{fig:NonConfMesh}(c)). For ease of presentation we assume that the polynomial order within an element is the same in each spatial direction. Note, however, due to the tensor product decoupling of the approximation (e.g. \eqref{eq:ECDG}) the mortar method could allow the polynomial order to differ within an element in each direction $\xi$ and $\eta$ as well. \begin{figure}
\caption{Examples of simple meshes with (a) $p$ refinement (b) $h$ refinement or (c) $h/p$ refinement}
\label{fig:NonConfMesh}
\end{figure}
The key to the non-conforming spectral element approximation is how the numerical fluxes between neighbor interfaces are treated. In the conforming approximation of the previous section the interface points between two neighboring elements coincide while the numerical solution across the interface was discontinuous. This allowed for a straightforward definition of unique numerical surface fluxes to account for how information is transferred between neighbors. It is then possible to determine numerical surface fluxes that guaranteed entropy conservation/stability of the conforming approximation.
The only difference between the conforming and non-conforming approximations is precisely how the numerical surface fluxes are computed along the interfaces. In the non-conforming cases of $h/p$ refinement (Fig. \ref{fig:NonConfMesh}(a)-(c)), the interface nodes may not match. So, a point-by-point transfer of information cannot be made between an element and its neighbors. To remedy this the mortar method ``cements'' together the neighboring ``bricks'' by connecting them through an intermediate one-dimensional construct, denoted by $\Xi$, see Fig. \ref{fig:MortarIdea}(a)-(b). \begin{figure}
\caption{Diagram depicting communication of data to and from mortars between three non-conforming elements.}
\label{fig:MortarIdea}
\end{figure}
In this overview we only discuss the coupling of the $p$ refinement case (Fig. \ref{fig:NonConfMesh}(a)), but the process is similar for the $h$ refinement case and is nicely described by Kopriva \cite{Kopriva1996b}. Also, the extension to curvilinear elements is briefly outlined. We distinguish them as the polynomial order on the left, $N_L$, and right, $N_R$. The polynomial on the mortar is chosen to be $N_{\Xi} = \max(N_L,N_R)$ \cite{Kopriva1996b,Kopriva2002}. Without loss of generality we assume that $N_L<N_R$, as depicted in Fig. \ref{fig:NonConfMesh}(a), such that $N_{\Xi}=N_R$. The construction of the numerical flux at such a non-conforming interface follows three basic steps: \begin{enumerate} \item Because the polynomial order on the right ($R$) and the mortar match, we simply copy the data. From the left ($L$) element we use a discrete or exact $L_2$ projection to move the solution from the element onto the mortar $\Xi$. \item The node distributions on the mortar match and we compute the interface numerical flux similar to the conforming mesh case. \item Finally, we project the numerical flux from the mortar back to each of the elements. Again, the left element uses a discrete or exact $L_2$ projection and the right element simply copies the data. \end{enumerate} We collect these steps visually in Fig. \ref{fig:projections} and introduce the notation for the four projection operations to be $\mat{P}_{L2\Xi}$, $\mat{P}_{\Xi2L}$, $\mat{P}_{R2\Xi}$, $\mat{P}_{\Xi2R}$. For this example we note that the right to mortar and inverse projections are the appropriate sized identity matrix, i.e $\mat{P}_{R2\Xi} = \mat{P}_{\Xi2R} = \mat{I}_{N_R}$. We provide additional details in Appendix \ref{sec:App B} regarding the mortar method for $p$ non-conforming DG methods and clarify the difference between interpolation and projection operators. \begin{figure}
\caption{Schematic of mortar projections for the case of $p$ refinement.}
\label{fig:projections}
\end{figure}
\subsection{Interaction of the Standard Mortar Method with Entropy Conservative DGSEM}
With the machinery of the mortar method now in place to handle non-conforming interfaces we are equipped to revisit the discussion of the entropy conservative DGSEM. For linear problems, where entropy conservation becomes \textit{energy} conservation, it is known that the mortar method is sufficient to extend the energy conserving DG schemes to non-conforming meshes, e.g. \cite{Friedrich2016,Kozdon2016,Mattsson2010b}. This is because no non-linearities are present and there is no coupling of the left and right solution states in the central numerical flux. However, for non-linear problems we replace this simple central numerical flux with a more complicated entropy conservative numerical flux that features possible polynomial or rational non-linearities as well as strong cross coupling between the left and right solution states, e.g. \cite{Chandra2013,Fjordholm2011,Gassner2013}. This introduces complications when applying the standard mortar method to entropy conservative DG methods.
As a simple example, consider the Burgers' equation which is equipped with an entropy conservative numerical flux in the $\xi-$direction of the form \cite{Gassner2013} \begin{equation} F^{\mathrm{EC}} = \frac{1}{6}\left(U_L^2+U_LU_R+U_R^2\right). \end{equation} Continuing the assumption of $N_L<N_R$ from the previous subsection we find the numerical flux computed on the mortar is \begin{equation}\label{eq:BurgersFlux} F^{\mathrm{EC}}_{\Xi} = \frac{1}{6}\left[\left(\mat{P}_{L2\Xi} U_L\right)^2+\left(\mat{P}_{L2\Xi} U_L\right)U_R+U_R^2\right]. \end{equation} The back projections of the mortar numerical flux \eqref{eq:BurgersFlux} onto the left and right elements are \begin{equation} F^{\mathrm{EC}}_L = \mat{P}_{\Xi2L} F^{\mathrm{EC}}_{\Xi},\quad F^{\mathrm{EC}}_{R}=F^{\mathrm{EC}}_{\Xi}. \end{equation} However, it is clear that the projected numerical fluxes will exhibit unpredictable behavior with regards to entropy. For example, because the entropy conservative flux was derived for conforming meshes with point-to-point information transfer, it is not obvious how the operation to compute the square of the projection of $U_L$ and then $L_2$ project the numerical flux back to the left element will change the entropy.
The focus of this article is to remedy these issues and happily marry the entropy conservative DGSEM with an $h/p$ non-conforming mortar-type method. To achieve this goal requires careful consideration and construction of the projection operators to move solution information between non-conforming element neighbors. Our main results are presented in the next section. First, in Sec. \ref{sec:p}, we address the issues associated with $p$ refinement, similar to Carpenter et al. \cite{Carpenter2016}, but in the context of a split form DG framework. We build on the $p$ refinement result to construct projections that guarantee entropy conservation in the case of $h$ refinement in Sec. \ref{sec:h}. Then, Sec. \ref{sec:Dissipation} describes how additional dissipation can be included at non-conforming interfaces to guarantee entropy stability. Finally, we verify the theoretical derivations through a variety of numerical test cases in Sec. \ref{sec:numResults}.
\section{Entropy Stable $h/p$ Non-Conforming DGSEM}\label{sec:interfaces}
Our goal is to develop a high-order numerical approximation that conserves the primary quantities of interest (like mass) as well as obey the second law of thermodynamics. In the continuous analysis, neglecting boundary conditions, we know for general solutions that the main quantities are conserved and the entropy can be dissipated (in the mathematical sense) \begin{equation}\label{BothConditionsCon} \int\limits_\Omega {u}^{q}_t~\textrm{d}\Omega = 0,\qquad \int\limits_\Omega \Ent_t~\textrm{d}\Omega \le 0, \end{equation} for each equation, $q=1,\ldots,M$, in the non-linear system. We aim to develop a DGSEM that mimics \eqref{BothConditionsCon} on rectangular meshes in the case of general $h/p$ non-conforming approximations.
As discussed previously, the most important component of a non-conforming method for entropy stable approximations is the coupling of the solution at interfaces through numerical fluxes. For convenience we clarify the notation of the numerical fluxes in the entropy conservative approximation \eqref{eq:ECDG} along interfaces in Fig. \ref{fig:Numflux}. \begin{figure}
\caption{Entropy conservative numerical fluxes at the interfaces of an element.}
\label{fig:Numflux}
\end{figure}
We seek an approximation that discretely preserves primary conservation and discrete entropy stability. The definition of this continuous property \eqref{BothConditionsCon} is translated into the discrete by summing over all elements to be \begin{align} \sum\limits_{\mathrm{all\,elements}} J\! \sum_{i,j=0}^N\omega_i\omega_j \left({U}^q_{t}\right)_{ij}& = {0}, \text{ (primary conservation)}\label{BothConditionsMom},\\ \sum\limits_{\mathrm{all\,elements}} J\! \sum_{i,j=0}^N\omega_i\omega_j \left(S_{t}\right)_{ij}& \le 0, \text{ (entropy stability)}\label{BothConditionsEntStab}, \end{align} where $\left(S_{t}\right)_{ij}$ is a discrete evaluation of the time derivative of the entropy function.
While our goal is the construction of an entropy stable scheme, we will first derive an entropy conservative scheme for smooth solutions, meaning that \begin{align} \sum\limits_{\mathrm{all\,elements}} J\! \sum_{i,j=0}^N\omega_i\omega_j \left(S_{t}\right)_{ij}=0, \text{ (entropy conservation)}.\label{BothConditionsEntCon} \end{align} After deriving an entropy conservative scheme we can obtain an entropy stable scheme by including carefully constructed dissipation within the numerical surface flux as described in Sec. \ref{sec:Dissipation}.
To derive an approximation which conserves the primary quantities and is entropy stable we must examine the discrete growth in the primary quantities and entropy in a single element. \\
\begin{lem}\label{thm1} We assume that the two-point volume flux satisfies the entropy conservation condition \eqref{eq:entCondition}. The discrete growth on a single element of the primary quantities and the entropy of the DG discretization \eqref{eq:ECDG} are \begin{align} J \sum_{i,j=0}^N\omega_i\omega_j\left({U}^q_{t}\right)_{ij}=&-\sum_{j=0}^N\omega_j\left(\tilde{{F}}^{\mathrm{EC},q}_{Nj}-\tilde{{F}}^{\mathrm{EC},q}_{0j}\right)-\sum_{i=0}^N\omega_i\left(\tilde{{G}}^{\mathrm{EC},q}_{iN}-\tilde{{G}}^{\mathrm{EC},q}_{i0}\right),\label{ThmdU} \end{align} where $q=1,\ldots,M$ and \begin{align} J \sum_{i,j=0}^N\omega_i\omega_j\left(S_{t}\right)_{ij}=&-\sum_{j=0}^N\omega_j\left(\sum\limits_{q=1}^M{V}_{Nj}^{q}\tilde{{F}}^{\mathrm{EC},q}_{Nj}-\tilde{\Psi}^f_{Nj}-\left(\sum\limits_{q=1}^M{V}_{0j}^{q}\tilde{{F}}^{\mathrm{EC},q}_{0j}-\tilde{\Psi}^f_{0j}\right)\right)\notag\\ &-\sum_{i=0}^N\omega_i\left(\sum\limits_{q=1}^M{V}_{iN}^{q}\tilde{{G}}^{\mathrm{EC},q}_{iN}-\tilde{\Psi}^g_{iN}-\left(\sum\limits_{q=1}^M{V}_{i0}^{q}\tilde{{G}}^{\mathrm{EC},q}_{i0}-\tilde{\Psi}^g_{i0}\right)\right)\label{ThmdEnt}, \end{align} respectively. \end{lem} \begin{proof} The proof of \eqref{ThmdU} and \eqref{ThmdEnt} is given in Fisher et al. \cite{Fisher2013}; however, for completeness, we include the proof consistent with the current notation and formulations in Appendix \ref{sec:App A}. \end{proof}
We first examine the volume contributions of the entropy conservative approximation because when contracted into entropy space the volume terms move to the interfaces \cite{Fisher2013,Gassner2017} in the form of the entropy flux potential, i.e. \eqref{eq:entPotential}. Note, that the proof in Appendix \ref{sec:App A} concerns the contribution of the volume integral in the DGSEM and only depends on the interior of an element. Therefore, the result of Lemma \ref{thm1} holds for conforming as well as non-conforming meshes.
Therefore, to obtain a primary and entropy conservative scheme on the entire domain we need to choose an appropriate numerical surface flux. In comparison to the volume flux, the surface flux depends on the interfaces of the elements. Here, we need to distinguish between elements with conforming and non-conforming interfaces. We will first describe how to determine such a scheme for conforming interfaces, but differing polynomial orders. Then, we extend these results to consider meshes with non-conforming interfaces (hanging nodes).
\subsection{Conforming Interfaces}\label{sec:p}
In this section we will show how to create a fully conservative scheme on a standard conforming mesh, i.e. the polynomial orders match and there are no hanging nodes. As shown in \eqref{ThmdU} and \eqref{ThmdEnt}, the primary conservation and entropy growth are determined only by the numerical surface fluxes on the interface. Here we exploit that the tensor product basis decouples the approximation in the two spatial directions, and many of the proofs only address the $\xi-$direction because the contribution in the $\eta-$direction is treated in an analogous fashion. Furthermore, the contributions at the four interfaces of an element follow similar steps. As such, we elect to consider all terms related to a single shared interface of a left and right element. \begin{figure}
\caption{Two neighboring elements with a single coinciding interface.}
\label{Confmesh}
\end{figure} For a simple example we present a two element mesh in Fig. \ref{Confmesh} and consider the coupling through the single shared interface. Due to Lemma \ref{thm1} the terms referring to the shared interface are \begin{align} \dTotCon^I&=\sum_{j=0}^N\omega^R_j\tilde{{F}}^{\mathrm{EC},q,R}_{0j}-\sum_{j=0}^N\omega_j^L\tilde{{F}}^{\mathrm{EC},q,L}_{Nj},\label{ConCondbefore}\\ \dTotEnt^I&=\sum_{j=0}^N\omega^R_j\left(\sum\limits_{q=1}^M{V}_{0j}^{q,R}\tilde{{F}}^{\mathrm{EC},q,R}_{0j}-\tilde{\Psi}^{R,f}_{0j}\right)-\sum_{j=0}^N\omega^L_j\left(\sum\limits_{q=1}^M{V}_{Nj}^{q,L}\tilde{{F}}^{\mathrm{EC},q,L}_{Nj}-\tilde{\Psi}^{L,f}_{Nj}\right)\label{ICcondbefore}, \end{align} where the subscript $L$ and $R$ refer to the left and right element, respectively. Here, $\dTotCon^I$ and $\dTotEnt^I$ approximate the integral of $\bm{u}_t$ and $\Ent_t$ on a single interface. In order to derive a discretely conservative scheme, meaning that \eqref{BothConditionsMom} and \eqref{BothConditionsEntCon} hold, we need to derive numerical surface fluxes so that \begin{equation} \begin{split} \dTotCon^I&=0,\\ \dTotEnt^I&=0, \end{split} \end{equation} is satisfied.
Here, since we consider conforming interfaces, it is assumed that $\Delta y:=\Delta y_R=\Delta y_L$. Also, as we focus on a one dimensional interface (the first index of $\tilde{{F}}^{\mathrm{EC},q}$, ${V}^{q,R}$ and $\tilde{\Psi}^{f}$ is fixed), we collect the interface values into the vectors \begin{equation} \begin{split} \tilde{\bm{F}}^{\mathrm{EC},q,L}&:=(\tilde{{F}}_{N_L 0}^{\mathrm{EC},q,L},\dots,\tilde{{F}}_{N_LN_L}^{\mathrm{EC},q,L})^T,\\ \tilde{\bm{F}}^{\mathrm{EC},q,R}&:=(\tilde{{F}}_{00}^{\mathrm{EC},q,R},\dots,\tilde{{F}}_{0N_R}^{\mathrm{EC},q,R})^T, \end{split} \end{equation} and do the same for $\tilde{\bm\Psi}^{L,f},\tilde{\bm\Psi}^{R,f},\bm V^{q,L},\bm V^{q,R}$, respectively.
Furthermore, we introduce a discrete inner product to approximate the $L_2$ inner product. Assume we have two continuous functions $a(x),b(x)$ with their discrete evaluations $\bm A,\bm B$ on $[-1,1]$, then \begin{equation}\label{eq:disInProd} \ipC{\bm A}{\bm B} := \bm A^T\mat M \bm B\approx \int_{-1}^1a(x)b(x)~dx=:\ip{a}{L_2}{b}. \end{equation} Based on the inner product notation we can rewrite \eqref{ConCondbefore} and \eqref{ICcondbefore} as \begin{align} \dTotCon^I& = \ipR{\One^{R}}{\tilde{ \bm{F}}^{\mathrm{EC},q,R}}-\ipL{\One^{L}}{\tilde{\bm{F}}^{\mathrm{EC},q,L}},\label{ConCond}\\ \dTotEnt^I& = \sum\limits_{q=1}^M\ipR{ \bm{V}^{q,R}}{\tilde{ \bm{F}}^{\mathrm{EC},q,R}}-\ipR{\One^{R}}{\tilde{\bm{\Psi}}^{R,f}}-\sum\limits_{q=1}^M\ipL{ \bm{V}^{q,L}}{\tilde{\bm{F}}^{\mathrm{EC},q,L}}+\ipL{\One^{L}}{\tilde{\bm{\Psi}}^{L,f}},\label{ICcond} \end{align} where $\One^{L},\One^{R}$ are vectors of ones of size ${N_L+1}$ and $N_R+1$, respectively. The choice of the numerical flux depends on the nodal distribution in each element. Here, we distinguish between conforming and non-conforming nodal distributions, which we treat in the following subsections.
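To make the notation \eqref{eq:disInProd} concrete, we include a minimal \texttt{numpy} sketch (an illustration only; the LGL nodes and weights for $N=3$ are hard-coded and all variable names are ours, not part of any implementation described in this work):
\begin{verbatim}
import numpy as np

# LGL nodes and weights for N = 3 (hard-coded for this illustration)
x = np.array([-1.0, -np.sqrt(0.2), np.sqrt(0.2), 1.0])
w = np.array([1.0/6.0, 5.0/6.0, 5.0/6.0, 1.0/6.0])
M = np.diag(w)                      # diagonal norm (mass) matrix

def ip(A, B):
    """Discrete inner product <A, B>_N = A^T M B."""
    return A @ M @ B

a = x**2            # nodal values of a(x) = x^2
b = x**3 - x        # nodal values of b(x) = x^3 - x
# a*b has degree 5 = 2N - 1, so the LGL quadrature integrates it exactly:
print(ip(a, b))     # ~0, the exact integral of this odd function
\end{verbatim}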
\subsubsection{Conforming Nodal Distribution}
We first provide a brief overview on the entropy conservative properties of the conforming DGSEM \eqref{eq:ECDG}. This is straightforward in the conforming case and we use this discussion to introduce notation which is necessary for the non-conforming proofs presented later. That is, the nodal distributions in each element are identical and there are no hanging nodes in the mesh. For a conforming approximation it is possible to have a point-to-point transfer of solution information at interfaces because the mass matrix, polynomial order and numerical flux ``match'' \begin{equation} \label{ConfHelp} \begin{split} \mat M&:=\mat M_R=\mat M_L,\\ N&:=N_R=N_L,\\ \tilde{\bm{F}}^{\mathrm{EC},q}&:=\tilde{\bm{F}}^{\mathrm{EC},q,R}=\tilde{\bm{F}}^{\mathrm{EC},q,L}. \end{split} \end{equation} Primary and entropy conservation can be achieved by choosing an entropy conservative numerical flux function as shown by Fisher et al. \cite{Fisher2013}. We include the proof for completeness and recast it in our notation in Lemma \ref{Thm:Conf}.\\
\begin{lem}\label{Thm:Conf} Assume we have an entropy conservative numerical flux function, $\tilde{\bm{F}}^{\mathrm{EC}}$, that satisfies \eqref{eq:entCondition}, then the split form DGSEM is primary and entropy conservative for the DGSEM \eqref{eq:splitDG} by setting the numerical volume and surface fluxes to be $\tilde{\bm{F}}^{\#}=\tilde{\bm{F}}^{*}:=\tilde{\bm{F}}^{\mathrm{EC}}$. \end{lem} \begin{proof} Primary conservation can be shown easily by inserting \eqref{ConfHelp} in \eqref{ConCond}: \begin{equation} \dTotCon^I:=\ipC{\bm{1}}{\tilde{\bm{F}}^{\mathrm{EC},q}}-\ipC{\bm{1}}{\tilde{\bm{F}}^{\mathrm{EC},q}}=0. \end{equation} For entropy conservation we analyze \eqref{ICcond} \begin{equation} \label{ICcondpause} \dTotEnt^I = \sum\limits_{q=1}^M\ipC{\bm{V}^{q,R}}{\tilde{\bm{F}^{\mathrm{EC},q}}}- \sum\limits_{q=1}^M\ipC{\bm{V}^{q,L}}{\tilde{\bm{F}}^{\mathrm{EC},q}}-\ipC{\bm{1}}{\tilde{\bm{\Psi}}^{f,R}}+\ipC{\bm{1}}{\tilde{\bm{\Psi}}^{f,L}}. \end{equation} For the discrete inner product it holds \begin{equation} \ipC{\bm A}{\bm B}=\ipC{\bm{1}}{ \bm A\circ \bm B}, \end{equation} where $\circ$ denotes the Hadamard product for matrices. Then \eqref{ICcondpause} is rearranged to become \begin{align} \dTotEnt^I&= \ipC{\bm{1}}{\sum\limits_{q=1}^M\bm{V}^{q,R}\circ\tilde{\bm{F}}^{\mathrm{EC},q}}-\ipC{\bm{1}}{\sum\limits_{q=1}^M\bm{V}^{q,L}\circ\tilde{\bm{{F}}^{\mathrm{EC},q}}} -\ipC{\bm{1}}{\tilde{\bm{ \Psi}}^{f,R}}+\ipC{\bm{1}}{\tilde{\bm{\Psi}}^{f,L}}\notag\\ &=\ipC{\bm{1}}{\sum\limits_{q=1}^M(\bm{V}^{q,R}-\bm{V}^{q,L})\circ\tilde{\bm{F}}^{\mathrm{EC},q}-(\tilde{\bm\Psi}^{f,R}-\tilde{\bm\Psi}^{f,L})}\\ &=\ipC{\bm{1}}{\bm\Phi}\notag, \end{align} where $\bm\Phi:= \sum\limits_{q=1}^M(\bm{V}^{q,R}-\bm{V}^{q,L})\circ\tilde{\bm{F}}^{\mathrm{EC},q}-(\tilde{\bm\Psi}^{f,R}-\tilde{\bm\Psi}^{f,L})$. By analyzing a single component of $\bm\Phi$ we find \begin{equation} \Phi_i= \sum\limits_{q=1}^M\left({V}^{q,R}_{i0}- {V}^{q,L}_{iN}\right) \tilde{{F}}^{\mathrm{EC},q}(\bm{U}_{iN}^L,\bm{U}_{i0}^R)-( \tilde\Psi_{i0}^{f,R}- \tilde\Psi_{iN}^{f,L})=0, \end{equation} due to \eqref{eq:entCondition}. So \begin{equation} \dTotEnt^I=\ipC{\bm{1}}{\bm 0}=0, \end{equation} which leads to an entropy conservative nodal DG scheme. \end{proof} How to modify the entropy conservative numerical flux with dissipation to ensure that the scheme is entropy stable is described later in Sec. \ref{sec:Dissipation}. For now, we address the issue of $h/p$ refinement where non-conforming meshes may contain differing nodal distributions or hanging nodes. To do so, we consider the entropy conservative fluxes in a modified way. Namely, the projection procedure of the standard mortar method is augmented in the next sections to guarantee the entropic properties of the numerical approximation.
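The key cancellation in the proof above relies solely on the flux condition \eqref{eq:entCondition}. As a simple scalar illustration (Burgers' equation $u_t+(u^2/2)_x=0$ is used here purely as an example; it is not one of the equations considered in this work), take the entropy $S=\tfrac{u^2}{2}$ with entropy variable $v=u$ and entropy flux potential $\tilde\Psi^f=v\,\tfrac{u^2}{2}-\tfrac{u^3}{3}=\tfrac{u^3}{6}$. The classical entropy conservative flux $\tilde{F}^{\mathrm{EC}}(u_L,u_R)=\tfrac{1}{6}\left(u_L^2+u_Lu_R+u_R^2\right)$ then satisfies \begin{equation*} \left(v_R-v_L\right)\tilde{F}^{\mathrm{EC}}(u_L,u_R)=\frac{(u_R-u_L)(u_L^2+u_Lu_R+u_R^2)}{6}=\frac{u_R^3-u_L^3}{6}=\tilde\Psi^f_R-\tilde\Psi^f_L, \end{equation*} which is exactly the pointwise identity used to conclude $\Phi_i=0$.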
\subsubsection{Non-Conforming Nodal Distribution}\label{subsec: p}
In this section we focus on a discretization where the nodes do not coincide ($p$-refinement), see Fig. \ref{fig:NonConfMesh}(a). As such, we introduce projection operators \begin{equation} \mat{P}_{L2R}\in\mathbb{R}^{(N_R+1)\times (N_L+1)},\quad \mat{P}_{R2L}\in\mathbb{R}^{(N_L+1)\times (N_R+1)}. \end{equation} In particular, the solution on either element is always moved to its neighbor, where the entropy conservative numerical flux is computed. In a sense, this means we ``hide'' the mortar used to cement the two elements together in the non-conforming approximation. This presentation is chosen to simplify the discussion. The mortars are a useful analytical tool to describe the idea of a non-conforming DG method, but in a practical implementation they can be removed with a careful construction of the projection operators.
Here $\mat P_{L2R}$ denotes the projection from the left element to the right element, whereas $\mat P_{R2L}$ denotes the projection from the right element to the left. In the approximation we have two solution polynomials $\bm{\mathfrak{p}}_L$ and $\bm{\mathfrak{p}}_R$ evaluated at the corresponding interfaces of each element. The numerical approximation is primary and entropy conservative provided both \eqref{ConCond} and \eqref{ICcond} are zero. However, the subtractions involve two discrete inner products with differing polynomial order between the left and right elements. Therefore, we require projection operators that move information from the left node distribution to the right and vice versa. As such, in discrete inner product notation, the projections must satisfy \cite{Mattsson2010b} \begin{equation}\label{ProjMot} \ipL{\bm{\mathfrak{p}}_L}{\mat{P}_{R2L}\bm{\mathfrak{p}}_R}=\ipR{\mat{P}_{L2R}\bm{\mathfrak{p}}_L}{\bm{\mathfrak{p}}_R}\; \Leftrightarrow\; \bm{\mathfrak{p}}_L^T\mat M_L\mat{P}_{R2L}\bm{\mathfrak{p}}_R=\bm{\mathfrak{p}}_L^T\mat{P}_{L2R}^T\mat M_R\bm{\mathfrak{p}}_R. \end{equation} As the polynomials in \eqref{ProjMot} are arbitrary, we set the projection operators to be \textit{$\mat M$-compatible}, meaning \begin{equation}\label{7} \begin{split} \mat{P}_{R2L}^T\mat M_L&=\mat M_R\mat{P}_{L2R}, \end{split} \end{equation} which is the same constraint considered in \cite{Carpenter2016,Friedrich2016,Kozdon2016,Mattsson2010b,Parsani2016}. Non-conforming methods with DG operators have been derived by Kopriva \cite{Kopriva2002} on LGL nodes, which imply a diagonal SBP norm. The construction of the projection operators is motivated by a discrete $L_2$ projection over Lagrange polynomials and can be found in Appendix \ref{sec:App B}.
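As an illustration of the $\mat M$-compatibility condition \eqref{7}, the following minimal \texttt{numpy} sketch constructs one possible pair of compatible operators for hard-coded LGL data with $N_L=2$ and $N_R=3$: the operator towards the finer side is plain Lagrange interpolation, and its counterpart is defined through \eqref{7}, which mirrors the discrete $L_2$ projection of Appendix \ref{sec:App B}. The helper \texttt{lagrange\_matrix} and all variable names are ours and not taken from the authors' implementation.
\begin{verbatim}
import numpy as np

def lagrange_matrix(x_from, x_to):
    """L[r, j] = ell_j(x_to[r]) for the Lagrange basis on the nodes x_from."""
    L = np.ones((len(x_to), len(x_from)))
    for j, xj in enumerate(x_from):
        for k, xk in enumerate(x_from):
            if k != j:
                L[:, j] *= (x_to - xk) / (xj - xk)
    return L

# LGL nodes/weights for N_L = 2 and N_R = 3 (hard-coded illustration)
x_L = np.array([-1.0, 0.0, 1.0]);  w_L = np.array([1/3, 4/3, 1/3])
x_R = np.array([-1.0, -np.sqrt(0.2), np.sqrt(0.2), 1.0])
w_R = np.array([1/6, 5/6, 5/6, 1/6])
M_L, M_R = np.diag(w_L), np.diag(w_R)

P_L2R = lagrange_matrix(x_L, x_R)              # interpolation to the finer side
P_R2L = np.linalg.solve(M_L, P_L2R.T @ M_R)    # defined via M-compatibility

# M-compatibility, Eq. (7): P_R2L^T M_L = M_R P_L2R
assert np.allclose(P_R2L.T @ M_L, M_R @ P_L2R)
# both operators reproduce constant states exactly
assert np.allclose(P_L2R @ np.ones(3), np.ones(4))
assert np.allclose(P_R2L @ np.ones(4), np.ones(3))
\end{verbatim}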
The conditions for primary conservation \eqref{ConCond} and entropy conservation \eqref{ICcond} can be directly adapted from the conforming case. Before proving total conservation, we first introduce the operator $\mathbb{E}$ to simplify the upcoming proof of Theorem \ref{thm: p} and to make it more compact. The operator $\mathbb{E}$ extracts the diagonal of a matrix: \begin{equation}\label{eq:EE} \mathbb{E}\begin{pmatrix} a_{11} & \hdots & \hdots & a_{1n} \\ \vdots & \ddots & & \vdots \\ \vdots & & \ddots & \vdots \\ a_{n1} & \hdots & \hdots & a_{nn} \end{pmatrix}= \begin{pmatrix} a_{11}\\ a_{22} \\ \vdots \\ a_{n-1,n-1} \\ a_{nn} \end{pmatrix}, \end{equation} and has the following property. \\
\begin{lem}\label{Lemma}
Given a vector $\bm a\in\mathbb{R}^{N_L+1}$, a diagonal matrix $\mat A=\diag(\bm a)\in\mathbb{R}^{(N_L+1)\times(N_L+1)}$ and a dense rectangular matrix $\mat{B}\in\mathbb{R}^{(N_L+1)\times(N_R+1)}$, then \begin{equation} \ipL{\bm a}{\mathbb{E}(\mat P_{R2L}\mat B^T)}=\ipR{\One^{R}}{\mathbb{E}(\mat P_{L2R}\mat A\mat B)}. \end{equation} \end{lem} \begin{proof} \begin{equation} \begin{split} \ipL{\bm a}{\mathbb{E}(\mat P_{R2L}\mat B^T)}&=\left(\!\One^{L}\right)^{\!T}\!\mat A \mat M_L \mathbb{E}( \mat P_{R2L}\mat B^T)=\left(\!\One^{L}\right)^{\!T}\! \mathbb{E}(\mat A \underbrace{\mat M_L \mat P_{R2L}}_{ \stackrel{\eqref{7}}{=}\mat P_{L2R}^T \mat M_R}\mat B^T),\\ &=\left(\!\One^{L}\right)^{\!T}\! \mathbb{E}(\underbrace{\mat A \mat P_{L2R}^T }_{=:\tilde{\mat A}}\underbrace{\mat M_R\mat B^T}_{=:\tilde{\mat B}})=\left(\!\One^{L}\right)^{\!T}\! \mathbb{E}(\tilde{\mat A}\tilde{\mat B}), \end{split} \end{equation} since $\mat A$ and the norm matrix $\mat M_L$ are diagonal matrices they are free to move inside the extraction operator \eqref{eq:EE} and $\tilde{\mat A}\in\mathbb{R}^{(N_L+1)\times(N_R+1)}, \tilde{\mat B}\in\mathbb{R}^{(N_R+1)\times(N_L+1)}$. Note, that \begin{equation*} \left(\!\One^{L}\right)^{\!T}\! \mathbb{E}(\tilde{\mat A}\mat{ \tilde B})=\sum_{i=0}^{N_L}1\sum_{j=0}^{N_R} \tilde A_{ij} \tilde B_{ji}=\sum_{j=0}^{N_R}1\sum_{i=0}^{N_L} \tilde B_{ji} \tilde A_{ij}=\left(\!\One^{R}\right)^{\!T}\!\mathbb{E}(\tilde{\mat B}\mat{ \tilde A}). \end{equation*} By replacing $\tilde{\mat A}, \tilde{\mat B}$ we get \begin{equation} \begin{split} \ipL{\bm a}{\mathbb{E}(\mat P_{R2L}\mat B^T)}&=\left(\!\One^{R}\right)^{\!T}\!\mathbb{E}(\mat M_R\mat B^T\mat A \mat P_{L2R}^T)=\left(\!\One^{R}\right)^{\!T}\!\mat M_R\mathbb{E}(\mat P_{L2R}\mat A\mat B),\\ &=\ipR{\One^{R}}{\mathbb{E}(\mat P_{L2R}\mat A\mat B)}, \end{split} \end{equation} because $\mathbb{E}(\mat W)=\mathbb{E}(\mat W^T)$ for any square matrices $\mat W$. \end{proof}
Furthermore, we introduce the following matrices \begin{equation}\label{def} \begin{split} [\tilde{\mat{F}}_{L,R}^{\mathrm{EC},\eqInd}]_{ij}&=\tilde{{F}}^{\mathrm{EC},q}(\bm{U}^L_{iN_L},\bm{U}^R_{j0}),\\ [\tilde{\mat\Psi}^{f,L}]_{ij}&=\tilde\Psi^f(\bm{U}^L_{iN_L}), \\ [\tilde{\mat\Psi}^{f,R}]_{ij}&=\tilde\Psi^f(\bm{U}^R_{j0}), \end{split} \end{equation} for $i=0,\dots,N_L$, $j=0,\dots,N_R$, where $N_L$ and $N_R$ denote the one dimensional polynomial degrees of the left and right element, respectively, and $q=1,\dots,M$. Here, $\tilde{{F}}^{\mathrm{EC},q}$ denotes a flux satisfying \eqref{eq:entCondition}. We note that the matrices containing the entropy flux potential are constant along rows or columns, respectively, and that for the non-conforming case $N_L\neq N_R$, so all matrices in \eqref{def} are rectangular. \\ With the operator $\mathbb{E}$, Lemma \ref{Lemma} and \eqref{def}, it is possible to construct an entropy conservative scheme for non-conforming, non-linear problems.
\begin{thm} \label{thm: p} Assume we have an entropy conservative numerical flux $\tilde{\bm{F}}^{\mathrm{EC}}$ from a conforming discretization satisfying the condition \eqref{eq:entCondition}. The fluxes \begin{align}
\tilde{{F}}^{\mathrm{EC},q,L}_{i}&:=\sum_{j=0}^{N_R}\left[\mat{P}_{R2L}\right]_{ij}\left[\left(\tilde{\mat{F}}_{L,R}^{\mathrm{EC},\eqInd}\right)^T\right]_{ji},\quad i = 0,\ldots,N_L,\\
\tilde{{F}}^{\mathrm{EC},q,R}_{j}&:=\sum_{i=0}^{N_L}\left[\mat{P}_{L2R}\right]_{ji}[\tilde{\mat{F}}_{L,R}^{\mathrm{EC},\eqInd}]_{ij},\quad j = 0,\ldots,N_R, \end{align} or in a more compact matrix-vector notation \begin{align}
\tilde{\bm{F}}^{\mathrm{EC},q,L}&:=\mathbb{E}\left(\mat{P}_{R2L}\left(\tilde{\mat{F}}_{L,R}^{\mathrm{EC},\eqInd}\right)^{\!T}\right),\label{8.1}\\ \tilde {\bm{F}}^{\mathrm{EC},q,R}&:=\mathbb{E}\left(\mat{P}_{L2R}\tilde{\mat{F}}_{L,R}^{\mathrm{EC},\eqInd}\right),\label{8.2} \end{align} are primary and entropy conservative for non-conforming nodal distributions. \end{thm} \begin{proof} First, we prove primary conservation by including \eqref{8.1} and \eqref{8.2} in \eqref{ConCond} \begin{equation}\label{eq:IUT} \begin{split} \dTotCon^I:=&\ipR{\One^{R}}{\mathbb{E}\left( \mat{P}_{L2R}\tilde{\mat{F}}_{L,R}^{\mathrm{EC},\eqInd}\right)}-\ipL{\One^{L}}{\mathbb{E}\left( \mat{P}_{R2L}\left(\tilde{\mat{F}}_{L,R}^{\mathrm{EC},\eqInd}\right)^{\!T}\right)}.\\ \end{split} \end{equation} We apply the result of Lemma \ref{Lemma} to the last term of \eqref{eq:IUT} with $\bm a = \One^{L}$ and $\mat B = \tilde{\mat{F}}_{L,R}^{\mathrm{EC},\eqInd}$ to get the conservation for the primary variables \begin{equation} \dTotCon^I=\ipR{\One^{R}}{\mathbb{E}\left( \mat{P}_{L2R}\tilde{\mat{F}}_{L,R}^{\mathrm{EC},\eqInd}\right)}-\ipR{\One^{R}}{\mathbb{E}\left( \mat{P}_{L2R}\tilde{\mat{F}}_{L,R}^{\mathrm{EC},\eqInd}\right)}=0. \end{equation} Next, we show that the discretization is entropy conservative. To do so, we substitute the fluxes \eqref{8.1} and \eqref{8.2} in \eqref{ICcond}. \begin{equation}\label{ThreeParts} \begin{aligned} \dTotEnt^I:=&\sum\limits_{q=1}^M \ipR{\bm{V}^{\eqInd,R}}{\mathbb{E}\left( \mat{P}_{L2R}\tilde{\mat{F}}_{L,R}^{\mathrm{EC},\eqInd}\right)}-\ipR{\One^{R}}{\tilde{\bm{\Psi}}^{f,R}}\\ &-\sum\limits_{q=1}^M\ipL{\bm{V}^{\eqInd,L}}{\mathbb{E}\left( \mat{P}_{R2L}\left(\tilde{\mat{F}}_{L,R}^{\mathrm{EC},\eqInd}\right)^{\!T}\right)} + \ipL{\One^{L}}{\tilde{\bm{\Psi}}^{f,L}}. \end{aligned} \end{equation} We divide \eqref{ThreeParts} into three pieces to simplify the analysis. \begin{equation}\label{final} \begin{aligned} \dTotEnt^I=&\underbrace{\sum_{q=1}^M\ipR{\bm{V}^{\eqInd,R}}{\mathbb{E}\left( \mat{P}_{L2R}\tilde{\mat{F}}_{L,R}^{\mathrm{EC},\eqInd}\right)}}_{=(I)}-\underbrace{\sum\limits_{q=1}^M\ipL{\bm{V}^{\eqInd,L}}{\mathbb{E}\left( \mat{P}_{R2L}\left(\tilde{\mat{F}}_{L,R}^{\mathrm{EC},\eqInd}\right)^{\!T}\right)}}_{=(II)}\\ &-\left(\underbrace{\ipR{\One^{R}}{\tilde{\bm{\Psi}}^{f,R}} - \ipL{\One^{L}}{\tilde{\bm{\Psi}}^{f,L}}}_{=(III)}\right). \end{aligned} \end{equation} For $(I)$ we see that \begin{align} (I)= \sum\limits_{q=1}^M\ipR{\bm{V}^{\eqInd,R}}{\mathbb{E}\left(\mat{P}_{L2R}\tilde{\mat{F}}_{L,R}^{\mathrm{EC},\eqInd}\right)}= \sum\limits_{q=1}^M\ipR{\One^{R}}{\bm{V}^{q,R}\circ\mathbb{E}\left(\mat{P}_{L2R}\tilde{\mat{F}}_{L,R}^{\mathrm{EC},\eqInd}\right)}. \end{align} By introducing $\mat{V}^{q,R}:=\mathrm{diag}(\bm V^{q,R})$ we can shift the entropy variables inside the $\mathbb{E}$ operator and obtain \begin{align} (I) = \sum\limits_{q=1}^M\ipR{\One^{R}}{\mathbb{E}\left(\mat{V}^{q,R}\mat{P}_{L2R}\tilde{\mat{F}}^{\mathrm{EC},q}_{L,R}\right)}= \ipR{\One^{R}}{\mathbb{E}\left(\mat{P}_{L2R}\left(\sum\limits_{q=1}^M\mat{V}^{q,R}\tilde{\mat{F}}_{L,R}^{\mathrm{EC},\eqInd}\right)\right)}, \end{align} because $\mathbb{E}(\mat A\mat B)=\mathbb{E}(\mat B\mat A)$ for square matrices $\mat A,\mat B$.
Considering the second term (II) of \eqref{final} \begin{align} (II)=& \sum\limits_{q=1}^M\ipL{\bm{V}^{\eqInd,L}}{\mathbb{E}\left( \mat{P}_{R2L}\left(\tilde{\mat{F}}_{L,R}^{\mathrm{EC},\eqInd}\right)^{\!T}\right)}, \end{align} and applying Lemma \ref{Lemma} with $\bm a = \bm{V}^{\eqInd,L}, \mat A = \mat{V}^{q,L}$, and $\mat B = \tilde{\mat{F}}_{L,R}^{\mathrm{EC},\eqInd}$ gives \begin{align} (II)=\sum\limits_{q=1}^M\ipR{\One^{R}}{\mathbb{E}\left(\mat{P}_{L2R}\mat{V}^{q,L}\tilde{\mat{F}}_{L,R}^{\mathrm{EC},\eqInd} \right)}=\ipR{\One^{R}}{\mathbb{E}\left(\mat{P}_{L2R}\left(\sum\limits_{q=1}^M\mat{V}^{q,L}\tilde{\mat{F}}_{L,R}^{\mathrm{EC},\eqInd}\right) \right)}. \end{align}
Last, we analyze term $(III)$, \begin{equation}\label{(III)} (III)=\ipR{\One^{R}}{\tilde{\bm{\Psi}}^{f,R}}-\ipL{\One^{L}}{\tilde{\bm{\Psi}}^{f,L}}. \end{equation} For this analysis we rewrite $(III)$ in terms of the matrices $\tilde{\mat{\Psi}}^{f,L}, \tilde{\mat{\Psi}}^{f,R}$. Note that each column of $\tilde{\mat{\Psi}}^{f,R}$ and each row of $\tilde{\mat{\Psi}}^{f,L}$ remain constant. The projection operators are exact for a constant state, i.e. $\mat{P}_{R2L}\One^{R}=\One^{L}$ and $\mat{P}_{L2R}\One^{L}=\One^{R}$. Hence, we define \begin{align} \tilde{\bm{\Psi}}^{f,R}&=\mathbb{E}\left( \mat{P}_{L2R}\tilde{\mat{\Psi}}^{f,R}\right),\\ \tilde{\bm{\Psi}}^{f,L}&=\mathbb{E}\left( \mat{P}_{R2L}\left(\tilde{\mat{\Psi}}^{f,L}\right)^{\!T}\right). \end{align} Substituting the above definitions in \eqref{(III)} we arrive at \begin{align} (III)=\ipR{\One^{R}}{\mathbb{E}\left( \mat{P}_{L2R}\tilde{\mat{\Psi}}^{f,R}\right)}-\ipL{\One^{L}}{\mathbb{E}\left( \mat{P}_{R2L}\left(\tilde{\mat{\Psi}}^{f,L}\right)^{\!T}\right)}. \end{align} Again applying Lemma \ref{Lemma} (where $\bm a=\One^{L}, \mat B = \tilde{\mat{\Psi}}^{f,L}$) yields \begin{align} (III)=&\ipR{\One^{R}}{\mathbb{E}\left(\mat{P}_{L2R}\tilde{\mat{\Psi}}^{f,R} \right)}-\ipR{\One^{R}}{\mathbb{E}\left(\mat{P}_{L2R}\tilde{\mat{\Psi}}^{f,L} \right)},\\ =&\ipR{\One^{R}}{\mathbb{E}\left(\mat{P}_{L2R}\left(\tilde{\mat{\Psi}}^{f,R}- \tilde{\mat{\Psi}}^{f,L} \right)\right)}. \end{align} Substituting $(I),(II),(III)$ in \eqref{final} we have rewritten the entropy update to be \begin{equation}\label{final1} \begin{split} \dTotEnt^I=&\ipR{\One^{R}}{\mathbb{E}\left(\mat{P}_{L2R}\left(\sum\limits_{q=1}^M\mat{V}^{q,R}\tilde{\mat{F}}_{L,R}^{\mathrm{EC},\eqInd}-\sum\limits_{q=1}^M\mat{V}^{q,L}\tilde{\mat{F}}_{L,R}^{\mathrm{EC},\eqInd}-\left(\tilde{\mat{\Psi}}^{f,R}- \tilde{\mat{\Psi}}^{f,L}\right)\right)\right)},\\ =&\ipR{\One^{R}}{\mathbb{E}\left(\mat{P}_{L2R}\mat{ \tilde S} \right)}, \end{split} \end{equation} with \begin{equation} \tilde{\mat S}:=\sum\limits_{q=1}^M\mat{V}^{q,R}\tilde{\mat{F}}_{L,R}^{\mathrm{EC},\eqInd}-\sum\limits_{q=1}^M\mat{V}^{q,L}\tilde{\mat{F}}_{L,R}^{\mathrm{EC},\eqInd}-\left(\tilde{\mat{\Psi}}^{f,R}- \tilde{\mat{\Psi}}^{f,L}\right). \end{equation} Next, we analyze a single component of $\mat{ \tilde S}$. Let $i=0,\dots,N_L$ and $j=0,\dots,N_R$, then \begin{equation}\label{almostthere} [\mat{ \tilde S}]_{ij}=\sum\limits_{q=1}^M\left(V^{q,R}_{j0}- V^{q,L}_{iN_L}\right)\tilde{{F}}^{\mathrm{EC},q}(\bm{U}_{iN_L}^L,\bm{U}_{j0}^R)-(\tilde{\Psi}_{j0}^{f,R}-\tilde{\Psi}_{iN_L}^{f,L}). \end{equation} Since \eqref{almostthere} contains the entropy conservative flux, the condition \eqref{eq:entCondition} yields \begin{equation} [\mat{ \tilde S}]_{ij}=0. \end{equation} Inserting this result in \eqref{final1} we arrive at \begin{equation} \begin{split} \dTotEnt^I=&\ipR{\One^{R}}{\mathbb{E}\left(\mat{P}_{L2R}\mat 0 \right)}=0. \end{split} \end{equation} Therefore, $\dTotEnt^I$ is zero for $ \tilde{{F}}^{\mathrm{EC},q,L}:=\mathbb{E}\left(\mat{P}_{R2L}\left(\tilde{\mat{F}}_{L,R}^{\mathrm{EC},\eqInd}\right)^{\!T}\right)$ and $ \tilde{{F}}^{\mathrm{EC},q,R}:=\mathbb{E}\left(\mat{P}_{L2R}\tilde{\mat{F}}_{L,R}^{\mathrm{EC},\eqInd}\right)$.
\end{proof}
Note that this proof is general for any hyperbolic PDE with physical fluxes $f,g$ that is equipped with an entropy. Based on this proof, we can construct entropy conservative schemes with algebraically non-conforming discretizations ($p$ refinement). To introduce additional flexibility, we next consider geometrically non-conforming discretizations where the interfaces may not coincide ($h$ refinement).
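To illustrate Theorem \ref{thm: p} numerically, the following self-contained \texttt{numpy} sketch evaluates the fluxes \eqref{8.1} and \eqref{8.2} for a scalar stand-in problem (Burgers' equation with entropy $S=u^2/2$, used only as an example) and checks that the interface contributions to the primary quantity and to the entropy vanish to machine precision. The LGL data are hard-coded and all names are ours, not part of the authors' code.
\begin{verbatim}
import numpy as np

def lagrange_matrix(x_from, x_to):
    """L[r, j] = ell_j(x_to[r]) for the Lagrange basis on x_from."""
    L = np.ones((len(x_to), len(x_from)))
    for j, xj in enumerate(x_from):
        for k, xk in enumerate(x_from):
            if k != j:
                L[:, j] *= (x_to - xk) / (xj - xk)
    return L

# LGL data for N_L = 2 and N_R = 3 (hard-coded illustration)
x_L = np.array([-1.0, 0.0, 1.0]);  w_L = np.array([1/3, 4/3, 1/3])
x_R = np.array([-1.0, -np.sqrt(0.2), np.sqrt(0.2), 1.0])
w_R = np.array([1/6, 5/6, 5/6, 1/6])
M_L, M_R = np.diag(w_L), np.diag(w_R)
P_L2R = lagrange_matrix(x_L, x_R)              # interpolation L -> R
P_R2L = np.linalg.solve(M_L, P_L2R.T @ M_R)    # M-compatible projection R -> L

# Burgers' equation as a scalar stand-in: entropy S = u^2/2, psi = u^3/6
f_ec = lambda uL, uR: (uL**2 + uL*uR + uR**2) / 6.0   # EC two-point flux
psi  = lambda u: u**3 / 6.0                           # entropy flux potential

rng = np.random.default_rng(0)
u_L = rng.random(x_L.size)           # left interface trace (arbitrary data)
u_R = rng.random(x_R.size)           # right interface trace

Fmat = f_ec(u_L[:, None], u_R[None, :])          # [F]_{ij} = f_ec(uL_i, uR_j)
F_L  = np.diag(P_R2L @ Fmat.T)                   # Eq. (8.1): left surface flux
F_R  = np.diag(P_L2R @ Fmat)                     # Eq. (8.2): right surface flux

dU = w_R @ F_R - w_L @ F_L                                        # Eq. (ConCond)
dS = (w_R @ (u_R*F_R - psi(u_R))) - (w_L @ (u_L*F_L - psi(u_L)))  # Eq. (ICcond)
print(abs(dU), abs(dS))   # both vanish to machine precision
\end{verbatim}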
\subsection{Non-Conforming Interfaces with Hanging Nodes}\label{sec:h}
In Sec. \ref{subsec: p} we created numerical fluxes for elements with a coinciding interface but differing polynomial orders. As such, each numerical interface flux only depends on one neighboring element. For example, the numerical flux $\tilde{\bm{F}}^{\mathrm{EC},R}$ in \eqref{8.2} only contains the projection operator $\mat P_{L2R}$, so it depends on a single neighboring element $L$. This was acceptable as long as the interfaces had no hanging nodes; however, for the more general case of $h$ refinement as in Fig. \ref{hangingnodes}, the interface coupling requires addressing contributions from several elements. \begin{figure}
\caption{$h$ refinement with hanging nodes.}
\label{hangingnodes}
\end{figure}
Throughout this section we focus on meshes as in Fig. \ref{hangingnodes}. For the $h$ refinement analysis we adapt the results derived in the previous section, where the interfaces coincide. Therefore, we consider all left elements as if they were \emph{one} element $L=\bigcup_{i=1}^EL_i$. Again, this procedure ``hides'' the mortars within the new projection operators. Thus, the combined element $L$ has a conforming interface with element $R$ (red line in Fig. \ref{hangingnodes}) and carries the nodes of the elements $L_i$ on this interface, \begin{equation} \eta^L=(\eta^{L_1}_0,\dots,\eta^{L_1}_{N_{L_1}},\dots,\eta^{L_E}_0,\dots,\eta^{L_E}_{N_{L_E}})^T, \end{equation} where $\eta^{L_i}$ denotes the vertical nodes of the element $L_i$. For the elements $L$ and $R$ the projection operators need to satisfy the $\mat M$-compatibility condition \eqref{7}: \begin{equation}\label{hproj} \mat{P}_{L2R}^T\mat M_R=\mat M_L\mat{P}_{R2L}, \end{equation} with \begin{equation} \mat M_L=\frac{1}{\Delta_R}\begin{pmatrix} \Delta_{L_1}\mat M_{L_1} & & \\ & \ddots & \\ & & \Delta_{L_E}\mat M_{L_E} \end{pmatrix}, \end{equation} where $\Delta$ denotes the height of an element.
We can decompose the ``large'' projection operators into parts that contribute from/to each of the smaller elements with the following structure
\begin{equation}\label{construct} \mat P_{L2R}=\begin{bmatrix} \mat P_{L_12R}&\dots & \mat P_{L_E2R} \end{bmatrix}, \end{equation} and
\begin{equation} \mat P_{R2L}=\begin{bmatrix} \mat P_{R2L_1}\\ \vdots \\ \mat P_{R2L_E} \end{bmatrix}. \end{equation} With this new notation we adapt the $\mat M$-compatibility condition \eqref{hproj} to become \begin{equation}\label{hproj2} \Delta_{L_i} \mat{P}_{R2L_i}^T\mat M_{L_i}=\Delta_R\mat M_R\mat{P}_{L_i2R},\quad i = 1,\ldots,E. \end{equation}
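The following minimal \texttt{numpy} sketch assembles such block operators for two equally sized left sub-elements with $N_{L_i}=2$ coupled to one right element with $N_R=3$, and verifies \eqref{hproj} together with the exact treatment of constants. The construction (interpolation from $R$ to each $L_i$, with \eqref{hproj2} defining the reverse direction) is one possible choice consistent with the conditions above; the helper and all names are ours, not necessarily the authors' implementation.
\begin{verbatim}
import numpy as np

def lagrange_matrix(x_from, x_to):
    """L[r, j] = ell_j(x_to[r]) for the Lagrange basis on x_from."""
    L = np.ones((len(x_to), len(x_from)))
    for j, xj in enumerate(x_from):
        for k, xk in enumerate(x_from):
            if k != j:
                L[:, j] *= (x_to - xk) / (xj - xk)
    return L

# LGL data: two small left elements with N = 2, one right element with N = 3
x2 = np.array([-1.0, 0.0, 1.0]);                         w2 = np.array([1/3, 4/3, 1/3])
x3 = np.array([-1.0, -np.sqrt(0.2), np.sqrt(0.2), 1.0]); w3 = np.array([1/6, 5/6, 5/6, 1/6])
M_R, M_Li = np.diag(w3), np.diag(w2)
dR, dL1, dL2 = 2.0, 1.0, 1.0       # element heights: L1 and L2 each cover half of R

# nodes of L1/L2 mapped into the reference coordinate of R's interface
xi_L1 = (x2 - 1.0) / 2.0           # lower half [-1, 0]
xi_L2 = (x2 + 1.0) / 2.0           # upper half [ 0, 1]

P_R2L1 = lagrange_matrix(x3, xi_L1)   # interpolation R -> L1 (exact for degree <= 3)
P_R2L2 = lagrange_matrix(x3, xi_L2)
# Eq. (hproj2): P_{Li2R} = (Delta_{Li}/Delta_R) M_R^{-1} P_{R2Li}^T M_{Li}
P_L12R = (dL1/dR) * np.linalg.solve(M_R, P_R2L1.T @ M_Li)
P_L22R = (dL2/dR) * np.linalg.solve(M_R, P_R2L2.T @ M_Li)

# assembled operators and the weighted block mass matrix M_L
P_L2R = np.hstack([P_L12R, P_L22R])
P_R2L = np.vstack([P_R2L1, P_R2L2])
M_L   = np.block([[dL1*M_Li/dR, np.zeros((3, 3))],
                  [np.zeros((3, 3)), dL2*M_Li/dR]])

assert np.allclose(P_L2R.T @ M_R, M_L @ P_R2L)      # M-compatibility, Eq. (hproj)
assert np.allclose(P_R2L @ np.ones(4), np.ones(6))  # constants are reproduced
assert np.allclose(P_L2R @ np.ones(6), np.ones(4))
\end{verbatim}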
As in Sec. \ref{sec:p} we choose the numerical surface fluxes so that the scheme is primary and entropy conservative. We note that for the $h$ non-conforming case (just like $p$ non-conforming) the result of Lemma \ref{thm1} is still valid. That is, the volume contributions have no effect on the non-conforming approximation. Only a careful definition of the interface coupling is needed to construct an entropy stable non-conforming DG approximation. Therefore, we analyze all terms which are related to the interface connecting $L_1,\dots,L_E$ and $R$. Similar to \eqref{ConCond} and \eqref{ICcond}, we arrive at the following terms \begin{align} \dTotCon^I=&\ipR{\One^{R}}{\tilde{\bm{F}}^{\mathrm{EC},q,R}}-\sum_{i=1}^E\ipLi{\One^{L_i}}{\tilde{\bm{F}}^{\mathrm{EC},q,L_i}},\label{ConCond2}\\ \dTotEnt^I=&\sum\limits_{q=1}^M\ipR{\bm{V}^{\eqInd,R}}{\tilde{\bm{F}}^{\mathrm{EC},q,R}}-\ipR{\One^{R}}{\tilde{\mat{\Psi}}^{R,f}}\notag-\sum_{i=1}^E\left(\sum\limits_{q=1}^M\ipLi{\bm{V}^{q,L_i}}{\tilde{\bm{F}}^{\mathrm{EC},q,L_i}}-\ipLi{\One^{L_i}}{\tilde{\mat{\Psi}}^{L_i,f}}\right),\label{ICcond2} \end{align} which need to be zero to obtain a discretely primary and entropy conservative scheme.\\
\begin{cor}\label{cor}
Given a set of projection operators that satisfy \eqref{hproj2}, we can prove analogously to Theorem \ref{thm: p} that the fluxes \begin{equation}
\tilde{{F}}^{\mathrm{EC},q,R}:=\mathbb{E}\left(\mat{P}_{L2R}\tilde{\mat{F}}_{L,R}^{\mathrm{EC},\eqInd}\right) =\sum_{i=1}^E\mathbb{E}\left(\mat{P}_{L_i2R}\tilde{\mat{F}}_{L_i,R}^{\mathrm{EC},\eqInd}\right)\label{ECfluxR}, \end{equation} and \begin{equation}
\tilde{{F}}^{\mathrm{EC},q,L_i}:=\mathbb{E}\left(\mat{P}_{R2L_i}\left(\tilde{\mat{F}}_{L_i,R}^{\mathrm{EC},\eqInd}\right)^{\!T}\right)\label{ECfluxL}, \end{equation} for $i=1,\dots,E$ lead to a primary and entropy conservative scheme. \end{cor} \begin{proof} This result requires a straightforward modification of the result from Lemma \ref{Lemma}, namely \begin{equation} \Delta_{L_i}\ipL{\bm a}{\mathbb{E}(\mat P_{R2L_i}\mat B^T)}=\Delta_R\ipR{\One^{R}}{\mathbb{E}(\mat P_{L_i2R}\mat A\mat B)}. \end{equation} The proof then follows the identical steps as given for Theorem \ref{thm: p}, but now keeping track of the adjustable element sizes. \end{proof}
\subsection{Including Dissipation within the Numerical Surface Flux}\label{sec:Dissipation}
In Sec. \ref{sec:p} and Sec. \ref{sec:h} we derived primary and entropy conservative schemes for non-linear problems on non-conforming meshes with $h/p$ refinement. From these results, we can include interface dissipation and arrive at an entropy stable discretization for an arbitrary non-conforming rectangular mesh.
While conservation laws conserve the entropy for smooth solutions, discontinuities in the form of shocks can develop in finite time for non-linear problems despite smooth initial data. In the presence of shocks the mathematical entropy must decay, which needs to be reflected in the numerical scheme. Thus, we describe how to include interface dissipation, which leads to an entropy stable scheme. We note that the numerical volume flux in \eqref{eq:ECDG} remains an entropy conservative flux which satisfies \eqref{eq:entCondition}.
We will focus on the general case, where we have differing nodal distributions as well as hanging nodes ($h$ refinement) as in Fig. \ref{hangingnodes}. As in Sec. \ref{sec:h} we assume that the projection operators satisfy the compatibility condition \eqref{hproj2}.\\
\begin{thm}\label{thm: diss} The scheme is primary conservative and entropy stable for the numerical surface fluxes \begin{equation}\label{ESfluxL} \tilde{\bm{F}}^{\mathrm{ES},q,L_i}= \tilde{\bm{F}}^{\mathrm{EC},q,L_i}-\frac{\lambda}{2}\left(\mat P_{R2L_i} \bm{V}^{q,R}- \bm{V}^{q,L_i} \right), \end{equation} \begin{equation}\label{ESfluxR} \tilde{\bm{F}}^{\mathrm{ES},q,R}= \tilde{\bm{F}}^{\mathrm{EC},q,R}-\frac{\lambda}{2}\sum\limits_{i=1}^E\mat P_{L_i2R}\left(\mat P_{R2L_i} \bm{V}^{q,R} - \bm{V}^{q,L_i}\right), \end{equation} where $\lambda>0$ is a scalar which controls the dissipation rate. \end{thm}
\begin{proof} By including dissipation we can prove primary conservation by substituting the new fluxes \eqref{ESfluxL} and \eqref{ESfluxR} into \eqref{ConCond2} \begin{equation} \resizebox{\hsize}{!}{$ \dTotCon^I=\ipR{\One^{R}}{\tilde{\bm{F}}^{\mathrm{EC},q,R}-\frac{\lambda}{2}\sum\limits_{i=1}^E\mat P_{L_i2R}\left(\mat P_{R2L_i}\bm{V}^{q,R} - \bm{V}^{q,L_i}\right)}-\sum\limits_{i=1}^E\ipLi{\One^{L_i}}{\tilde{ \bm{F}}^{\mathrm{EC},q,L_i}-\frac{\lambda}{2}\left(\mat P_{R2L_i} \bm{V}^{q,R}- \bm{V}^{q,L_i} \right)}.$} \end{equation} Due to Corollary \ref{cor} we know that \begin{equation} \ipR{\One^{R}}{\tilde{{F}}^{\mathrm{EC},q,R}}-\sum_{i=1}^E\ipLi{\One^{L_i}}{\tilde{{F}}^{\mathrm{EC},q,L_i}}=0, \end{equation} and we find that \begin{equation} \dTotCon^I=-\frac{\lambda\Delta_R}{4}\sum_{i=1}^E\left(\!\One^{R}\right)^{\!T}\!\mat M_R\mat P_{L_i2R}\left(\mat P_{R2L_i} \bm{V}^{q,R} - \bm{V}^{q,L_i}\right)+\sum_{i=1}^E\frac{\lambda\Delta_{L_i}}{4}\left(\!\One^{L_i}\right)^{\!T}\!\mat M_{L_i}\left(\mat P_{R2L_i} \bm{V}^{q,R}- \bm{V}^{q,L_i} \right). \end{equation} Due to \eqref{hproj2} we arrive at \begin{equation} \dTotCon^I=-\sum_{i=1}^E\frac{\lambda\Delta_{L_i}}{4}\left(\!\One^{R}\right)^{\!T}\!\mat P_{R2L_i}^T\mat M_{L_i}\left(\mat P_{R2L_i}\bm{V}^{q,R}- \bm{V}^{q,L_i}\right)+\sum_{i=1}^E\frac{\lambda\Delta_{L_i}}{4}\left(\!\One^{L_i}\right)^{\!T}\!\mat M_{L_i}\left(\mat P_{R2L_i}\bm{V}^{q,R}- \bm{V}^{q,L_i} \right), \end{equation} assuming that $\mat P_{R2L_i}$ can project a constant exactly, meaning $\mat P_{R2L_i}\One^{R}=\One^{L_i}$, it yields \begin{equation} \dTotCon^I=0, \end{equation} which leads to a primary conservative scheme.
To prove entropy stability we include \eqref{ESfluxL} and \eqref{ESfluxR} in \eqref{ICcond} and adapt the results from Corollary \ref{cor} to find that \begin{equation} \label{EntDiss} \begin{split} \dTotEnt^I=&-\sum\limits_{q=1}^M\frac{\Delta_R}{2}\ipR{\bm{V}^{q,R}}{\frac{\lambda}{2}\sum_{i=1}^E\mat P_{L_i2R}\left(\mat P_{R2L_i}\bm{V}^{q,R}- \bm{V}^{q,L_i}\right)}\\ &+\sum\limits_{q=1}^M\sum_{i=1}^E\frac{\Delta_{L_i}}{2}\ipLi{ \bm{V}^{q,L_i}}{\frac{\lambda}{2}\left(\mat P_{R2L_i}\bm{V}^{q,R}- \bm{V}^{q,L_i} \right)},\\ =&-\frac{\lambda\Delta_R}{4}\sum\limits_{q=1}^M\sum_{i=1}^E \left(\bm{V}^{\eqInd,R}\right)^{\!T}\!\mat M_R\mat P_{L_i2R}\left(\mat P_{R2L_i}\bm{V}^{q,R}- \bm{V}^{q,L_i}\right)\\ &+\sum\limits_{q=1}^M\sum_{i=1}^E\frac{\lambda\Delta_{L_i}}{4} \left(\bm{V}^{q,L_i}\right)^{\!T}\!\mat M_{L_i}\left(\mat P_{R2L_i}\bm{V}^{q,R}- \bm{V}^{q,L_i}\right). \end{split} \end{equation} Again, we apply the condition \eqref{hproj2} and obtain \begin{equation} \begin{split} \dTotEnt^I=&-\sum_{i=1}^E\frac{\lambda\Delta_{L_i}}{4} \left(\bm{V}^{\eqInd,R}\right)^{\!T}\!\mat P_{R2L_i}^T\mat M_{L_i}\left(\mat P_{R2L_i} \bm{V}^{q,R} - \bm{V}^{q,L_i}\right)\\ &+\sum_{i=1}^E\frac{\lambda\Delta_{L_i}}{4} \left(\bm{V}^{q,L_i}\right)^{\!T}\!\mat M_{L_i}\left(\mat P_{R2L_i} \bm{V}^{q,R}- \bm{V}^{q,L_i} \right),\\ =&-\sum_{i=1}^E\frac{\lambda\Delta_{L_i}}{4}\left(\mat P_{R2L_i} \bm{V}^{q,R}- \bm{V}^{q,L_i}\right)^T\!\mat M_{L_i}\left(\mat P_{R2L_i} \bm{V}^{q,R} - \bm{V}^{q,L_i}\right)\le 0, \end{split} \end{equation} since each $\mat M_{L_i}$ is a symmetric positive definite matrix and $\lambda\Delta_{L_i}>0$, so the non-conforming DG scheme is entropy stable. \end{proof} Note, that this proof also holds for deriving an entropy stable scheme for geometrically conforming interfaces but differing polynomial order ($p$ refinement) by setting $E=1$.
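The essential structural point of the proof is that the dissipative interface contribution collapses to a negative multiple of a weighted sum of squares. A tiny \texttt{numpy} sketch with placeholder data (all names are ours) makes this explicit:
\begin{verbatim}
import numpy as np

rng  = np.random.default_rng(1)
lam  = 0.7                                  # dissipation rate lambda > 0
w2   = np.array([1/3, 4/3, 1/3])
M_Li = np.diag(w2)                          # norm matrix of each sub-element
d_Li = [1.0, 1.0]                           # heights Delta_{L_i} of the sub-elements

# nodal jumps of one entropy variable at the interface of each sub-element,
# P_{R2L_i} V^{q,R} - V^{q,L_i}; here simply placeholder random data
jumps = [rng.standard_normal(3) for _ in d_Li]

dS = -sum(lam*d/4.0 * (j @ M_Li @ j) for d, j in zip(d_Li, jumps))
print(dS <= 0.0)   # True: a negative multiple of a weighted sum of squares
\end{verbatim}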
To summarize, we derived a primary conservative and entropy stable DGSEM for non-linear problems on general $h/p$ non-conforming meshes. Note, that all results hold for an arbitrary system of non-linear conservation laws as long as entropy conservative numerical fluxes exist that satisfy \eqref{eq:entCondition}.
\section{Numerical results}\label{sec:numResults}
For all numerical results presented in this work we considered the two dimensional Euler equations \begin{equation} \begin{pmatrix} \rho\\ \rho u\\ \rho v\\ E \end{pmatrix}_t +\begin{pmatrix} \rho u\\ \rho u^2+p\\ \rho u v\\ u(E+p) \end{pmatrix}_x +\begin{pmatrix} \rho v\\ \rho u v\\ \rho v^2+p\\ v(E+p) \end{pmatrix}_y =\begin{pmatrix} 0\\ 0\\ 0\\ 0 \end{pmatrix}, \end{equation} on $\Omega\subset\mathbb{R}^2$ and $t\in[0,T]\subset\mathbb{R}^+$ with $E=\frac{1}{2}\rho(u^2+v^2)+\frac{p}{\gamma-1}$ and adiabatic coefficient $\gamma=1.4$.
The entropy conservative/stable non-conforming implementation of the DGSEM of the Euler equations uses the Ismail and Roe entropy conserving flux \cite{Ismail2009} in \eqref{8.1} and \eqref{8.2} for $p$ refinement and in \eqref{ECfluxR} and \eqref{ECfluxL} to apply $h$ refinement.
We use an explicit time integration method to advance the approximate solution. In particular, we select the five-stage, fourth-order low-storage Runge-Kutta method of Carpenter and Kennedy \cite{Kennedy1994}. The explicit time step $\Delta t$ is selected by the CFL condition \cite{Friedrich2016} \begin{equation} \Delta t:=CFL\frac{\min_i\{\frac{\Delta x_i}{2}\frac{\Delta y_i}{2}\}}{\max_j\{N_j+1\}\lambda_{\mathrm{max}}}, \end{equation} where $\Delta x_i$ and $\Delta y_i$ denote the width in $x$- and $y$-direction of the $i^{\mathrm{th}}$ element, $N_j$ denotes the number of nodes in one dimension of the $j^{\mathrm{th}}$ element, and $\lambda_{\mathrm{max}}$ denotes the maximum eigenvalue of the flux Jacobians over the whole domain.
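For concreteness, a small \texttt{numpy} sketch of this time step restriction (the element data are hypothetical and all names are ours):
\begin{verbatim}
import numpy as np

def time_step(dx, dy, N, lam_max, cfl=0.2):
    """Explicit time step according to the CFL condition above.
    dx, dy : arrays of element widths, N : array of polynomial degrees,
    lam_max: largest flux-Jacobian eigenvalue over the whole domain."""
    return cfl * np.min(0.5*dx * 0.5*dy) / (np.max(N + 1) * lam_max)

# hypothetical three-element example
dx = np.array([0.5, 0.25, 0.25]); dy = np.array([0.5, 0.25, 0.25])
N  = np.array([3, 4, 3])
print(time_step(dx, dy, N, lam_max=2.1))
\end{verbatim}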
In this section, we verify the experimental order of convergence as well as conservation of the primary quantities and entropy for the novel $h/p$ non-conforming DGSEM described in this work.
\subsection{Experimental Order of Convergence}
For our numerical convergence experiments, we set $T=1$ and $CFL=0.2$. We analyze the experimental order of convergence for an entropy stable flux. Therefore, we add dissipation to the baseline entropy conservative Ismail and Roe flux \cite{Ismail2009} at each element interface. In particular, we consider a local Lax-Friedrichs type dissipation term with $\bm z =n_1\bm u+n_2\bm v$, where $\bm n=(n_1,n_2)^T$ denotes the normal vector, \begin{equation} \begin{aligned}
\lambda_L&=\max\left\{||\bm z_L+\bm c_L||_\infty, ||\bm z_L||_\infty,|| \bm z_L-\bm c_L||_\infty\right\},\\
\lambda_R&=\max\left\{||\bm z_R+\bm c_R||_\infty, ||\bm z_R||_\infty,|| \bm z_R-\bm c_R||_\infty\right\},\\ \lambda&=\frac{1}{2}\max\left\{\lambda_L,\lambda_R\right\}, \end{aligned} \end{equation} with $c=\sqrt{\frac{\gamma p}{\rho}}$. This value of $\lambda$ is then used in \eqref{ESfluxL} and \eqref{ESfluxR}.
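A minimal \texttt{numpy} sketch of this dissipation rate (the nodal interface traces are placeholder data and the helper name is ours):
\begin{verbatim}
import numpy as np

def llf_rate(rho, u, v, p, n, gamma=1.4):
    """Largest wave speed |z|, |z +/- c| over one interface trace,
    with z = n1*u + n2*v and c = sqrt(gamma*p/rho)."""
    z = n[0]*u + n[1]*v
    c = np.sqrt(gamma*p/rho)
    return max(np.max(np.abs(z + c)), np.max(np.abs(z)), np.max(np.abs(z - c)))

# hypothetical nodal traces of the left and right elements at an interface
n     = np.array([1.0, 0.0])
lam_L = llf_rate(rho=np.array([1.0, 1.1]), u=np.array([0.2, 0.3]),
                 v=np.array([0.0, 0.1]),  p=np.array([1.0, 0.9]), n=n)
lam_R = llf_rate(rho=np.array([1.2, 1.0]), u=np.array([0.1, 0.2]),
                 v=np.array([0.0, 0.0]),  p=np.array([1.1, 1.0]), n=n)
lam   = 0.5*max(lam_L, lam_R)   # the lambda used in Eqs. (ESfluxL)/(ESfluxR)
\end{verbatim}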
For convergence studies we consider the isentropic vortex advection problem taken from \cite{Chen2016}. Here, we set the domain to be $\Omega=[0,10]\times[0,10]$. The initial conditions are \begin{equation} \bm{w}(x,y,0) \equiv \bm{w}_0(x,y) = \begin{pmatrix} T^{\frac{1}{\gamma-1}}\\[0.1cm] 1-(y-5)\phi(r)\\[0.1cm] 1+(x-5)\phi(r)\\[0.1cm] T^{\frac{\gamma}{\gamma-1}}\\[0.1cm] \end{pmatrix}, \end{equation} where we introduce the vector of primitive variables $\bm{w}(x,y,t) = (\rho,u,v,p)^T$ and \begin{equation} \begin{split} r(x,y)=&\sqrt{(x-5)^2+(y-5)^2},\\[0.1cm] T(x,y)=&1-\frac{\gamma-1}{2\gamma}\phi(r^2),\\[0.1cm] \phi(r)=&\varepsilon e^{\alpha(1-r^2)},\\[0.1cm] \end{split} \end{equation} with $\varepsilon=\frac{5}{2\pi}$ and $\alpha=0.5$. With these initial conditions the vortex is advected along the diagonal of the domain. We impose Dirichlet boundary conditions using the exact solution, which is easily determined to be \begin{equation} \bm{w}(x,y,t) = \bm{w}_0(x-t,y-t). \end{equation}
To examine the convergence order for an $h/p$ non-conforming method we consider a general mesh setup that includes pure $p$ non-conforming interfaces, pure $h$ non-conforming interfaces and $h/p$ non-conforming interfaces. Therefore we define three element types $A, B, C$. Here, the mesh is prescribed in the following way: \begin{center} \begin{itemize} \item Elements of type $A$ in $\Omega_1=[0,5]\times[0,10]$\\ \item Elements of type $B$ in $\Omega_2=[5,10]\times[0,5]$\\ \item Elements of type $C$ in $\Omega_3=[5,10]\times[5,10]$. \end{itemize} \end{center} For each level of the convergence analysis, a single element is divided into four sub-elements. This mesh refinement strategy is sketched in Fig. \ref{mesh3}.
\begin{figure}
\caption{Three levels of mesh refinement used to investigate the experimental order of convergence for the $h/p$ non-conforming DG approximation.}
\label{mesh3}
\end{figure} The DG derivative matrix (i.e. the SBP operator) depends on the polynomial degree within each element. Therefore, in the case of $p$ refinement the SBP operator may differ between elements $A,B,C$.
We consider the DGSEM on Legendre-Gauss-Lobatto nodes as in \cite{Gassner2013}. To do so, we investigate the following configurations: \begin{itemize} \item Element $A$ with a degree $p_A=p$ operator in $x$- and $y$-direction\\ \item Element $B$ with a degree $p_B=p+1$ operator in $x$- and $y$-direction\\ \item Element $C$ with a degree $p_C=p$ operator in $x$- and $y$-direction, \end{itemize} with $p=2,3$.
With such element distributions we consider $p$ refinement along the line $x=5$ for $y\in[5,10]$ and $h$ refinement along the line $y=5$ for $x\in[0,10]$. To carefully treat the non-conforming interfaces we create the projection operators described in Appendix \ref{sec:App B}.
With these operators included in the non-conforming entropy stable scheme we obtain the experimental order of convergence (EOC) rates collected in Tables \ref{DGMix2} and \ref{DGMix3}. \begin{table}[ht] \begin{center} \textbf{DG operators with mixed polynomial degree}\\
\begin{minipage}{0.4\textwidth} \begin{center}
\begin{tabular}{c|c|c} \hline DOFS & $L_2$ & EOC\\ \hline 544 &1.90E-01&\\ 2176 &3.06E-02&2.6\\ 8704 &4.28E-03&2.8\\ 34826 &8.44E-04&2.3\\ 139264&1.80E-04&2.2\\ \hline \end{tabular}\\[0.1cm] \caption{Experimental order of convergence for the non-conforming entropy stable scheme using DG-operators of degree two and three.} \label{DGMix2} \end{center} \end{minipage} \qquad\qquad \begin{minipage}{0.4\textwidth} \begin{center}
\begin{tabular}{c|c|c} \hline DOFS & $L_2$ & EOC\\ \hline 912 &2.55E-02&\\ 3648 &2.02E-03&3.7\\ 14592 &1.81E-04&3.5\\ 58368 &1.98E-05&3.2\\ 233472&2.28E-06&3.1\\ \hline \end{tabular}\\[0.1cm] \caption{Experimental order of convergence for the non-conforming entropy stable scheme using DG-operators of degree three and four.} \label{DGMix3} \end{center} \end{minipage} \end{center} \end{table}
We observe a convergence order slightly higher than $p$, where $p=\min\{p_A,p_B,p_C\}$. This behavior is also documented for non-conforming schemes, e.g. in \cite{Friedrich2016} for linear problems. In comparison, conforming schemes have an EOC of $p+1$. The order reduction presumably occurs because of the limited degree of the projection operators.
Consider two elements with SBP operators $(\mat M_A, \mat D_A)$ and $(\mat M_B, \mat D_B)$, where $\mat D_A$ and $\mat D_B$ are of degree $p_A$ and $p_B$. For SBP operators constructed on LGL nodes (DGSEM \cite{Gassner2013}) or on uniformly distributed nodes (SBP-SAT finite difference \cite{DCDRF2014}) the norm matrices $\mat M_A$ and $\mat M_B$ can integrate polynomials of degree $2p_A-1$ and $2p_B-1$ exactly. Let $\mat{P}_{A2B}$ denote the projection operator of degree $p_1$ and $\mat{P}_{B2A}$ the projection operator of degree $p_2$. Then Lundquist and Nordstr\"om \cite{Nordstrom2015} proved that \begin{equation} p_1+p_2\le 2p_{min}-1, \end{equation} where $p_{min}=\min\{p_A,p_B\}$. So, when considering non-conforming schemes, not all projection operators can be of degree $p_{min}$. The upper bound of $2p_{min}-1$ is due to the accuracy of the integration matrix. For this reason Friedrich et al. \cite{Friedrich2016} created a special set of SBP finite difference operators whose norm matrix can integrate polynomials of degree $>2p_{min}$ exactly. With these operators it is possible to construct projection operators of the same degree as the SBP operators (degree preservation). The construction of the projection operators is outlined in \cite{Friedrich2016}. Convergence tests with these operators are documented in Appendix \ref{sec:App C} and show the full convergence order of $p+1$ in the non-conforming case.
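For example, for the degree two/three configuration of Table \ref{DGMix2} we have $p_{min}=2$ and therefore $p_1+p_2\le 3$: at most one of the two projection operators can be of degree $p_{min}=2$, while the other is of degree at most one, which is consistent with the observed reduction of the convergence order towards $p$.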
To summarize, the non-conforming entropy stable scheme has the flexibility to choose different nodal distributions as well as elements of different sizes and obtains an experimental order of convergence of $p$.
\subsection{Verification of Primary and Entropy Conservation/Stability} In this section we numerically verify primary conservation and entropy conservation/stability for the newly derived scheme. We first demonstrate entropy conservation which was the result of Theorem \ref{thm: p} and Corollary \ref{cor}. Therefore we consider the entropy conservative flux of Ismail and Roe \cite{Ismail2009} without dissipation. To verify the conservation of entropy, we consider the mesh in Fig. \ref{mesh3}(c) on $\Omega=[0,1]\times[0,1]$ with periodic boundary conditions. For each type of element we consider DG operators with $p_A=p_C=3$ and $p_B=4$. To calculate the discrete growth in the primary quantities and entropy we rewrite \eqref{eq:splitDG} as \begin{equation}\label{eq:splitDGRes} J\left(\bm{U}_t\right)_{ij}+\bm{Res}\left(\bm{U}_t\right)_{ij} =0, \end{equation} where \begin{equation} \resizebox{\hsize}{!}{$ \begin{aligned} \bm{Res}\left(\bm{U}_t\right)_{ij} =&+ \frac{1}{\mat M_{ii}}\left(\delta_{iN}\left[\tilde{\bm{F}}^{\mathrm{EC}}(1,\eta_j;\hat{n}) - \tilde{\bm{F}}_{Nj}\right] - \delta_{i0}\left[\tilde{\bm{F}}^{\mathrm{EC}}(-1,\eta_j;\hat{n}) - \tilde{\bm{F}}_{0j}\right]\right)+2\sum_{m=0}^N \mat D_{im}\tilde{\bm{F}}^{\mathrm{EC}}\left(\bm{U}_{ij},\bm{U}_{mj}\right)\\ &+ \frac{1}{\mat M_{jj}}\left(\delta_{jN}\left[\tilde{\bm{G}}^{\mathrm{EC}}(\xi_i,1;\hat{n}) - \tilde{\bm{G}}_{iN}\right] - \delta_{j0}\left[\tilde{\bm{G}}^{\mathrm{EC}}(\xi_i,-1;\hat{n}) - \tilde{\bm{G}}_{i0}\right]\right)+2\sum_{m=0}^N \mat D_{jm}\tilde{\bm{G}}^{\mathrm{EC}}\left(\bm{U}_{ij},\bm{U}_{im}\right).\\ \end{aligned}$} \end{equation} The growth in entropy is computed by contracting \eqref{eq:splitDGRes} with the vector of entropy variables, i.e., \begin{equation} \label{eq:splitDGRes2} J\bm{V}_{ij}^T\left(\bm{U}_t\right)_{ij}=-\bm{V}_{ij}^T\bm{Res}\left(\bm{U}_t\right)_{ij} \Leftrightarrow J\left(\Ent_{t}\right)_{ij}=-\bm{V}_{ij}^T\bm{Res}\left(\bm{U}_t\right)_{ij}, \end{equation} where we use the definition of the entropy variables \eqref{eq:entVars} to obtain the temporal derivative, $\left(\Ent_{t}\right)_{ij}$, at each LGL node. As shown in Theorem \ref{thm: diss}, the scheme is primary and entropy conservative when no interface dissipation is included, meaning that \begin{equation} \begin{split} \sum\limits_{\mathrm{all\,elements}} J\! \sum_{i,j=0}^N\omega_i\omega_j \left(\bm{U}_t\right)_{ij}= 0,\\ \sum\limits_{\mathrm{all\,elements}} J\! \sum_{i,j=0}^N\omega_i\omega_j \left(\Ent_{t}\right)_{ij}= 0, \end{split} \end{equation} for all time. We verify this result numerically by inserting \eqref{eq:splitDGRes2} and calculating \begin{equation} \begin{split} \bm{\TotCon_t}&:=-\sum\limits_{\mathrm{all\,elements}}~\sum_{i,j=0}^N\omega_i\omega_j \bm{Res}\left(\bm{U}_t\right)_{ij}= 0,\\ \TotEnt_t&:=-\sum\limits_{\mathrm{all\,elements}}~\sum_{i,j=0}^N\omega_i\omega_j\bm{V}_{ij}^T\bm{Res}\left(\bm{U}_t\right)_{ij}, \end{split} \end{equation} using a discontinuous initial condition \begin{equation} \begin{pmatrix} \rho\\ u\\ v\\ p\\ \end{pmatrix} = \begin{pmatrix} \mu_{1,1}\\ \mu_{1,2}\\ \mu_{1,3}\\ \mu_{1,4}\\ \end{pmatrix} \quad\text{ if } x\le y,\qquad \begin{pmatrix} \rho\\ u\\ v\\ p\\ \end{pmatrix} = \begin{pmatrix} \mu_{2,1}\\ \mu_{2,2}\\ \mu_{2,3}\\ \mu_{2,4}\\ \end{pmatrix} \quad\text{ if } x> y. \end{equation} Here $\mu_{k,l}$ are uniformly generated random numbers in $[0,1]$. The random initial condition is chosen to demonstrate entropy conservation independent of the initial condition.
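The bookkeeping described above can be summarized in a short \texttt{numpy} sketch (the array layout and function name are ours; $K$ denotes the number of elements and $M$ the number of conserved quantities):
\begin{verbatim}
import numpy as np

def conservation_check(w, V, Res):
    """Discrete growth of the primary quantities and of the entropy summed
    over all elements, following the two formulas above (the Jacobian J
    cancels between U_t = -Res/J and the J-weighted quadrature).
    w  : (N+1,)            LGL quadrature weights
    V  : (K, N+1, N+1, M)  entropy variables at the LGL nodes of K elements
    Res: (K, N+1, N+1, M)  residuals of the split-form DGSEM"""
    ww = np.outer(w, w)                             # omega_i * omega_j
    dU = -np.einsum('ij,kijq->q', ww, Res)          # one value per conserved quantity
    dS = -np.einsum('ij,kijq,kijq->', ww, V, Res)   # scalar total entropy growth
    return dU, dS

# with an entropy conservative surface flux both outputs stay near machine
# precision, as in the conservation table below
\end{verbatim}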
We calculate $\bm{\TotCon_t}$ and $\TotEnt_t$ for $1000$ different initial conditions, which gives us $(\TotCon_t)_{lk}$ and $(\TotEnt_t)_{k}$ for $k=1,\dots,1000$ and $l=1,\dots,4$. Measuring these samples in the $L_2$ norm, we obtain the values collected in Table \ref{TabEnt1}. \begin{table}[ht] \begin{center} \textbf{Verification of primary and entropy conservation}\\
\begin{tabular}{c|c|c|c|c} \hline $L_2(\bm{\TotEnt_t})$ & $L_2((\bm{\TotCon_t})_{1,:})$ & $L_2((\bm{\TotCon_t})_{2,:})$ & $L_2((\bm{\TotCon_t})_{3,:})$ & $L_2((\bm{\TotCon_t})_{4,:})$\\ \hline 4.56E-14 & 2.57E-14 & 1.35E-14 & 2.26E-14 & 8.53E-14\\ \hline \end{tabular}\\[0.1cm] \caption{Growth of the primary quantities $\bm{\TotCon_t}$ and of the entropy $\bm{\TotEnt_t}$ for 1000 different random initial conditions with the new scheme, measured in the $L_2$ norm. All values are near machine precision, which demonstrates primary and entropy conservation.} \label{TabEnt1} \end{center} \end{table}
In Table \ref{TabEnt1} we verify primary and entropy conservation. In comparison, when considering the same setup and calculating the numerical flux with the standard mortar method by Kopriva \cite{Kopriva1996b} we verify primary conservation but the method \textit{is not} entropy conservative, see Table \ref{TabEnt2}.
\begin{table}[ht] \begin{center} \textbf{Calculating the growth in the primary quantities and entropy with the standard mortar method}\\
\begin{tabular}{c|c|c|c|c} \hline $L_2(\bm{\TotEnt_t})$ & $L_2((\bm{\TotCon_t})_{1,:})$ & $L_2((\bm{\TotCon_t})_{2,:})$ & $L_2((\bm{\TotCon_t})_{3,:})$ & $L_2((\bm{\TotCon_t})_{4,:})$\\ \hline 3.07E-02 & 5.90E-15 & 2.62E-14 & 6.27E-15 & 1.04E-13\\ \hline \end{tabular}\\[0.1cm] \caption{Growth of the primary quantities $\bm{\TotCon_t}$ and of the entropy $\bm{\TotEnt_t}$ for 1000 different random initial conditions with the mortar element method, measured in the $L_2$ norm. Here, we verify conservation of the primary quantities but not entropy conservation.} \label{TabEnt2} \end{center} \end{table} Next, we demonstrate the increased robustness of the novel entropy conservative, non-conforming scheme. Therefore, we approximate the total entropy in time by \begin{equation} IS:=\sum\limits_{\mathrm{all\,elements}} J\! \sum_{i,j=0}^N\omega_i\omega_j \left(S\right)_{ij} \end{equation} over the time domain $t\in [0,T]$, where we choose $T=25$ and $CFL=0.5$. For the Euler equations the entropy function is defined by \begin{equation} S=-\frac{\rho}{\gamma-1}\log\left(\frac{p}{\rho^{\gamma}}\right). \end{equation} We advance the solution in time with the low-storage Runge-Kutta time integration method of Carpenter and Kennedy \cite{Kennedy1994} and monitor the total entropy, using a discontinuous initial condition \begin{equation} \begin{pmatrix} \rho\\ u\\ v\\ p\\ \end{pmatrix} = \begin{pmatrix} 1.08\\ 0.2\\ 0.01\\ 0.95\\ \end{pmatrix} \quad\text{ if } x\le y,\qquad \begin{pmatrix} \rho\\ u\\ v\\ p\\ \end{pmatrix} = \begin{pmatrix} 1\\ 10^{-12}\\ 10^{-12}\\ 1\\ \end{pmatrix} \quad\text{ if } x> y, \end{equation} and periodic boundaries. Again, we use the newly derived method and the classical mortar method \cite{Tan2012,Kopriva1996b}. In Fig. \ref{fig:Mortar} we plot the temporal evolution of the entropy for the standard mortar method against the newly derived scheme.
\begin{figure}
\caption{Evolution of the total entropy comparing the behavior of the standard mortar method against the new scheme derived in this work with an entropy conservative surface flux. We see that the total entropy grows for the mortar method with this test configuration whereas the entropy is conserved for the new scheme. In fact, the mortar method crashes at $t\approx 1$.}
\label{fig:Mortar}
\end{figure}
The new scheme conserves the total entropy. However, for the mortar method we observe an unpredictable behavior of the entropy for $t<1$ and note that at $t\approx 1$ the approach even crashes. This has been verified for the CFL numbers $CFL=0.5;~0.25;~0.125;~0.0625$ and demonstrates the enhanced robustness of entropy conserving/stable schemes.
Finally, we verify the entropy stability and conservation of the primary quantities. Therefore, we include dissipation in a local Lax-Friedrichs sense as described in Sec. \ref{sec:Dissipation}. For this test we use the same configuration as for verifying entropy conservation and set $CFL=0.5$. \begin{figure}
\caption{Evolution of the total entropy of the solution with and without dissipation. We see that the total entropy is conserved when no interface dissipation is included and that the total entropy decays with interface dissipation.}
\label{fig: Ent}
\caption{Demonstration of the conservation of the primary quantities. The results are identical with and without interface dissipation.}
\label{fig: Con}
\end{figure}
In Fig. \ref{fig: Con} we can see that the primary quantities are conserved over time. Note that, in comparison, the non-entropy conservative mortar scheme crashes at $t\approx 1$. Also, we note that the plot remains the same whether or not dissipation is included. In Fig. \ref{fig: Ent} we can see that the total entropy remains constant when considering an entropy conservative surface flux. When dissipation is included, the total entropy decays, which numerically verifies entropy stability.
\section{Conclusion}
In this work we derived an $h/p$ non-conforming primary conservative and entropy stable discontinuous Galerkin spectral element approximation with the summation-by-parts (SBP) property for non-linear conservation laws. We first examined the standard mortar method and found that it does not guarantee entropy conservation/stability for non-linear problems. Hence, we presented a modification of the mortar method with special attention given to the projection operators between non-conforming elements. Extending the work \cite{Carpenter2016}, we generalized an entropy stable $p$ non-conforming discretization to a more general $h/p$ non-conforming setup. In the novel approach neither the nodes nor the interfaces of two neighboring elements need to coincide. Throughout the derivations in this paper it was required to consider SBP operators, like those of the LGL nodal discontinuous Galerkin spectral element method, as these operators mimic the integration-by-parts rule in a discrete manner. To demonstrate the high-order accuracy and entropy conservation/stability of the non-conforming DGSEM we selected the two-dimensional Euler equations. However, we reiterate that the proofs contained herein are general for systems of non-linear hyperbolic conservation laws and directly apply to all diagonal norm SBP operators, e.g. those presented in Appendix \ref{sec:App C}.
\acknowledgement{Lucas Friedrich and Andrew Winters were funded by the Deutsche Forschungsgemeinschaft (DFG) grant TA 2160/1-1. Special thanks go to the Albertus Magnus Graduate Center (AMGC) of the University of Cologne for funding Lucas Friedrich's visit to the National Institute of Aerospace, Hampton, VA, USA. Gregor Gassner has been supported by the European Research Council (ERC) under the European Union's Eighth Framework Programme Horizon 2020 with the research project \textit{Extreme}, ERC grant agreement no. 714487. This work was partially performed on the Cologne High Efficiency Operating Platform for Sciences (CHEOPS) at the Regionales Rechenzentrum K\"{o}ln (RRZK) at the University of Cologne.}
\appendix
\section{Derivations of the Growth of Primary Quantities and Entropy}\label{sec:App A} The proof below is the same result as presented by Fisher et al. \cite{Fisher2013}. For completeness, we re-derive the proof in our notation. We analyze the two dimensional discretizaion of \eqref{eq:ECDG} on a single element. \begin{equation} J\omega_i\omega_j\left(\bm{U}_{t}\right)_{ij}+\omega_j\mathcal{L}(\bm{U}_{ij})_x+\omega_i\mathcal{L}(\bm{U}_{ij})_y=0, \end{equation} with \begin{equation} \begin{aligned} \mathcal{L}(\bm{U}_{ij})_x&=2\sum_{m=0}^N\omega_i\mat D_{im}\tilde{\bm{F}}^{\EC}(\uk_{ij},\uk_{mj})-\left(\delta_{iN}[\tilde{\bm{F}}-\tilde{\bm{F}}^{\EC}]_{Nj}-\delta_{i0}[\tilde{\bm{F}}-\tilde{\bm{F}}^{\EC}]_{0j}\right),\\ \mathcal{L}(\bm{U}_{ij})_y&=2\sum_{m=0}^N\omega_j\mat D_{jm}\tilde{\bm{G}}^{\EC}(\uk_{ij},\uk_{im})-\left(\delta_{Nj}[\tilde{\bm{G}}-\tilde{\bm{G}}^{\EC}]_{iN}-\delta_{0j}[\tilde{\bm{G}}-\tilde{\bm{G}}^{\EC}]_{i0}\right). \end{aligned} \end{equation} Assuming, that $\tilde{\bm{F}}^{\EC}$ and $\tilde{\bm{G}}^{\EC}$ satisfy the appropriate entropy condition \eqref{eq:entCondition} \begin{equation} \begin{split} \tilde{\bm{F}}^{\EC}(\uk_{ij},\bm{U}_{ml})^T\left(\bm{V}_{ij}-\bm{V}_{ml}^{q}\right)=\tilde\Psi^f_{ij}-\tilde\Psi^f_{ml},\\ \tilde{\bm{G}}^{\EC}(\uk_{ij},\bm{U}_{ml})^T\left(\bm{V}_{ij}-\bm{V}_{ml}^{q}\right)=\tilde\Psi^g_{ij}-\tilde\Psi^g_{ml}. \end{split} \end{equation} First, we derive the growth of the primary quantities on each element $E_k, k=1,\ldots,K$. Summing over all nodes $i,j=0,\dots,N$ yields \begin{equation} \underbrace{J\sum_{i,j=0}^N\omega_i\omega_j\left(\bm{U}_{t}\right)_{ij}}_{\approx\int \bm{U}_t \,\mathrm{d}E}+\sum_{j=0}^N\omega_j\sum_{i=0}^N\mathcal{L}(\bm{U}_{ij})_x+\sum_{i=0}^N\omega_i\sum_{j=0}^N\mathcal{L}(\bm{U}_{ij})_y=0, \end{equation} with \begin{equation} \sum_{i=0}^N\mathcal{L}(\bm{U}_{ij})_x=2\sum_{i=0}^N\sum_{m=0}^N\mat{Q}_{im}\tilde{\bm{F}}^{\EC}(\uk_{ij},\uk_{mj})-[\tilde{\bm{F}}-\tilde{\bm{F}}^{\EC}]_{Nj}+[\tilde{\bm{F}}-\tilde{\bm{F}}^{\EC}]_{0j}. \end{equation} Using the SBP property of the matrices $2\mat Q=\mat Q-\mat Q^T+\mat B$ we find \begin{equation} \sum_{i=0}^N\mathcal{L}(\bm{U}_{ij})_x=\sum_{i=0}^N\sum_{m=0}^N \mat{Q}_{im}\tilde{\bm{F}}^{\EC}(\uk_{ij},\uk_{mj})-\sum_{i=0}^N\sum_{m=0}^N\mat{Q}_{im}\tilde{\bm{F}}^{\EC}(\uk_{mj},\uk_{ij})+\tilde{\bm{F}}^{\EC}_{Nj}-\tilde{\bm{F}}^{\EC}_{0j}. \end{equation} Due to nearly skew-symmetric nature of $\mat{Q}$ we arrive at \begin{equation} \sum_{i=0}^N\mathcal{L}(\bm{U}_{ij})_x=\tilde{\bm{F}}^{\EC}_{Nj}-\tilde{\bm{F}}^{\EC}_{0j}. \end{equation} And similar for $\sum\limits_{j=0}^N\mathcal{L}(\bm{U}_{ij})_y$ we have \begin{equation} \sum_{j=0}^N\mathcal{L}(\bm{U}_{ij})_x=\tilde{\bm{G}}^{\EC}_{iN}-\tilde{\bm{G}}^{\EC}_{i0}. \end{equation} Together both directions yield \begin{equation} \underbrace{J\sum_{i,j=0}^N\omega_i\omega_j\left(\bm{U}_{t}\right)_{ij}}_{\approx\int \bm{U}_t \,\mathrm{d}E}=-\sum_{j=0}^N\omega_j\left(\tilde{\bm{F}}^{\EC}_{Nj}-\tilde{\bm{F}}^{\EC}_{0j}\right)-\sum_{i=0}^N\omega_i\left(\tilde{\bm{G}}^{\EC}_{iN}-\tilde{\bm{G}}^{\EC}_{i0}\right), \end{equation} which is precisely \eqref{ThmdU}.
Next, we derive the entropy growth on the single element. To do so, we pre-multiply with the entropy variables and sum over all nodes to get \begin{equation} J\sum_{i,j=0}^N\omega_i\omega_j\underbrace{\bm{V}_{ij}^T\left(\bm{U}_{t}\right)_{ij}}_{=:\left(\Ent_{t}\right)_{ij}}+\sum_{j=0}^N\omega_j\sum_{i=0}^N\bm{V}_{ij}^T\mathcal{L}(\bm{U}_{ij})_x + \sum_{i=0}^N\omega_j\sum_{j=0}^N\bm{V}_{ij}^T\mathcal{L}(\bm{U}_{ij})_y=0, \end{equation} with \begin{equation} \begin{aligned} \sum_{i=0}^N\bm{V}_{ij}^T\mathcal{L}(\bm{U}_{ij})_x=&2\sum_{i=0}^N\sum_{m=0}^N\mat{Q}_{im}\bm{V}_{ij}^T\tilde{\bm{F}}^{\EC}(\uk_{ij},\uk_{mj})-\left[\bm{V}^T\tilde{\bm{F}}-\bm{V}^T\tilde{\bm{F}}^{\EC}\right]_{Nj}+\left[\bm{V}^T\tilde{\bm{F}}+\bm{V}^T\tilde{\bm{F}}^{\EC}\right]_{0j}. \end{aligned} \end{equation} Again using $2\mat Q=\mat Q-\mat Q^T+\mat B$ we have \begin{equation} \begin{aligned} \sum_{i=0}^N\bm{V}_{ij}^T\mathcal{L}(\bm{U}_{ij})_x=&\sum_{i=0}^N\sum_{m=0}^N\mat{Q}_{im}\bm{V}_{ij}^T\tilde{\bm{F}}^{\EC}(\uk_{ij},\uk_{mj})-\sum_{i=0}^N\sum_{m=0}^N\mat{Q}_{im}\bm{V}_{ji}^T\tilde{\bm{F}}^{\EC}(\uk_{mj},\uk_{ij})\\ &+\bm{V}_{Nj}^T\tilde{\bm{F}}^{\EC}_{Nj}-\bm{V}^T_{0j}\tilde{\bm{F}}^{\EC}_{0j}. \end{aligned} \end{equation} Due to entropy conservation condition \eqref{eq:entCondition} and the consistency of the derivative matrix, i.e. $\mat D\bm{1}=\bm 0~\left(\Leftrightarrow\mat Q\bm{1}=\bm 0\right)$ we find \begin{equation} \begin{aligned} \sum_{i=0}^N\bm{V}_{ij}^T\mathcal{L}(\bm{U}_{ij})_x =&\sum_{i=0}^N\sum_{m=0}^N\mat{Q}_{im}\underbrace{\left(\left(\tilde{\bm{F}}^{\EC}(\uk_{ij},\uk_{mj})\right)^T\left(\bm{V}_{ij}-\bm{V}_{ji}\right)\right)}_{=\tilde{\Psi}^f_{ij}-\tilde{\Psi}^f_{mj}}+\bm{V}_{Nj}^T\tilde{\bm{F}}^{\EC}_{Nj}-\bm{V}_{0j}^T\tilde{\bm{F}}^{\EC}_{0j},\\ =&\sum_{i=0}^N\tilde{\Psi}^f_{ij}\underbrace{\sum_{m=0}^N\mat{Q}_{im}}_{=0}-\sum_{i=0}^N\sum_{m=0}^N\underbrace{\mat{Q}_{im}}_{\mat{B}_{im}-\mat{Q}_{mi}}\tilde{\Psi}^f_{mj}+\bm{V}_{Nj}^T\tilde{\bm{F}}^{\EC}_{Nj}-\bm{V}_{0j}^T\tilde{\bm{F}}^{\EC}_{0j},\\ =&-\sum_{i=0}^N\sum_{m=0}^N\mat{B}_{im}\tilde{\Psi}^f_{mj}+\sum_{m=0}^N\tilde{\Psi}^f_{mj}\underbrace{\sum_{i=0}^N\mat{Q}_{mi}}_{=0}+\bm{V}_{Nj}^T\tilde{\bm{F}}^{\EC}_{Nj}-\bm{V}_{0j}^T\tilde{\bm{F}}^{\EC}_{0j},\\ =&\left(\bm{V}_{Nj}^T\tilde{\bm{F}}^{\EC}_{Nj}-\tilde{\Psi}^f_{Nj}\right)-\left(\bm{V}_{0j}^T\tilde{\bm{F}}^{\EC}_{0j}-\tilde{\Psi}^f_{0j}\right). \end{aligned} \end{equation} And similar for $\sum\limits_{j=0}^N\bm{V}_{ij}^T\mathcal{L}(\bm{U}_{ij})_y$ we get \begin{equation} \sum_{j=0}^N\bm{V}_{ij}^T\mathcal{L}(\bm{U}_{ij})_y=\left(\bm{V}_{iN}^T\tilde{\bm{G}}^{\EC}_{iN}-\tilde{\Psi}^g_{iN}\right)-\left(\bm{V}_{i0}^T\tilde{\bm{G}}^{\EC}_{i0}-\tilde{\Psi}^g_{i0}\right) \end{equation} Both directions together yield \begin{equation} \begin{split} \underbrace{J\sum_{i,j=0}^N\omega_i\omega_j\left(\Ent_{t}\right)_{ij}}_{\approx\int \Ent_t \,\mathrm{d}E}=&-\sum_{j=0}^N\omega_j\left(\left(\bm{V}_{Nj}^T\tilde{\bm{F}}^{\EC}_{Nj}-\tilde{\Psi}^f_{Nj}\right)-\left(\bm{V}_{0j}^k\tilde{\bm{F}}^{\EC}_{0j}-\tilde{\Psi}^f_{0j}\right)\right)\\ &-\sum_{i=0}^N\omega_i\left(\left(\bm{V}_{iN}^T\tilde{\bm{G}}^{\EC}_{iN}-\tilde{\Psi}^g_{iN}\right)-\left(\bm{V}_{i0}^T\tilde{\bm{G}}^{\EC}_{i0}-\tilde{\Psi}^g_{i0}\right)\right), \end{split} \end{equation} which is precisely \eqref{ThmdEnt}.
\section{Projection operators for Discontinuous Galerkin methods}\label{sec:App B}
The projection operators for DG methods are constructed with the \textit{Mortar Element Method} by Kopriva \cite{Kopriva2002}.
Here we assume two neighboring elements with a single coinciding interface as in Fig. \ref{Confmesh}. Let $N,M$ denote the polynomial degrees of both elements with corresponding one-dimensional nodes $x_0^N,\dots,x_N^N$ and $x_0^M,\dots,x_M^M$ and integration weights $\omega_0^N,\dots,\omega_N^N$ and $\omega_0^M,\dots,\omega_M^M$. The corresponding norm matrices are defined as $\mat M_N=\diag(\omega_0^N,\dots,\omega_N^N)$ and $\mat M_M=\diag(\omega_0^M,\dots,\omega_M^M)$ and each element is equipped with a set of Lagrange basis functions $l_0^N,\dots,l_N^N$ and $l_0^M,\dots,l_M^M$.
As with the elements, the mortar is also equipped with a set of nodes, integration weights, a norm matrix and Lagrange basis functions. Without loss of generality, we assume $N<M$. Therefore, the polynomial degree on the mortar is $N_{\Xi} = \max\{N,M\} = M$. So the mortar will simply copy the solution data from the element with the higher polynomial degree $M$ because the nodal distributions are identical. Thus, the projection operators from the element to the mortar and back from the mortar to the element are simply the identity matrix, i.e. $\mat P_{M2\Xi}=\mat I^M$ and $\mat P_{\Xi2M}=\mat I^M$. Next, we briefly describe how to project the solution of the element of degree $N$ to the mortar and back.
\textbf{Step 1 (Projection from element of degree $N$ to the mortar):} Assume we have a discretely evaluated function $\bm f=(f_0,\dots,f_N)^T$ with $f(x)=\sum_{i=0}^N\ell_i^N(x)f_i$. We want to project this function onto the mortar to obtain $\bm{f}^{\Xi}=(f^{\Xi}_0,\dots, f^{\Xi}_{M})^T$ with $ f^{\Xi}(x)=\sum_{j=0}^{M}\ell_j^M(x) f^{\Xi}_j$. Note that ${f}(x)\neq {f}^\Xi(x)$ for a polynomial of higher degree. In \cite{Kopriva2002} the operator $P_{N2\Xi}$ is created by an $L_2$ projection on the mortar \begin{equation}\label{App projection} \ip{f}{L_2}{\ell_j^M} = \ip{f^{\Xi}}{L_2}{\ell_j^M} \Leftrightarrow \sum_{i=0}^N\ip{\ell_i^N}{L_2}{\ell_j^M}f_i=\sum_{i=0}^M\ip{\ell_i^M}{L_2}{\ell_j^M}\ f^\Xi_i, \end{equation} for $j=0,\dots,M$. Here, the $L_2$ inner products are evaluated discretely using the appropriate norm matrices. The $L_2$ inner product on the left in \eqref{App projection} is evaluated exactly due to the high-order nature of the LGL quadrature and the assumption that $N<M$. Therefore, using $M$-LGL nodes and weights we have \begin{equation}\label{App left} \ip{\ell_i^N}{L_2}{\ell_j^M} = \ip{\ell_i^N}{M}{\ell_j^M} = \sum_{k=0}^M\omega_k^M\ell_i^N(x_k^M)\ell_j^M(x_k^M)=\sum_{k=0}^M\omega_k^M\ell_i^N(x_k^M)\delta_{jk}=\omega_j^M\ell_i^N(x_j^M), \end{equation} for $i=0,\ldots,N$, $j=0,\ldots,M$, where we use the Kronecker delta property of the Lagrange basis. On the right side of \eqref{App projection} we evaluate an inner product of two polynomial basis functions of order $M$. Therefore, since the LGL quadrature with $M+1$ nodes is exact only for polynomials up to degree $2M-1$, the $L_2$ inner product is approximated by an integration rule with mass lumping, e.g. \cite{Tan2012}, \begin{equation}\label{App right} \ip{\ell_i^M}{L_2}{\ell_j^M} \approx \ip{\ell_i^M}{M}{\ell_j^M} = \sum_{k=0}^M\omega_k^M\ell_i^M(x_k^M)\ell_j^M(x_k^M)=\sum_{k=0}^M\omega_k^M\delta_{ik} \delta_{jk} = \delta_{ij}\omega_j^M, \end{equation} for $i,j=0,\ldots,M$. Next, we define the \textit{interpolation} operator \begin{equation} [\mat L_{N2\Xi}]_{ij}:= \ell_j^N(x_i^M), \end{equation} with $i=0,\dots,M$ and $j=0,\dots,N$ to rewrite \eqref{App projection} in a compact matrix-vector notation \begin{equation} \mat M_M\mat L_{N2\Xi} \bm f=\mat M_M\bm{f}^\Xi \Leftrightarrow \underbrace{\mat L_{N2\Xi}}_{:= \mat{P}_{N2\Xi}}\bm f=\bm{f}^\Xi. \end{equation} So the projection operator to move the solution from the element with $N$ nodes onto the mortar is equivalent to an interpolation operator. However, this does not hold for projecting the solution from the mortar back to the element.
\textbf{Step 2 (Projection from the mortar to element of degree $N$):} To construct the operator $\mat{P}_{\Xi 2N}$ we consider the $L_2$ projection from the mortar back to an element with $N$ nodes. Here, we assume a discrete evaluation of the solution on the mortar $\bm f^\Xi=(f_0^\Xi,\dots,f_M^\Xi)^T$ with $f^\Xi(x)=\sum_{i=0}^M \ell_i^M(x)f_i^\Xi$ and seek the solution on the element $\bm{f}=( f_0,\dots, f_N)^T$ with $ f(x)=\sum_{i=0}^N \ell_i^N(x) f_i$. The $L_2$ projection back to the element is \begin{equation}\label{AppBackprojection} \ip{f^{\Xi}}{L_2}{\ell_j^N}=\ip{f}{L_2}{\ell_j^N} \Leftrightarrow\sum_{i=0}^M \ip{\ell_i^M}{L_2}{\ell_j^N}f_i^\Xi=\sum_{i=0}^N\ip{\ell_i^N}{L_2}{\ell_j^N} f_i, \end{equation} for $j=0,\dots,N$. The $L_2$ inner product on the left in \eqref{AppBackprojection} is computed exactly using $M$-LGL points and the $L_2$ inner product on the right in \eqref{AppBackprojection} is approximated with mass lumping at $N$-LGL nodes. Thus, we obtain \begin{equation} \ip{\ell_i^M}{L_2}{\ell_j^N}=\ip{\ell_i^M}{M}{\ell_j^N}=\sum_{k=0}^M\omega_k^M \ell_i^M(x_k^M) \ell_j^N(x_k^M)=\omega_i^M \ell_j^N(x_i^M), \end{equation} where $i=0,\ldots,M$, $j = 0,\ldots,N$ and \begin{equation} \ip{\ell_i^N}{L_2}{\ell_j^N}\approx\ip{\ell_i^N}{N}{\ell_j^N}=\sum_{k=0}^N\omega_k^N \ell_i^N(x_k^N) \ell_j^N(x_k^N)=\delta_{ij}\omega_j^N, \end{equation} for $i,j=0,\ldots,N$. Again, we write \eqref{AppBackprojection} in a compact matrix-vector notation which gives us \begin{equation} \mat L_{N2\Xi}^T\mat M_M\bm f^\Xi=\mat M_N\bm{f}. \end{equation} As $\mat L_{N2\Xi}=\mat P_{N2\Xi}$ we obtain \begin{equation} \underbrace{\mat M_N^{-1}\mat P_{N2\Xi}^T\mat M_M}_{:=\mat P_{\Xi2N}}\bm f^\Xi=\bm{f}, \end{equation} where we introduce the projection operator (not an interpolation operator) from the mortar back to the element with $N$ nodes. With this approach we have constructed projection operators satisfying the $\mat M$-compatibility condition \eqref{7}, i.e., \begin{equation} \mat P_{\Xi2N}=\mat M_N^{-1}\mat P_{N2\Xi}^T\mat M_M\Leftrightarrow \mat M_N\mat P_{\Xi2N}=\mat P_{N2\Xi}^T\mat M_M. \end{equation}
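A compact way to sanity check the construction above is to assemble both operators for a small example and verify the $\mat M$-compatibility condition directly. The Python sketch below is illustrative only: the names are ours and the degrees $N=3$, $M=5$ are chosen only for demonstration. It builds $\mat P_{N2\Xi}$ as the interpolation matrix $\mat L_{N2\Xi}$ and $\mat P_{\Xi2N}=\mat M_N^{-1}\mat P_{N2\Xi}^T\mat M_M$, checks the compatibility condition, and checks that the interpolation is exact for a polynomial of degree $N$.
\begin{verbatim}
import numpy as np
from numpy.polynomial import legendre as leg

def lgl_nodes_weights(N):
    # LGL nodes/weights for polynomial degree N (N+1 points)
    PN = leg.Legendre.basis(N)
    x = np.concatenate(([-1.0], np.sort(PN.deriv().roots().real), [1.0]))
    w = 2.0 / (N * (N + 1) * PN(x) ** 2)
    return x, w

def lagrange_basis(nodes, x):
    # L[i, j] = l_j(x[i]) for the Lagrange basis on `nodes`
    L = np.ones((len(x), len(nodes)))
    for j, xj in enumerate(nodes):
        for m, xm in enumerate(nodes):
            if m != j:
                L[:, j] *= (x - xm) / (xj - xm)
    return L

N, M = 3, 5
xN, wN = lgl_nodes_weights(N)
xM, wM = lgl_nodes_weights(M)
MN, MM = np.diag(wN), np.diag(wM)

P_N2X = lagrange_basis(xN, xM)            # (M+1) x (N+1), equals L_{N2Xi}
P_X2N = np.linalg.inv(MN) @ P_N2X.T @ MM  # projection back to the element

print(np.allclose(MN @ P_X2N, P_N2X.T @ MM))  # M-compatibility condition
print(np.allclose(P_N2X @ xN**N, xM**N))      # exact for a degree-N polynomial
\end{verbatim}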
By combining the operators, we can construct projections which directly move the solution from one element to the other (in some sense ``hiding'' the mortar): \begin{equation} \begin{split} \mat P_{N2M}&=\mat{P}_{\Xi2M}\mat{P}_{N2\Xi},\\ \mat P_{M2N}&=\mat{P}_{\Xi2N}\mat{P}_{M2\Xi}. \end{split} \end{equation} Note that in this paper we only consider LGL nodes for approximating the $L_2$-projection. However, the approach in \cite{Kopriva2002} is more general as it also considers Legendre-Gauss nodes. Also, the construction of the projection operators on interfaces with hanging nodes is briefly discussed there.
\section{Experimental Order of Convergence - Degree Preserving Element based Finite Difference Operators}\label{sec:App C}
Besides Discontinuous Galerkin SBP operators, we analyze the convergence of degree preserving, element based finite difference (DPEBFD) operators. As described in \cite{Friedrich2016}, these operators are SBP operators by construction, for which our entropy stable discretization remains stable. The norm matrix of the DPEBFD operator integrates polynomials up to degree $2p+1$ exactly, where $p$ denotes the minimum polynomial degree of all elements. In comparison to SBP finite difference operators as in \cite{DCDRF2014}, these operators are element based, meaning that the number of nodes is fixed, as for DG operators.
As we focus on elements with SBP operators of the same degree, we set all elements to be DPEBFD elements with degree $p$ where $p=2,3$. To approximate the convergence order of the non-conforming discretization, we consider the same mesh refinement strategy as in Fig. \ref{mesh3} with element types $A,B,C$. These types are set up in the following way: \begin{itemize} \item Element $A$ with $22$ nodes in $x$- and $y$-direction, \item Element $B$ with $24$ nodes in $x$- and $y$-direction, \item Element $C$ with $22$ nodes in $x$- and $y$-direction. \end{itemize} This leads to a mesh with combined $h/p$ refinement. Here, we obtain the results in Tables \ref{DP2}-\ref{DP3}.
\begin{table}[ht] \begin{center} \textbf{DPEBFD SBP operators}\\
\begin{minipage}{0.4\textwidth} \begin{center}
\begin{tabular}{c|c|c} \hline DOFS & $L_2$ & EOC\\ \hline 6176 &6.09E-01&\\ 24704 &1.60E-01&1.9\\ 98816 &2.44E-02&2.7\\ 395264 &3.11E-03&3.0\\ 1581056&3.97E-04&3.0\\ \hline \end{tabular}\\[0.1cm] \caption{Experimental order of convergence for DPEBFD operators with $p=2$.} \label{DP2} \end{center} \end{minipage} \qquad \begin{minipage}{0.4\textwidth} \begin{center}
\begin{tabular}{c|c|c} \hline DOFS & $L_2$ & EOC\\ \hline 768 &3.47E-01&\\ 3072 &8.70E-02&2.0\\ 12288 &6.83E-03&3.7\\ 49152 &4.69E-04&3.9\\ 196608&2.99E-05&4.0\\ \hline \end{tabular}\\[0.1cm] \caption{Experimental order of convergence for DPEBFD operators with $p=3$.} \label{DP3} \end{center} \end{minipage} \end{center} \end{table}
As documented in \cite{Friedrich2016}, we numerically verify an EOC of $p+1$. So when considering degree preserving SBP operators, our entropy stable non-conforming method can handle $h/p$ refinement and possesses full order. However, when considering DG operators we obtain a smaller $L_2$ error on a coarser mesh. We do not claim that DPEBFD operators have the best error properties, but considering these operators is a possible cure for retaining a full order scheme. The development of optimal degree preserving SBP operators is left for future work.
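For reference, the EOC values reported in Tables \ref{DP2} and \ref{DP3} can be recomputed directly from the tabulated data under the standard assumption that the two-dimensional mesh size scales like $1/\sqrt{\mathrm{DOFS}}$, i.e. $\mathrm{EOC}=\log(E_{\mathrm{coarse}}/E_{\mathrm{fine}})/\log\sqrt{\mathrm{DOFS}_{\mathrm{fine}}/\mathrm{DOFS}_{\mathrm{coarse}}}$. A short Python sketch using the $p=3$ values of Table \ref{DP3}:
\begin{verbatim}
import math

# (DOFS, L2 error) pairs taken from Table DP3 (p = 3)
dofs = [768, 3072, 12288, 49152, 196608]
err  = [3.47e-01, 8.70e-02, 6.83e-03, 4.69e-04, 2.99e-05]

for (n0, e0), (n1, e1) in zip(zip(dofs, err), zip(dofs[1:], err[1:])):
    # in 2D the mesh size scales like 1/sqrt(DOFS)
    eoc = math.log(e0 / e1) / math.log(math.sqrt(n1 / n0))
    print(f"DOFS {n1:7d}  L2 {e1:.2e}  EOC {eoc:.1f}")
\end{verbatim}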
\end{document}
\begin{document}
\maketitle
\begin{abstract} We give a criterion, in terms of a certain non-Archimedean metric on the set of polynomials, for whether given Eisenstein polynomials over a local field $K$ define the same extension of $K$. The criterion and its proof depend on ramification theory. \end{abstract}
\section{Introduction} \quad Let $K$ be a complete discrete valuation field, $k$ its residue field (which may be imperfect) of characteristic $p>0$, $v_K$ its valuation normalized by $v_K(K^{\times})=\mathbb{Z}$, $\mathcal{O}_K$ its valuation ring, $\Omega$ a fixed algebraic closure of $K$ and $\bar{K}$ the separable closure in $\Omega$. The valuation $v_K$ can be extended to $\Omega$ uniquely and the extension is also denoted by $v_K$. Let $L/K$ be a finite Galois extension with ramification index $e$ and inertia degree $1$. Denote by $\mathcal{O}_L$ the integral closure of $\mathcal{O}_K$ in $L$. Take a uniformizer $\pi_L$ of $\mathcal{O}_L$ and its minimal polynomial $f$ over $K$, which is an Eisenstein polynomial over $K$. Let $E_K^e$ be the set of all Eisenstein polynomials over $K$ of degree $e$. For two polynomials $g=\sum a_i X^i, h=\sum b_i X^i \in E_K^e$, we put \[ v_K(g,h):=\min_{0 \leq i \leq e-1} \{v_K(a_i - b_i) + \frac{i}{e} \}.\] Then the function $v_K(\cdot,\cdot)$ defines a non-Archimedean metric on $E_K^e$ (\emph{cf}.\ Lem.\ \ref{DistanceEisenstein}). For any $g \in E_K^e$, we put $M_g=K(\pi_g)$, where $\pi_g$ is a root of $g$. For any real number $m \geq 0$, we consider the following property:
\begin{quote} \begin{itemize} \item[$(\mathrm{T}^e_m)$] \emph{For any $g \in E_K^e$, if $v_K(f,g) \geq m$, then there exists a $K$-isomorphism $L \cong M_g$}. \end{itemize} \end{quote}
This property does not depend on the choice of $\pi_L$ (\emph{cf}.\ Prop.\ \ref{Tem=Pem}). Let $u_{L/K}$ be the largest upper numbering ramification break of $L/K$ in the sense of \cite{Fontaine85} (\emph{cf}.\ Sect.\ \ref{Ramification}). Then we can prove the following (Prop.\ \ref{KrasnerLemmaRenewB}):
\begin{proposition} The property $(\mathrm{T}^e_m)$ is true for $m > u_{L/K}$, and is not true for $m \leq u_{L/K} - e^{-1}$. \end{proposition}
This proposition is a consequence of results of Fontaine on a certain property $(\mathrm{P}_m)$ (\emph{cf}.\ Appendix). Since both $v_K(f,g)$ and $u_{L/K}$ are in $e^{-1} \mathbb{Z}$, the truth of $(\mathrm{T}^e_m)$ is constant for $u_{L/K}-e^{-1} < m \leq u_{L/K}$. Therefore, we want to know the truth of $(\mathrm{T}^e_m)$ for $m=u_{L/K}$. The property $(\mathrm{T}^e_m)$ behaves mysteriously at the break $m=u_{L/K}$. It depends on the ramification of $L/K$ and the residue field $k$. Our main theorem in this paper is the following (Cor.\ \ref{CorMainTheoremB}):
\noindent {\bf Theorem A}. \emph{If $L/K$ is tamely ramified, then $(\mathrm{T}^e_m)$ is true for $m=u_{L/K}$ if and only if the residue field $k$ has no cyclic extension of degree $e$. If $L/K$ is wildly ramified, then $(\mathrm{T}^e_m)$ is true for $m=u_{L/K}$ if and only if the residue field $k$ has no cyclic extension of degree $p$}.
\noindent We reduce the proof of this theorem to the abelian case by showing that $(\mathrm{T}^e_m)$ is equivalent to a certain property $(\mathrm{P}^e_m)$, which has such a reduction property (Prop.\ \ref{reduce}). To prove the abelian case, we show that, by using the properties of the ultrametric space $E_K^e$, the truth of the property $(\mathrm{T}^e_m)$ for $m=u_{L/K}$ is equivalent to the surjectivity of the norm map $N_{m-1}:U_L^{\psi(m-1)}/U_L^{\psi(m-1)+1} \to U_K^{m-1}/U_K^m$ between the graded quotients of the higher unit groups of $L$ and $K$, where $\psi$ is the Hasse-Herbrand function of $L/K$ (Prop.\ \ref{ProofAbelianCase}). Finally, we calculate its cokernel by using the well-known exact sequence (Prop.\ \ref{CokerNorm}) \[0 \to G_{\psi(m-1)}/G_{\psi(m-1)+1} \to U_L^{\psi(m-1)}/U_L^{\psi(m-1)+1} \overset{N_{m-1}}{\longrightarrow} U_K^{m-1}/U_K^m, \] where $G_{i}$ is the $i$th lower numbering ramification group in the sense of \cite{Serre79} (\emph{cf}.\ Rem.\ \ref{Numbering}). The vanishing of Coker($N_{m-1}$) is equivalent to the conditions in Theorem A.
Our results are useful for computations to construct explicit extensions over $K$ which satisfy given conditions. For example, such computations are required in \cite{Suzuki}, \cite{Yoshida11} and \cite{Yoshida12}. Indeed, the proof of Proposition 5.1,\ (1) in \cite{Suzuki} is based on our results. In \cite{Yoshida11} and \cite{Yoshida12}, our approaches are used to identify totally ramified extensions over $\mathbb{Q}_p$.
\noindent \emph{Plan of this paper}. In Section \ref{Ramification}, we give a review of the classical ramification theory. In Section \ref{Distance}, we recall a notion of ultrametric space on polynomials. In Section \ref{ThePropertyTem}, we define the property $(\mathrm{T}^e_m)$, which is the main object in this paper. In Section \ref{MainTheorem}, we state our main theorem and its consequences. In Section \ref{ProofMainTheorem}, we prove the main theorem. In the Appendix, we consider similar properties $(\mathrm{P}'_m)$ and $(\mathrm{P}_m)$. To remove confusion, we clarify the relation between the four properties which appear in this paper: \[ (\mathrm{P}_m) \iff (\mathrm{P}'_m) \Longrightarrow (\mathrm{P}^e_m) \iff (\mathrm{T}^e_m), \] where the last equivalence requires the condition $m>1$.
\noindent \emph{Notations}. We fix an algebraic closure $\Omega$ of $K$ and denote by $\bar{K}$ the separable closure of $K$ in $\Omega$. We denote by $\mathcal{O}_K$, $\mathfrak{m}_K$, $\pi_K$ and $v_K$, respectively, the valuation ring of $K$, its maximal ideal, a uniformizer of $K$ and the valuation on $K$ normalized by $v_K(K^{\times})=\mathbb{Z}$. We assume throughout that all algebraic extensions of $K$ under discussion are contained in $\Omega$. The valuation $v_K$ of $K$ extends to $\Omega$ uniquely and the extension is also denoted by $v_K$. If $M$ is an algebraic extension of $K$, then we denote by $\mathcal{O}_M$ the integral closure of $\mathcal{O}_K$ in $M$, and by $\mathfrak{m}_M$ the maximal ideal of $\mathcal{O}_M$. For any integer $n \geq 1$, we put $U_K^n=1+\mathfrak{m}_K^n$ and $U_L^n=1+\mathfrak{m}_L^n$. Put $U_K^0=\mathcal{O}_K^{\times}$ and $U_L^0=\mathcal{O}_L^{\times}$ by convention.
\noindent \emph{Conventions}. Throughout this paper, we assume that $L/K$ is an unferociously ramified\footnote{ We mean by an \emph{unferociously} ramified extension an algebraic extension whose residue field extension is separable. } extension. We do not consider the trivial case $L=K$.
\noindent \emph{Acknowledgments}.\ The author thanks Takashi Suzuki for communicating Proposition \ref{CokerNorm} to him. He also thanks Kazuya Kato for suggesting an improvement of the proof of Lemma \ref{hominjective}. Finally, he thanks Yuichiro Taguchi for pointing out Remark \ref{separability}.
\section{Ramification theory}\label{Ramification}
\quad In this section, we recall the classical ramification theory for Galois extensions of $K$. Our notations are based on \cite{Fontaine85}, Section 1. Let $L$ be a finite Galois extension of $K$ with Galois group $G$. There exists an element $\alpha \in \mathcal{O}_L$ such that $\mathcal{O}_L = \mathcal{O}_K[\alpha]$ (the existence of such an element is proved in \cite{Serre79}, Chap.\ III, Sect.\ 6, Prop.\ 12). The order function $\textbf{i}_{L/K}$ is defined on $G$ by
\[\textbf{i}_{L/K}(\sigma)=\inf_{a \in \mathcal{O}_L}
v_K(\sigma(a)-a) = v_K(\sigma \alpha - \alpha)\] for any $\sigma \in G$. Then the $i$th \emph{lower numbering ramification group $G_{(i)}$ of $G$} is defined for a real number $i \geq 0$ by
\[G_{(i)}=\{ \sigma \in G\ |\ \textbf{i}_{L/K}(\sigma) \geq i \}.\] The transition function $\widetilde{\varphi}_{L/K}:\mathbb{R}_{\geq 0} \to \mathbb{R}_{\geq 0}$ of $L/K$ is defined by
\[\widetilde{\varphi}_{L/K}(i)=\int_0^i \sharp G_{(t)} dt\] for any real number $i \geq 0$, where $\sharp G_{(t)}$ is the cardinality of $G_{(t)}$. This is a piecewise-linear, monotone increasing function, mapping the interval $[0,+\infty)$ onto itself. Its inverse function is denoted by $\widetilde{\psi}_{L/K}$. The following lemma is a fundamental property of these functions:
\begin{lemma}[\cite{Fontaine85}, Prop.\ 1.4]\label{Herbrand} Let $L$ be a finite Galois extension of $K$. Let $f$ be the minimal polynomial of $\alpha$ over $K$ and $\beta$ an element of $\Omega$. Put $i = \sup_{\sigma \in G} v_K(\sigma (\alpha) - \beta)$ and $u= v_K(f(\beta))$. Then we have \[ u = \widetilde{\varphi}_{L/K}(i),\quad \widetilde{\psi}_{L/K}(u)=i. \] \end{lemma}
We define the order function $\textbf{u}_{L/K}$ on $G$ by \[\textbf{u}_{L/K}(\sigma)=\widetilde{\varphi}_{L/K}(\textbf{i}_{L/K}(\sigma)) \] for any $\sigma \in G$. Then the $u$th \emph{upper numbering ramification group $G^{(u)}$ of $G$} is defined for a real number $u \geq 0$ by
\[G^{(u)}=\{\sigma \in G\ |\ \textbf{u}_{L/K}(\sigma) \geq u \}.\]
\begin{remark}\label{Numbering} Denote by $G_i$, $G^u$, $\varphi_{L/K}$ and $\psi_{L/K}$, respectively, the $i$th lower numbering ramification group, the $u$th upper numbering ramification group, the transition function and its inverse function in the sense of \cite{Serre79}, Chapter IV. The relation between our notations and those of \cite{Serre79} is the following: For any real number $i, u \geq -1$, we have \[ G_i = G_{((i+1)/e)},\quad G^u = G^{(u+1)} \] and \[ \varphi_{L/K}(i) = \widetilde{\varphi}_{L/K}((i+1)/e)-1,\quad \psi_{L/K}(u) = e \widetilde{\psi}_{L/K}(u+1)-1, \] where $e$ is the ramification index of $L/K$. \end{remark}
Denote the largest lower (resp.\ upper) numbering ramification break by
\[ i_{L/K}=\inf \{ i \in \mathbb{R}\ |\ G_{(i)}=1 \},\quad
u_{L/K}=\inf \{ u \in \mathbb{R}\ |\ G^{(u)}=1 \}. \]
The graded quotients of $(G^{(u)})_{u \geq 1}$ are abelian and killed by $p$ (\cite{Serre79}, Chap.\ IV, Sect.\ 2, Cor.\ 3). In particular, $G^{(u)}$ is abelian and killed by $p$ for $u=u_{L/K}$ if $L/K$ is wildly ramified.
We assume that $L/K$ is totally ramified. If there is no confusion, we write $\psi=\psi_{L/K}$ for simplicity.
\begin{proposition}[\cite{Serre79}, Chap.\ V, Prop.\ 8] For any integer $n \geq 0$, $N(U_L^{\psi(n)}) \subset U_K^n$ and $N(U_L^{\psi(n)+1}) \subset U_K^{n+1}$, where $N:=N_{L/K}$ is the norm map. \end{proposition}
This proposition allows us, by passage to the quotient, to define the homomorphisms \[ N_n:U_L^{\psi(n)}/U_L^{\psi(n)+1} \to U_K^n/U_K^{n+1} \quad (n \geq 0). \] The homomorphism $N_n$ is a non-constant polynomial map (\cite{Serre79}, Chap.\ V, Sect.\ 6, Prop.\ 9).
\begin{proposition}[\cite{Serre79}, Chap.\ V, Sect.\ 6, Prop.\ 9]\label{exact} For any integer $n \geq 0$, the following sequence is exact: \[0 \to G_{\psi(n)}/G_{\psi(n)+1} \onto{\theta_n} U_L^{\psi(n)}/U_L^{\psi(n)+1} \overset{N_n}{\longrightarrow} U_K^n/U_K^{n+1}, \] where $\theta_n$ is defined by $\sigma \mapsto \sigma(\pi_L)/\pi_L$. \end{proposition}
\begin{remark}\label{separability} The polynomial $N_n$ is separable since $\theta_n$ is injective. Hence if the residue field $k$ is separably closed, then we have \[ 0 \to G_{\psi(n)}/G_{\psi(n)+1} \onto{\theta_n} U_L^{\psi(n)}/U_L^{\psi(n)+1} \overset{N_n}{\longrightarrow} U_K^n/U_K^{n+1} \to 0. \] \end{remark}
\section{An ultrametric space of monic irreducible polynomials}\label{Distance}
\quad In this section, we define a non-Archimedean metric on the set, denoted by $P_K$, of all monic irreducible polynomials over $K$. For $f,g \in P_K$, we denote by $\mathrm{Res}(f,g)$ the resultant of $f$ and $g$. Then $v_K(\mathrm{Res}(\cdot,\cdot))$ defines a non-Archimedean metric on $P_K$ (see \cite{Krasner66} or \cite{Pauli01}, Sect.\ 4 for the proofs). It is well-known that \[ v_K(\mathrm{Res}(f,g)) = \sum_{i,j} v_K(\alpha_i - \beta_j) = \mathrm{deg}(g) v_K(f(\beta)) = \mathrm{deg}(f) v_K(g(\alpha)) , \] where $\alpha_i$ (resp.\ $\beta_j$) runs through all the roots of $f$ (resp.\ $g$) and $\alpha$ (resp.\ $\beta$) is a root of $f$ (resp.\ $g$). The third and fourth expressions are independent of the choice of roots by the irreducibility of the polynomials. Denote by $E_K^e$ the set of all Eisenstein polynomials over $K$ of degree $e$. For $f$, $g \in E_K^e$, we put \[ v_K(f,g)=e^{-1} v_K(\mathrm{Res}(f,g))=v_K(f(\pi_g)), \] where the last equality follows from the above equality. Then the function $v_K(\cdot,\cdot)$ also defines a non-Archimedean metric on $E_K^e$. There is a useful formula to calculate the metric on $E_K^e$:
\begin{lemma}[\cite{Krasner66}]\label{DistanceEisenstein} Let $f,g \in E_K^e$. Write $f(X)$ $=$ $X^e$ $+$ $a_{e-1}X^{e-1}$ $+$ $\cdots$ $+$ $a_0$ and $g(X)$ $=$ $X^e$ $+$ $b_{e-1}X^{e-1}$ $+$ $\cdots$ $+$ $b_0$. Then we have \[ v_K(f,g) = \min_{0 \leq i \leq e-1} \{ v_K(a_i - b_i) + \frac{i}{e} \}. \] \end{lemma}
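For instance, take $K=\mathbb{Q}_2$ and $e=2$, and consider the Eisenstein polynomials $f=X^2+2$ and $g=X^2+2X+2$. Then the formula of Lemma \ref{DistanceEisenstein} gives \[ v_K(f,g)=\min\Big\{ v_K(2-2)+\frac{0}{2},\ v_K(0-2)+\frac{1}{2} \Big\}=\min\Big\{ +\infty,\ \frac{3}{2} \Big\}=\frac{3}{2}. \]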
\section{The property $(\mathrm{T}^e_m)$}\label{ThePropertyTem}
In this section, we define the property $(\mathrm{T}^e_m)$ and determine its truth for any real number $m \geq 0$ outside a neighborhood of the break $u_{L/K}$. The proofs in this section essentially depend on \cite{Fontaine85}, Proposition 1.5. Let $L/K$ be a finite Galois totally ramified extension of degree $e$. Take a uniformizer $\pi_L$ of $L$. Let $f \in E_K^e$ be the minimal polynomial of $\pi_L$ over $K$. For any $g \in E_K^e$, we put $M_g=K(\pi_g)$, where $\pi_g$ is a root of $g$. For any real number $m \geq 0$, we consider the following property:
\begin{quote} \begin{itemize} \item[$(\mathrm{T}^e_m)$] For any $g \in E_K^e$, if $v_K(f,g) \geq m$, then there is a $K$-isomorphism $L \cong M_g$. \end{itemize} \end{quote}
Let $u_{L/K}$ be the upper numbering ramification break of $L/K$. Then we have the following:
\begin{proposition}\label{KrasnerLemmaRenewB} $\mathrm{(i)}$ If $m > u_{L/K}$, then $(\mathrm{T}^e_m)$ is true.
\noindent $\mathrm{(ii)}$ If $m \leq u_{L/K} - e^{-1}$, then $(\mathrm{T}^e_m)$ is not true. \end{proposition}
\begin{proof} (i) By assumption, we have $v_K(f,g)=v_K(f(\pi_g)) \geq m > u_{L/K}$. According to Lemma \ref{Herbrand}, we have \[ \sup_{\sigma \in G} v_K(\pi_g - \sigma (\pi_L)) = \widetilde{\psi}_{L/K}(v_K(f(\pi_g))) > \widetilde{\psi}_{L/K}(u_{L/K}) = i_{L/K}. \] Hence there exists $\sigma_0 \in G$ such that \[ v_K(\pi_g - \sigma_0 (\pi_L)) > i_{L/K} = \sup_{\sigma \not= 1} v_K(\sigma (\pi_L) -\pi_L) =\sup_{\sigma \not= 1} v_K(\sigma (\sigma_0 (\pi_L)) - \sigma_0 (\pi_L)). \] By Krasner's lemma, we have $L \cong K(\sigma_0 (\pi_L)) \subset K(\pi_g)=M_g$.
\noindent (ii) This follows from Lemma \ref{Example} below immediately. \end{proof}
\begin{lemma}\label{Example} Let $g \in E_K^e$. If $v_K(f,g) = u_{L/K} - e^{-1}$, then we have $L \not\cong M_g$. \end{lemma}
\begin{proof} By assumption, we have $v_K(f(\pi_g)) = v_K(f,g) = u_{L/K} - e^{-1}$. By Lemma \ref{Herbrand}, we have \[ \sup_{\sigma \in G} v_K(\pi_g - \sigma (\pi_L)) = \widetilde{\psi}_{L/K}(u_{L/K} - e^{-1}) = i_{L/K} - \frac{1}{ed}, \] where $d:=\sharp G^{(u_{L/K})}$. Multiplying the above equation by $e$, we have \[ v_L(\pi_g - \sigma_0 \pi_L) = e i_{L/K} - \frac{1}{d}, \] for some $\sigma_0 \in G$. If we suppose $L \cong M_g$, then $M_g = L$ since $L/K$ is Galois, so the LHS is an integer. However, the RHS is never an integer. This is a contradiction. Hence we have $L \not\cong M_g$. \end{proof}
\section{Main Theorem}\label{MainTheorem}
In this section, we state our main theorem and its consequences. Let $L/K$ be a finite totally ramified Galois extension of degree $e$.
\begin{theorem}\label{MainTheoremB} The property $(\mathrm{T}^e_m)$ for $m=u_{L/K}$ is equivalent to the condition $\mathrm{Hom}_{\mathrm{cont}}(G_k,G_{(i_{L/K})})=1$. \end{theorem}
\begin{corollary}\label{quasi-finite} Assume $k$ is a quasi-finite field. Then $(\mathrm{T}^e_m)$ is not true for $m=u_{L/K}$. \end{corollary}
\begin{proof} By assumption, we have \[ \mathrm{Hom}_{\mathrm{cont}}(G_k,G_{(i_{L/K})})=\mathrm{Hom}_{\mathrm{cont}}(\hat{\mathbb{Z}},G_{(i_{L/K})}) \cong G_{(i_{L/K})} \not= 1. \] We obtain the desired result by Theorem \ref{MainTheoremB}. \end{proof}
Theorem A is a consequence of the following:
\begin{corollary}\label{CorMainTheoremB} If $L/K$ is tamely ramified, then $(\mathrm{T}^e_m)$ for $m=u_{L/K}$ is equivalent to the condition $k^{\times}/(k^{\times})^e = 1$. If $L/K$ is wildly ramified, then $(\mathrm{T}^e_m)$ for $m=u_{L/K}$ is equivalent to the condition $k/\wp(k)=0$, where $\wp(X):=X^p - X$. \end{corollary}
\begin{proof} Assume $L/K$ is tamely ramified. Then the group $G=G_{(i_{L/K})}$ is isomorphic to a finite cyclic group $\mu_e$ of order $e$. Note that $k$ contains the $e$th roots of unity. Hence we have $\mathrm{Hom}_{\mathrm{cont}}(G_k,\mu_e)$ $\cong$ $k^{\times}/(k^{\times})^e$ by Kummer theory. The desired result follows from Theorem \ref{MainTheoremB}. Assume $L/K$ is wildly ramified. Then we have $G_{(i_{L/K})} \cong \oplus\ \mathbb{Z}/p \mathbb{Z}$. Therefore, we obtain $\mathrm{Hom}_{\mathrm{cont}}(G_k,G_{(i_{L/K})})$ $\cong$ $\oplus\ k/\wp(k)$ by Artin-Schreier theory. By Theorem \ref{MainTheoremB}, the proof is complete. \end{proof}
\section{Proof of the main theorem}\label{ProofMainTheorem}
\subsection{Reduction to the abelian case}\label{Reduction}
In this subsection, we reduce the proof of Theorem \ref{MainTheoremB} to the case where $L/K$ has only one ramification break so that $L/K$ is abelian. To complete this, we consider the property $(\mathrm{P}^e_m)$ below. Let $L/K$ be a finite Galois totally ramified extension of degree $e$.
\begin{quote} \begin{itemize} \item[$(\mathrm{P}^e_m)$] \emph{For any finite totally ramified extension $M/K$ of degree $e$, if there exists an $\mathcal{O}_K$-algebra homomorphism $\mathcal{O}_L \to \mathcal{O}_M/\mathfrak{a}_{M/K}^m$, then there exists a $K$-isomorphism $L \cong M$}. \end{itemize} \end{quote}
\begin{proposition}\label{Tem=Pem} The property $(\mathrm{T}^e_m)$ is equivalent to $(\mathrm{P}^e_m)$ for any real number $m>1$. \end{proposition}
\begin{proof} Let $L/K$ be a finite Galois totally ramified extension of degree $e$. Take a uniformizer $\pi_L$ of $L$ and $f \in E_K^e$ its minimal polynomial over $K$. Assume that $(\mathrm{T}^e_m)$ is true for $f$ and $m>1$. Then we show that $(\mathrm{P}^e_m)$ is also true for $L/K$ and $m$. Suppose there exists an $\mathcal{O}_K$-algebra homomorphism $\eta:\mathcal{O}_L \to \mathcal{O}_M/\mathfrak{a}_{M/K}^m$ for a totally ramified extension $M$ of $K$ of degree $e$. By Lemma \ref{hominjective} below, we have $v_K(\beta)=1/e$, where $\beta$ is a lift of $\eta(\pi_L)$ in $\mathcal{O}_M$. Hence $\beta$ is a uniformizer of $M$. Take the minimal polynomial $g \in E_K^e$ of $\beta$ over $K$. By the well-definedness of $\eta$, we have $v_K(f,g) = v_K(f(\beta)) \geq m$. Since the property $(\mathrm{T}^e_m)$ is true for $f$ and $m$, we have $L \cong M_g = M$.
Conversely, we assume that $(\mathrm{P}^e_m)$ is true for $L/K$ and $m>1$. Then we show that $(\mathrm{T}^e_m)$ is also true for $f$ and $m$. Suppose $v_K(f,g) \geq m$ for an element $g \in E_K^e$. Note that $v_K(f(\pi_g)) = v_K(f,g) \geq m$, where $\pi_g$ is a root of $g$. Then the map $\mathcal{O}_L \to \mathcal{O}_{M_g}/\mathfrak{a}_{M_g/K}^m$ defined by $\pi_L \mapsto \pi_g$ is an $\mathcal{O}_K$-algebra homomorphism. Since $(\mathrm{P}^e_m)$ is true for $L/K$ and $m$, we have $L \cong M_g$. \end{proof}
\begin{lemma}\label{hominjective} Let $L/K$ be a finite totally ramified extension of degree $e$, $\pi_L$ a uniformizer of $L$ and $m>1$ a real number. Assume there exists an $\mathcal{O}_K$-algebra homomorphism $\eta:\mathcal{O}_L \to \mathcal{O}_M/\mathfrak{a}_{M/K}^m$ for an algebraic extension $M$ of $K$. Then we have $v_K(\beta)=1/e$, where $\beta$ is any lift of $\eta(\pi_L)$. \end{lemma}
\begin{proof} Assume there exists an $\mathcal{O}_K$-algebra homomorphism $\eta:\mathcal{O}_L \to \mathcal{O}_M/\mathfrak{a}_{M/K}^m$. Put $u=\pi_L^e/\pi_K$. Then $u$ is a unit, so that $\eta(u)$ is also. Since $\beta^e \equiv \eta(u) \pi_K$ in $\mathcal{O}_M/\mathfrak{a}_{M/K}^m$ and $m > 1$, we have $v_K(\beta^e)=v_K(\pi_K)=1$. Hence $v_K(\beta)=1/e$. \end{proof}
\begin{proposition}\label{reduce} Let $L$ be a finite Galois totally ramified extension of $K$ of degree $e$ and $K'$ the fixed field of $L$ by $H:=G^{(u_{L/K})}$. Take a uniformizer $\pi_L$ of $L$. Let $f$ $\mathrm{(}$resp.\ $f'\mathrm{)}$ be the minimal polynomial of $\pi_L$ over $K$ $\mathrm{(}$resp.\ $K'\mathrm{)}$. Then the property $(\mathrm{T}^e_m)$ is true for $f$ and $m=u_{L/K}$ if and only if $(\mathrm{T}_m^{e'})$ is true for $f'$ and $m=u_{L/K'}$, where $e'$ is the ramification index of $L/K'$. \end{proposition}
\begin{proof} If $L/K$ is tamely ramified, then we have $K=K'$. Hence there is nothing to prove. Thus we may assume $L/K$ is wildly ramified, so that $m=u_{L/K} > 1$. By definition, $L/K'$ is also wildly ramified, so that $m=u_{L/K'}>1$. By Proposition \ref{Tem=Pem}, $(\mathrm{T}^e_m)$ for $f$ and $m=u_{L/K}$ is equivalent to $(\mathrm{P}^e_m)$ for $L/K$ and $m=u_{L/K}$. Similarly, $(\mathrm{T}_m^{e'})$ for $f'$ and $m=u_{L/K'}$ is equivalent to $(\mathrm{P}_m^{e'})$ for $L/K'$ and $m=u_{L/K'}$. Thus it is enough to prove that $(\mathrm{P}^e_m)$ for $L/K$ and $m=u_{L/K}$ is equivalent to $(\mathrm{P}_m^{e'})$ for $L/K'$ and $m=u_{L/K'}$. This is a direct consequence of the following lemma:
\begin{lemma}[\cite{Suzuki}, Lem.\ 2.2]\label{homequivalence} Let $L$ and $K'$ be as in Proposition \ref{reduce}. Let $M$ be an algebraic extension of $K$. The following conditions are equivalent:
\noindent $\mathrm{(i)}$ There exists an $\mathcal{O}_K$-algebra homomorphism $\mathcal{O}_L \to \mathcal{O}_M/\mathfrak{a}_{M/K}^{u_{L/K}}$.
\noindent $\mathrm{(ii)}$ The field $K'$ is contained in $M$, and there exists an $\mathcal{O}_{K'}$-algebra homomorphism $\mathcal{O}_L \to \mathcal{O}_M/\mathfrak{a}_{M/K'}^{u_{L/K'}}$. \end{lemma}
\end{proof}
\subsection{The proof of the abelian case}\label{ProofAbelCase}
In this subsection, we complete the proof of Theorem \ref{MainTheoremB}. It suffices to show the abelian case by Proposition \ref{reduce}. Then the break $u_{L/K}$ is an integer by the Hasse-Arf theorem (\cite{Serre79}, Chap.\ V, Sect.\ 7, Thm.\ 1). Therefore, it suffices to prove the integer break case:
\begin{proposition}\label{ProofAbelianCase} Assume $u_{L/K}$ is an integer. Then Theorem \ref{MainTheoremB} is true. \end{proposition} \begin{proof} Put $m=u_{L/K}$. Let $L/K$ be a finite Galois totally ramified extension of degree $e$ such that $u_{L/K}$ is an integer. Take a uniformizer $\pi_L$ of $L$ and its minimal polynomial $f$ over $K$. Let $g \in E_K^e$. Put $M_g=K(\pi_g)$, where $\pi_g$ is a root of $g$. We write $f=X^e + a_{e-1}X^{e-1} + \cdots + a_0$ and $g=X^e + b_{e-1}X^{e-1} + \cdots + b_0$. We want to show that the statement that $L=M_g$ for every $g \in E_K^e$ with $v_K(f,g) = m$ is equivalent to the condition $\mathrm{Hom}_{\mathrm{cont}}(G_k,G_{(i_{L/K})})=1$.
First, we prove that it suffices to consider $g$ of the following form by replacing $f$ with a suitable one: \[ g= g_u := X^e + a_{e-1}X^{e-1} + \cdots + a_1 X + u a_0, \] where $u$ is an element of $U_K^{m-1} \setminus U_K^m$. By Lemma \ref{DistanceEisenstein} and the assumption that $u_{L/K}$ is an integer, we have \[ v_K(f(\pi_g)) = \min_{0 \leq i \leq e-1} \{ v_K(a_i - b_i) + \frac{i}{e} \} = v_K(a_0 - b_0) = m. \] Thus we have $b_0 = u a_0$ for some $u \in U_K^{m-1} \setminus U_K^m$. Let $f_0 := X^e + b_{e-1} X^{e-1} + \cdots + b_1 X + a_0$. Note that $v_K(f,f_0) = \min_{1 \leq i \leq e-1} \{v_K(a_i - b_i) + i/e \} > m$. According to Proposition \ref{KrasnerLemmaRenewB}, the extension defined by $f_0$ coincides with $L$. By replacing $f$ with $f_0$, we reduce the problem to the desired situation.
Second, we show that $L=M_u$ for any $u \in U_K^{m-1} \setminus U_K^m$ if and only if the map $N_{m-1}$ is surjective, where $M_u/K$ is the extension defined by $g_u$ and $N_{m-1}$ is the norm map defined in Section \ref{Ramification}. Assume $L=M_u$ for any $u \in U_K^{m-1} \setminus U_K^m$. By Lemma \ref{Herbrand}, we have $v_K(\sigma_0 (\pi_L) - \pi_{M_u}) = i_{L/K}$ for some $\sigma_0 \in G$. Take $u':=\pi_{M_u}/\sigma_0 (\pi_{L}) \in L=M_u$. Note that $v_L(1-u') = ei_{L/K}-1$ by the equality $v_K(\sigma_0 (\pi_L) - u' \sigma_0 (\pi_L)) = i_{L/K}$, and $\psi(m-1) = ei_{L/K}-1$ by Remark \ref{Numbering}. Thus we have $u' \in U_L^{\psi(m-1)} \setminus U_L^{\psi(m-1)+1}$. Moreover, we have $(-1)^e u a_0 = N(\pi_{M_u}) = N(u' \sigma_0 (\pi_L)) = N(u') N(\sigma_0 (\pi_L)) = (-1)^e N(u') a_0$, so that we have $N(u') = u$. Conversely, we assume that the map $N_{m-1}$ is surjective. Then there exists an element $u'$ of $U_L^{\psi(m-1)} \setminus U_L^{\psi(m-1)+1}$ such that $N(u') = u$. Note that $\pi_L' := u' \pi_L$ is a uniformizer of $L$ and $v_K(\pi_L' - \pi_L) = i_{L/K}$. Let $f'$ be the minimal polynomial of $\pi_L'$ over $K$. By Lemma \ref{CalculationUnit} below, we have $v_K(\pi_L' - \pi_L) = \sup_{\sigma \in G} v_K(\pi_L' - \sigma (\pi_L))$. According to Lemma \ref{Herbrand}, we have \[ \begin{split} v_K(f,f') &= v_K(f(\pi_L')) \\ &=\widetilde{\varphi}_{L/K}(\sup_{\sigma \in G} v_K(\pi'_L - \sigma(\pi_L))) = \widetilde{\varphi}_{L/K}(v_K(\pi_L' - \pi_L ))\\ &= \widetilde{\varphi}_{L/K}(i_{L/K}) = u_{L/K}=m. \end{split} \] By the ultrametric inequality, we have \[ v_K(g_u,f') \geq \min \{ v_K(f,g_u),v_K(f,f') \} = m. \] The constant term of $f'$ is the same as that of $g_u$. Then we have $v_K(g_u,f') \not= m$ by Lemma \ref{DistanceEisenstein} and the fact that $m$ is an integer. Thus we have $v_K(g_u,f') > m$. According to Proposition \ref{KrasnerLemmaRenewB}, we have $L = M_u$. Therefore, we reduce the truth of $(\mathrm{T}^e_m)$ for $m=u_{L/K}$ to the surjectivity of the map $N_{m-1}$. Thus it is enough to prove $\mathrm{Coker}(N_{m-1}) \cong \mathrm{Hom}_{\mathrm{cont}}(G_k,G_{(i_{L/K})})$. This follows immediately from Lemma \ref{CokerNorm} below with $n=m-1$. \end{proof}
\begin{remark} In the proof of Theorem \ref{ProofAbelianCase}, we do not require the assumption that $L/K$ is abelian. We need only the assumption that $u_{L/K}$ is an integer. \end{remark}
\begin{lemma}\label{CalculationUnit} We have \[ v_K(\pi_L' - \sigma (\pi_L)) \begin{cases} < i_{L/K} & \text{$\sigma \in G \setminus G_{(i_{L/K})}$} \\ = i_{L/K} & \text{$\sigma \in G_{(i_{L/K})}$}. \end{cases} \] Therefore, we have $v_K(\pi_L' - \pi_L) = i_{L/K} = \sup_{\sigma \in G} v_K(\pi_L' - \sigma (\pi_L))$. \end{lemma}
\begin{proof} Suppose $\sigma \not\in G_{(i_{L/K})}$. Then we have $v_K(\pi_L - \sigma (\pi_L)) < i_{L/K}$. Hence we have \[ \begin{split} v_K(\pi_L' - \sigma (\pi_L)) & = v_K(\pi_L' - \pi_L + \pi_L - \sigma (\pi_L)) \\ & = \min \{ v_K(\pi_L' - \pi_L),v_K(\pi_L - \sigma (\pi_L)) \} \\ & = \min \{i_{L/K},v_K(\pi_L - \sigma (\pi_L)) \} < i_{L/K}. \end{split} \] Next, we consider the case $\sigma \in G_{(i_{L/K})}$. Then we have $v_K(\pi_L - \sigma (\pi_L)) = i_{L/K}$ so that $\pi_L/\sigma (\pi_L) \in U_L^{ei_{L/K}-1}$. Note that \[ v_K(\pi_L' - \sigma (\pi_L)) = v_K(u' \frac{\pi_L}{\sigma (\pi_L)} - 1) + \frac{1}{e}. \] To prove that RHS is equal to $i_{L/K}$, we show $u' \cdot \pi_L/\sigma (\pi_L) \in U_L^{ei_{L/K}-1} \setminus U_L^{ei_{L/K}}$. This is equivalent to $u' \cdot \pi_L/\sigma (\pi_L) \not\equiv 1$ in $U_L^{ei_{L/K}-1}/U_L^{ei_{L/K}}$. To prove this, it suffices to show $N(u' \cdot \pi_L/\sigma (\pi_L)) \not\equiv 1$ in $U_K^{m-1}/U_K^m$ by considering the norm map $N_{m-1}:U_L^{\psi(m-1)}/U_L^{\psi(m-1)+1} \to U_K^{m-1}/U_K^m$ with $\psi(m-1)=ei_{L/K}-1$. Since $N(u') \not\equiv 1 \pmod{U_K^m}$ and $N(\pi_L/\sigma (\pi_L))=1$, we have $N(u' \cdot \pi_L/\sigma (\pi_L)) = N(u') N(\pi_L/\sigma (\pi_L)) = N(u') \not\equiv 1 \pmod{U_K^m}$. \end{proof}
\begin{lemma}\label{CokerNorm} Let $L$ be a totally ramified Galois extension of $K$ and $n$ be an integer $\geq 0$. Then we have \[ \mathrm{Coker}(N_n) \cong \mathrm{Hom}_{\mathrm{cont}}(G_k,G_{\psi(n)}/G_{\psi(n)+1}). \] \end{lemma}
\begin{proof} Let $K_0$ (resp.\ $L_0$) be the completion of the maximal unramified extension of $K$ (resp.\ $L$). Apply Proposition \ref{exact} and Remark \ref{separability} to $L_0/K_0$. Then the sequence \[ 0 \to G_{\psi(n)}/G_{\psi(n)+1} \to U_{L_0}^{\psi(n)}/U_{L_0}^{\psi(n)+1} \to U_{K_0}^{n}/U_{K_0}^{n+1} \to 0 \] is exact. The Galois group $G_k$ acts on $L_0$ and $K_0$ continuously. Define the action of $G_k$ on $G_{\psi(n)}/G_{\psi(n)+1}$ to be the trivial action. Since $L/K$ is totally ramified, the action of $G$ on $L_0$ is compatible with the $G_k$-action. Thus the above sequence is exact as continuous $G_k$-modules. Writing out the corresponding exact cohomology sequence, and taking into account that $H^1_{\mathrm{cont}}(G_k,\bar{k})=0$, we obtain \[ k \to k \to H^1_{\mathrm{cont}}(G_k,G_{\psi(n)}/G_{\psi(n)+1}) \to 0. \] Consequently, we have $\mathrm{Coker}(N_{n})$ $\cong$ $H^1_{\mathrm{cont}}(G_k,G_{\psi(n)}/G_{\psi(n)+1})$. Since $G_k$ acts on $G_{\psi(n)}/G_{\psi(n)+1}$ trivially, this is equal to $\mathrm{Hom}_{\mathrm{cont}}(G_k,G_{\psi(n)}/G_{\psi(n)+1})$. Hence the result follows. \end{proof}
\begin{remark} This lemma is a generalization of \cite{Serre79}, Chapter XV, Section 2, Proposition 3. \end{remark}
\section{Appendix}\label{ThePropertiesPmTem}
\quad Throughout this appendix, we assume that $k$ is perfect. We consider a property $(\mathrm{P}'_m)$, which is similar to $(\mathrm{T}^e_m)$. We completely determine the truth of $(\mathrm{P}'_m)$ by showing that $(\mathrm{P}'_m)$ is equivalent to $(\mathrm{P}_m)$. Let $L$ be a finite Galois extension of $K$. Take an element $\alpha$ of $\mathcal{O}_L$ such that $\mathcal{O}_L=\mathcal{O}_K[\alpha]$. Let $f$ be the minimal polynomial of $\alpha$ over $K$ and $P_K$ the set of all monic irreducible polynomials over $K$. For any $g \in P_K$, we put $M_g=K(\beta)$, where $\beta$ is a root of $g$. Consider the following property for any real number $m \geq 0$:
\begin{quote} \begin{itemize} \item[$(\mathrm{P}'_m)$] For any $g \in P_K$, if $v_K(\mathrm{Res}(f,g)) \geq \mathrm{deg}(g) m$, then there exists a $K$-embedding $L \hookrightarrow M_g$. \end{itemize} \end{quote}
If $f$ and $g$ are contained in $E_K^e$, then $v_K(\mathrm{Res}(f,g)) \geq \mathrm{deg}(g) m$ is equivalent to $v_K(f,g) \geq m$. Hence the property $(\mathrm{P}'_m)$ is stronger than $(\mathrm{T}^e_m)$.
For a finite Galois extension $L/K$ and real numbers $m \geq 0$, we consider the following property:
\begin{quote} \begin{itemize} \item[$(\mathrm{P}_m)$] \emph{For any algebraic extension} $M/K$,\ \emph{if there exists an} $\mathcal{O}_K$-\emph{algebra homomorphism} $\mathcal{O}_L \to \mathcal{O}_M/\mathfrak{a}_{M/K}^m$,\ \emph{then there exists a} $K$-\emph{embedding} $L \hookrightarrow M$. \end{itemize} \end{quote}
Fontaine proved the following:
\begin{proposition}[\cite{Fontaine85}, Prop.\ 1.5]\label{Fontaine} Let $L$ be a finite Galois extension of $K$ and $e$ the ramification index of $L/K$. Then there are following relations:
\noindent $\mathrm{(i)}$ If $m>u_{L/K}$, then $(\mathrm{P}_m)$ is true.
\noindent $\mathrm{(ii)}$ If $m \leq u_{L/K}-e^{-1}$, then $(\mathrm{P}_m)$ is not true. \end{proposition}
The author proved the following:
\begin{proposition}[\cite{Yoshida09}, Prop.\ 3.4]\label{Yoshida} Let $L$ be a finite Galois extension of $K$. If $m < u_{L/K}$, then $(\mathrm{P}_m)$ is not true. \end{proposition}
In analogy with our main theorem, the truth of $(\mathrm{P}_m)$ at the ramification break depends on the ramification of $L/K$ and the residue field $k$:
\begin{proposition}[\cite{Suzuki}, Thm.\ 1.1]\label{PmBreak} Let $L$ be a finite Galois wildly ramified extension of $K$. Then the property $(\mathrm{P}_m)$ is true for $m=u_{L/K}$ if and only if $k$ has no Galois extension whose degree is divisible by $p$. \end{proposition}
\begin{remark} If $L/K$ is at most tamely ramified, then $(\mathrm{P}_m)$ is not true for $m=u_{L/K}$. This is shown in the proof of Proposition 3.3, \cite{Yoshida09}. \end{remark}
In fact, we have the following:
\begin{proposition}\label{Equivalence} For any real number $m \geq 0$, $(\mathrm{P}'_m)$ is equivalent to $(\mathrm{P}_m)$. \end{proposition}
\begin{proof} Let $L$ be a finite Galois extension of $K$. Choose an element $\alpha \in \mathcal{O}_L$ such that $\mathcal{O}_L=\mathcal{O}_K[\alpha]$. Let $f$ be the minimal polynomial of $\alpha$ over $K$. Assume that $(\mathrm{P}'_m)$ is true for $f$ and $m$. Then we show that $(\mathrm{P}_m)$ is also true for $L/K$ and $m$. Suppose there exists an $\mathcal{O}_K$-algebra homomorphism $\eta:\mathcal{O}_L \to \mathcal{O}_M/\mathfrak{a}_{M/K}^m$ for an algebraic extension $M$ of $K$. Let $\beta$ be a lift of $\eta(\alpha)$ in $\mathcal{O}_M$. Then we have $v_K(f(\beta)) \geq m$. Let $g$ be the minimal polynomial of $\beta$ over $K$. Then we have $v_K(\mathrm{Res}(f,g)) = \mathrm{deg}(g) v_K(f(\beta)) \geq \mathrm{deg}(g)m$. By the property $(\mathrm{P}'_m)$, there exists a $K$-embedding $L \hookrightarrow K(\beta) \subset M$. Conversely, assume that $(\mathrm{P}_m)$ is true for $L/K$ and $m$. Then we show that $(\mathrm{P}'_m)$ is also true for $L/K$ and $m$. Suppose $v_K(\mathrm{Res}(f,g)) \geq \mathrm{deg}(g)m$ for an arbitrary polynomial $g \in P_K$. Then we have $v_K(f(\beta)) \geq m$, where $\beta$ is a root of $g$. Put $M=K(\beta)$. The map $\mathcal{O}_L \to \mathcal{O}_M/\mathfrak{a}_{M/K}^m$ defined by $\alpha \mapsto \beta$ is an $\mathcal{O}_K$-algebra homomorphism. By the property $(\mathrm{P}_m)$, there exists a $K$-embedding $L \hookrightarrow M$. \end{proof}
Therefore, we have the following two consequences from Propositions \ref{Fontaine}, \ref{Yoshida} and \ref{PmBreak}:
\begin{corollary}\label{KnownResults} Let $L$ be a finite Galois extension of $K$. Then the property $(\mathrm{P}'_m)$ is true for $m > u_{L/K}$, and is not true for $m < u_{L/K}$. \end{corollary}
\begin{corollary}\label{PPmBreak} Let $L$ be a finite Galois wildly ramified extension of $K$. Then the property $(\mathrm{P}'_m)$ is true for $m=u_{L/K}$ if and only if $k$ has no Galois extension whose degree is divisible by $p$. \end{corollary}
\end{document} | arXiv |
Evaluation of maxout activations in deep learning across several big data domains
Gabriel Castaneda (ORCID: orcid.org/0000-0002-4307-3045),
Paul Morris &
Taghi M. Khoshgoftaar
This study investigates the effectiveness of multiple maxout activation function variants on 18 datasets using Convolutional Neural Networks. A network with maxout activation has a higher number of trainable parameters compared to networks with traditional activation functions. However, it is not clear if the activation function itself or the increase in the number of trainable parameters is responsible for yielding the best performance for different entity recognition tasks. This paper investigates whether an increase in the number of convolutional filters used with traditional activation functions performs equal to or better than maxout networks. Our experiments compare the Rectified Linear Unit, Leaky Rectified Linear Unit, Scaled Exponential Linear Unit, and Hyperbolic Tangent activations to four maxout function variants. We observe that maxout networks train more slowly than networks with traditional activation functions, e.g. the Rectified Linear Unit. In addition, we found that on average, across all datasets, the Rectified Linear Unit activation function performs better than any maxout activation when the number of convolutional filters is increased. Furthermore, adding more filters enhances the classification accuracy of the Rectified Linear Unit networks, without adversely affecting their advantage over maxout activations with respect to network-training speed.
Deep networks have become very useful for many computer vision applications. Deep neural networks (DNNs) are models composed of multiple layers that transform input data to outputs while learning increasingly higher-level features. Deep learning relies on learning several levels of hierarchical representations for data. Due to their hierarchical structure, the parameters of a DNN can generally be tuned to approximate target functions more effectively than parameters in a shallow model [1]. Today, the typical number of network layers used in deep learning range from five to more than a thousand [2].
Activation functions are used in neural networks (NN) to transform the weighted sum of inputs and biases, which is used to decide whether a neuron fires or not [3]. Commonly used activation functions (nonlinearities) include sigmoid, Hyperbolic Tangent (tanh) and Rectified Linear Unit (ReLU) [4]. The use of ReLU was a breakthrough that enabled the fully supervised training of state-of-the-art DNNs [5]. Compared to traditional activation functions, like the logistic sigmoid units or tanh units, which are anti-symmetric, ReLU is one-sided. This property encourages the hidden units to be sparse, and thus more biologically plausible [6]. Because of its simplicity and effectiveness, ReLU became the default activation function used across the deep learning community [7]. A Convolutional Neural Network (CNN) using ReLU as its activation function classified 1.2 million images of the ImageNet dataset into 1000 classes with a top-1 error rate of 37.5% [5]. The deep network implemented by Severyn and Moschitti [8] using ReLU as the activation function demonstrated state-of-the-art performance at both the phrase-level and message-level for Twitter sentiment analysis. At SemEval-2015 (International Workshop on Semantic Evaluation), Severyn and Moschitti's models ranked first in the phrase-level subtask A and second in the message-level subtask B.
The ReLU function saturates when inputs are negative. These saturation regions cause gradient diffusion and block gradients from propagating to deeper layers [9]. Furthermore, ReLUs can die out during learning, consequently blocking error gradients and learning nothing [10]. For these reasons, different activation functions have been proposed for DNN training. There is a lack of consensus on how to select a good activation function for a DNN, and a specific function may not be suitable for all applications. Since an activation function is generally applied to the outputs of all neurons, its computational complexity will contribute heavily to the overall execution time [11]. Most research on activation functions focuses on the complexity of the nonlinearity that an activation function can provide [12], or on how fast it can be executed [13], but often neglects the impact on different classification tasks.
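For reference, the traditional activations compared in this study can be written in a few lines of NumPy. The sketch below is illustrative only: the leaky slope and the SELU constants shown are the commonly published defaults, not necessarily the exact hyperparameters used in our experiments.

    import numpy as np

    def relu(x):
        return np.maximum(0.0, x)

    def leaky_relu(x, alpha=0.01):          # alpha: assumed small negative slope
        return np.where(x > 0, x, alpha * x)

    def selu(x, lam=1.0507, alpha=1.6733):  # commonly published SELU constants
        return lam * np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

    def tanh(x):
        return np.tanh(x)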
The maxout nonlinearity [14] selects the maximum value within a group of different outputs (feature maps) and is usually combined with dropout [15], which is widely used to regularize deep networks to prevent overfitting. In NNs, the maxout activation takes the maximum value of the pre-activations. Figure 1 shows two pre-activations per maxout unit; each of these pre-activations has a different set of weights from the inputs, denoted as "V". Each hidden unit takes the maximum value over the j units of a group: \(h_{i} = max_{j}\;Z_{ij}\), where Z is the linear pre-activation value, i is the number of maxout units and j the number of pre-activation values. Maxout chooses the maximum of n input features to produce each output feature in a network; the simplest case of maxout is the Max-Feature-Map (MFM) [16], where n = 2. The MFM maxout computes the function \(\text{max} (w_{1}^{T} x + b_{1} , w_{2}^{T} x + b_{2} )\), and both the ReLU and leaky ReLU are special cases of this form. When specific weight values w1, b1, w2 and b2 of the MFM inputs are learned, MFM can emulate ReLU and other rectified linear variants. The maxout unit is helpful for tackling the problem of vanishing gradients because the gradient can flow through every maxout unit [17].
Two maxout units
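As a concrete illustration of the unit in Fig. 1, a dense maxout layer with two pre-activations per unit can be sketched in NumPy as follows; the tensor shapes and names are ours and are not tied to any specific network evaluated in this study.

    import numpy as np

    def maxout_dense(x, W, b):
        # x: (batch, d_in); W: (d_in, n_units, k); b: (n_units, k)
        # z[:, i, j] is the j-th linear pre-activation of maxout unit i
        z = np.einsum('bd,duk->buk', x, W) + b
        return z.max(axis=-1)               # h_i = max_j z_ij

    rng = np.random.default_rng(0)
    x = rng.standard_normal((8, 16))
    W = rng.standard_normal((16, 4, 2))     # k = 2 pre-activations (MFM case)
    b = np.zeros((4, 2))
    h = maxout_dense(x, W, b)               # shape (8, 4)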
Figure 2 shows the maxout unit in a CNN architecture, where x is a 10 × 10 pixel image. The maxout unit takes the maximum value of the convolution operations y1 and y2. The CNN learns the weights and biases in the filters F1 and F2. Dropout randomly drops units or connections to prevent the network from overfitting. It has been shown to improve classification accuracy in some computer vision tasks [18]. Park and Kwak [19] observed that dropout in CNNs regularizes the networks by adding noise to the output feature maps of each layer, yielding robustness to variations of images. In 2015, the Maxout network In Network (MIN) [17] method achieved state-of-the-art or comparable performance on the Modified National Institute of Standards and Technology (MNIST) [20], the Canadian Institute for Advanced Research (CIFAR-10), CIFAR-100 [21], and Street View House Numbers (SVHN) [22] datasets. Maxout layers were also applied in sentiment analysis [23], with a hybrid architecture consisting of a recurrent neural network stacked on top of a CNN. This approach outperforms a standard convolutional deep neural architecture as well as a recurrent network architecture and performs competitively compared to other methods on two datasets of annotated customer reviews.
A CNN maxout unit
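Following Fig. 2, the MFM case for convolutional layers simply takes the element-wise maximum of two learned feature maps. The PyTorch sketch below is a minimal illustration with arbitrary layer sizes; it is not the architecture evaluated in our experiments.

    import torch
    import torch.nn as nn

    class MFMConv2d(nn.Module):
        """Max-Feature-Map: element-wise max of two convolution outputs."""
        def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
            super().__init__()
            self.conv1 = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)
            self.conv2 = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)

        def forward(self, x):
            return torch.maximum(self.conv1(x), self.conv2(x))

    x = torch.randn(1, 1, 10, 10)       # a 10 x 10 input image, as in Fig. 2
    y = MFMConv2d(1, 8)(x)              # shape (1, 8, 10, 10)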
CNNs were originally intended for computer vision tasks, being inspired by connections in the visual cortex; however, they have been successfully applied to several DNN acoustic models [24,25,26] and natural language processing tasks [27, 28]. CNNs are designed to process input features which show local spatial correlations. They can also handle the local translational variance of their input, which makes the network more tolerant to slight position shifts [29].
The benefit of frequency-domain convolution for acoustic models is that the CNN's tolerance to input translations makes the models robust to speaker and speaking-style variations. This is confirmed by studies that experimented with frequency-domain convolution, where CNNs consistently outperform fully-connected DNNs on the same task [30,31,32]. When CNNs are used for sentiment analysis, the first layer of the network converts words in sentences to word vectors by table lookup. The word vectors are either trained as part of CNN training, or fixed to those learned by some other method (e.g., word2vec [33]) from an additional large corpus [34]. When working with sequences of words, convolutions allow the extraction of local features around each word.
Most of the comparisons between maxout and other activation functions only report a single performance metric, ignore network size, and only report accuracy on a single dataset, with no training time or memory use analysis. Furthermore, when compared with other activation functions, it is unclear whether marginal performance gains with maxout are due to the activation function or an increase in the number of required trainable parameters. In this work, we evaluated multiple activation functions applied to multiple domains:
Visual pattern recognition
Facial verification
Medical fraud detection
Sound recognition
Speech commands recognition.
To the best of our knowledge, this is the first study to evaluate multiple maxout variants and standard activations for multiple domains with significance testing. The main contributions herein can be summarized as follows:
Evaluate four maxout functions and compare them to popular activation functions like tanh, ReLU, LReLU and SeLU.
Compare training times for various activation functions.
Evaluate whether marginal performance gains with maxout are due to the activation function or simply an increase in the number of trainable parameters versus ReLU networks.
Determine whether maxout methods converge faster and if there is a significant accuracy performance difference between these methods and the standard activations.
The remainder of this paper is organized as follows. The "Related work" section presents related work on activation function evaluation on multiple classification domains. The "Materials and methods" section introduces the activation functions, datasets, and the experimental methodology employed in our experiments. Results and analysis are provided in "Experimental results and discussion" section. Conclusions with some directions for future work are provided in the "Conclusion" section.
Maxout is employed as part of deep learning architectures and has been tested against the MNIST, CIFAR-10 and CIFAR-100 benchmark datasets, but it has not been compared against other activation functions using the same deep network architecture and hyperparameters. It is not clear if maxout enhances the overall accuracy on the tested datasets, or if any other activation function has the same effect. There are a few comparisons between maxout and traditional activation functions. Most of the comparisons do not report the details of their network to indicate whether an increased number of filters was accounted for in the experiment.
Most prior work focuses on proposing new activation functions, but few studies have compared different activation functions. Xu et al. [35] investigated the performance of ReLU, leaky ReLU [36], parametric ReLU [37], and the proposed randomized leaky ReLU (RReLU) on three small datasets. In RReLU, the slopes of the negative parts are randomized in a given range during training, and then fixed during testing. The original ReLU was outperformed by the three types of modified leaky ReLU. Mishkin et al. [38] evaluated the influence of activation functions (including ReLU, maxout, and tanh), pooling variants, network width, classifier design, image pre-processing, and learning parameters on the ImageNet dataset. The experiments confirmed the Swietojanski et al. [39] hypothesis about maxout's power in the final layers, as it showed the best performance among non-linear activation functions with speed close to ReLU. The bounded ReLU (brelu), bounded leaky ReLU (blrelu), and bounded bi-firing (bbifire) were presented in [11], and evaluated on classification of basic handwritten digits in the MNIST database, complex handwritten digits from the mnist-rot-bg-img database, and facial recognition using the AR Purdue database. Experimental results for all three datasets demonstrate the superiority of the proposed activation functions, with significant accuracy improvements of up to 17.31%, 9.19%, and 74.99% on the MNIST, mnist-rot-bg-img, and AR Purdue databases, respectively. In [7], automated search techniques were used to discover novel activation functions. The activation function that tends to work better than ReLU on deeper models across three datasets was \(h(x) = x \cdot sigmoid(\beta x)\), named Swish, where \(\beta\) is either a constant or a trainable parameter. Only scalar activation functions were used in that study; this limitation would not allow the authors to find or evaluate the maxout activation.
Chang and Chen [17] presented the MIN network. It recorded 0.24%, 6.75% and 28.86% error rates on MNIST, CIFAR-10, and CIFAR-100, respectively. These error rates are the lowest compared to Network in Network (NIN) [40] and other NIN-based networks such as Maxout Network [14] or the Maxout Network in Maxout Network (MIM) [41]. Oyedotun et al. [42] proposed a deep network with maxout units and elastic net regularization. It reached error rates of 0.36% on the MNIST dataset and 2.19% on the USPS dataset, surpassing the human performance error rate of 2.5% and all previously reported results. In [43], the Rectified Hyperbolic Secant (ReSech) activation function was proposed and evaluated on the MNIST, CIFAR-10, CIFAR-100, and Pang and Lee's movie review datasets. The results suggest that ReSech units are expected to produce similar or better results compared to ReLU units for various sentiment prediction tasks. The maxout network accuracy was only compared on the CIFAR-10 and MNIST datasets. Goodfellow et al. [44] investigated the catastrophic forgetting problem, testing four activation functions, including maxout, on MNIST and Amazon using two hidden layers followed by a softmax classification layer. The catastrophic forgetting problem occurs when a machine learning model trained on one task forgets how to perform it after being trained on a second task. Their experiments showed that training with dropout is beneficial, at least on the relatively small datasets used in the paper. Also, the choice of activation function should always be cross-validated, if computationally feasible. Maxout in combination with dropout showed the lowest test errors in all experiments.
The maxout activation is effective in speech recognition tasks [45], but it has not been widely tested on sentiment analysis. Jebbara and Cimiano [23] used the maxout activation in the CNN portion of a hybrid architecture consisting of a recurrent NN stacked on top of a CNN. A maxout layer was also implemented in the Siamese bidirectional Long Short-Term Memory (LSTM) network proposed by Baziotis et al. [46]. The maxout layer was selected because it amplifies the effects of dropout. The output of the maxout layer is connected to a softmax layer, which outputs a probability distribution over all classes.
Phoneme recognition tests on the Texas Instruments Massachusetts Institute of Technology (TIMIT) database show that switching from rectifier units to maxout units decreases the error rate for each network configuration studied and yields relative error rate reductions of between 2 and 6% [24]. Zhang et al. [45] introduced two new types of generalized maxout units: the p-norm and soft-maxout. In experiments on Large Vocabulary Continuous Speech Recognition (LVCSR) tasks in various languages, the p-norm units performed consistently better than various versions of maxout, tanh, and ReLU. In addition, Swietojanski et al. [39] investigated maxout networks for speech recognition. Through experiments on voice search and short message dictation datasets, it was found that maxout networks converge around three times faster during training and offer lower or comparable word error rates on several tasks when compared to networks with logistic nonlinearity. Zhang et al. [47] presented a CNN-based end-to-end speech recognition framework in which the maxout unit recorded the lowest error rate compared to ReLU and parametric ReLU.
Using the Public Use File (PUF) data from CMS, Branting et al. [48] proposed graph analysis as a framework for healthcare fraud risk assessment. Their algorithm was evaluated on the Part B (2012–2014), Part D (2013), and List of Excluded Individuals/Entities (LEIE) datasets. Using tenfold cross-validation on the full 12,000-member and 11-feature dataset, the mean f-measure was 0.919 and the mean Receiver Operating Characteristic (ROC) area was 0.960. Sadiq et al. [49] used the 2014 CMS Part B, Part D, and DMEPOS datasets (restricted to provider claims from Florida) to find anomalies that possibly point to fraudulent behavior. A novel framework based on the Patient Rule Induction Method (PRIM) was presented to detect abnormal physician behaviors. The experimental results show that their framework can effectively shrink the target dataset and deduce a potential suspect subset of physicians who submit several anomalous claims and probably qualify as fraudsters. PRIM uses the attribute sub-spaces and their correlations to characterize the low conditional probability region, which provides a deeper understanding of how certain attributes are key predictors in identifying fraud. Herland et al. [50] focused on the detection of Medicare fraud using the CMS Part B, Part D, and DMEPOS datasets. A fourth dataset was created by combining the three primary datasets. Based on the area under the ROC curve performance metric, their results show that the combined dataset with the Logistic Regression (LR) learner yielded the best overall score at 0.816, closely followed by the Part B dataset with LR at 0.805.
Our study evaluates 11 activation functions using deep CNN and NN architectures. In contrast to the papers cited in this section, we evaluate, with significance testing, whether an increase in the number of filters in ReLU enhances the overall accuracy. Furthermore, we compare the training and convergence time for all the evaluated activation functions.
In this section, we introduce the activation functions, datasets, and the empirical methodology employed in this study. In "Activation functions" section, we introduce each evaluated activation function. In "Datasets" section, we describe the datasets employed in our experiments. In the last "Empirical methodology" section, we present our empirical methodology.
Activation functions
Hyperbolic tangent
The hyperbolic tangent (tanh) function is the ratio between the hyperbolic sine and cosine functions of x (Fig. 3):
$$h\left( x \right) = \tanh \left( x \right) = \frac{\sinh \left( x \right)}{\cosh \left( x \right)} = \frac{{e^{x} - e^{ - x} }}{{e^{x} + e^{ - x} }} = \frac{{1 - e^{ - 2x} }}{{1 + e^{ - 2x} }}$$
Rectified units
Rectified linear unit (ReLU) [4] is defined as:
$$h\left( x \right) = { \text{max} }\left( {0,x} \right)$$
where \(x\) is the input and \(h\left( x \right)\) is the output. The ReLU activation is the identity for positive arguments and zero otherwise (Fig. 4).
Rectified linear unit
Leaky ReLU (LReLU) [36] assigns a slope to its negative input. It is defined as:
$$h\left( x \right) = \text{min} \left( {0,ax} \right) + { \text{max} }\left( {0,x} \right)$$
where \(a\) ∈ (0, 1) is a predefined slope (Fig. 5).
Leaky rectified linear unit (α = 0.1)
The scaled exponential linear unit (SeLU) [51] is given by:
$$h\left( x \right) = \lambda \begin{cases} x & \text{if}\; x > 0 \\ \alpha e^{x} - \alpha & \text{if}\; x \le 0 \end{cases}$$
where x is used to indicate the input to the activation function. Klambauer et al. [51] justify why \(\alpha\) and \(\lambda\) must have the following values:
$$\begin{aligned} \alpha & = 1.6732632423543772848170429916717 \\ \lambda & = 1.0507009873554804934193349852946 \\ \end{aligned}$$
to ensure that the neuron activations converge automatically toward an average of 0 and a variance of 1 (Fig. 6).
Scaled exponential linear unit
Maxout units
The maxout unit takes as input the outputs of multiple linear functions and returns the largest:
$$h\left( {x_{i} } \right) = \mathop {\max }\limits_{k \in \left\{ {1, \ldots ,K} \right\}} \left( {w^{k} \cdot x_{i} + b^{k} } \right)$$
In theory, maxout can approximate any convex function [14], but the large number of extra parameters introduced by the \(k\) linear functions of each hidden maxout unit results in a large memory cost and a considerable increase in training time, which affects the training efficiency of very deep CNNs. For our comparisons, we use four variants of the maxout activation: an activation with \(k\) = 2 input neurons for every output (maxout 2-1), an activation with \(k\) = 3 input neurons for every output (maxout 3-1), an activation with \(k\) = 6 input neurons for every output (maxout 6-1), and a variant of maxout with \(k\) = 3 where the two maximum neurons are selected (maxout 3-2). These maxout variants have proven to be effective in classification tasks such as image classification [44], facial recognition [16], and speech recognition [18]. The maxout unit in Fig. 7 mimics a quadratic activation function. The blue quadratic function is not created by the maxout unit; it is only pictured to show what the maxout activation function can approximate when using five linear nodes.
Maxout (k = 5)
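As an illustration of how such a unit can be realized on top of a convolutional layer, the sketch below groups the feature maps produced by a widened convolution and takes an element-wise maximum over each group of \(k\). The layer names and grouping convention are our own illustrative assumptions, not the exact implementation used in our experiments.

```python
import tensorflow as tf

def maxout(inputs, k):
    """Maxout over groups of k feature maps (channels-last).

    The preceding layer is expected to output k * n_units feature maps;
    the result has n_units feature maps, each the element-wise maximum
    over its group of k.
    """
    n_channels = inputs.shape[-1]
    assert n_channels % k == 0, "channel count must be divisible by k"
    n_units = n_channels // k
    new_shape = tf.concat([tf.shape(inputs)[:-1], [n_units, k]], axis=0)
    return tf.reduce_max(tf.reshape(inputs, new_shape), axis=-1)

# Example: a convolutional layer feeding a maxout 2-1 activation.
x = tf.keras.layers.Input(shape=(28, 28, 1))
h = tf.keras.layers.Conv2D(filters=32 * 2, kernel_size=3, padding="same")(x)  # 2x filters
h = tf.keras.layers.Lambda(lambda t: maxout(t, k=2))(h)                        # 32 maps out
```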
p-norm
The p-norm is the nonlinearity:
$$h = \left\| {x} \right\|_{p} = \left( {\mathop \sum \limits_{i} \left| {x_{i} } \right|^{p} } \right)^{1/p}$$
where the vector x represents a small group of inputs [45]. If all the \(x_{i}\) were known to be positive, the original maxout would be equivalent to the p-norm with \(p = \infty\) (Fig. 8).
p-norm (p = 2 i = 5)
Logistic sigmoid
The logistic sigmoid is defined as:
$$h\left( x \right) = \frac{1}{{1 + e^{ - x} }}$$
where x is the input. With a range between 0 and 1, the sigmoid function can be used to predict posterior probabilities [52] (Fig. 9).
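For reference, the scalar activations defined above can be written as the following minimal NumPy sketch (the SeLU constants are those of Klambauer et al. [51], truncated); these functions are illustrative and are not the Keras implementations used in our experiments.

```python
import numpy as np

def tanh(x):
    return np.tanh(x)

def relu(x):
    return np.maximum(0.0, x)

def leaky_relu(x, a=0.1):
    # a is the predefined slope for negative inputs, with a in (0, 1).
    return np.minimum(0.0, a * x) + np.maximum(0.0, x)

def selu(x, alpha=1.6732632423543772, lam=1.0507009873554805):
    # Clamp the exponent argument so the unused branch does not overflow.
    return lam * np.where(x > 0, x, alpha * np.exp(np.minimum(x, 0.0)) - alpha)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def p_norm(x, p=2):
    # x is a small group of inputs; returns a single nonlinearity value.
    return np.sum(np.abs(x) ** p) ** (1.0 / p)
```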
Datasets
MNIST
The Mixed National Institute of Standards and Technology (MNIST) dataset [20] consists of 8-bit grayscale handwritten digit images, 28 × 28 pixels in size, organized into 10 classes (0 to 9), with 60,000 training and 10,000 test samples.
Fashion-MNIST
The Fashion-MNIST [53] dataset consists of 60,000 examples where each sample is a 28 × 28 grayscale image, associated with a label from 10 fashion item classes: T-shirt/top, trouser, pullover, dress, coat, sandal, shirt, sneaker, bag, and ankle boot.
CIFAR-10 and 100
The Canadian Institute for Advanced Research (CIFAR)-10 dataset [21] consists of natural color images, 32 × 32 pixels in size, from 10 classes with 50,000 training and 10,000 test images. The CIFAR-100 dataset is the same size and format as the CIFAR-10; however, it contains 100 classes. Thus, the number of images in each class is only one tenth of that of CIFAR-10.
LFW
The Labeled Faces in the Wild (LFW) dataset contains more than 13,000 images of faces collected from the web by Huang et al. [54]. Each face was labeled with the name of the person pictured, with 1680 of the people pictured having two or more distinct photos in the dataset. Images are 250 × 250 pixels in size. The only constraint on these faces is that they were detected by the Viola-Jones face detector [55]. The database was designed for studying the problem of unconstrained facial recognition.
MS-Celeb-1M
The MS-Celeb-1M dataset released by Microsoft [56] contains 10 million celebrity face images for the top 100K celebrities obtained from public search engines, which can be used to train and evaluate both face identification and verification algorithms. There are approximately 100 images for each celebrity, resulting in about 10 million web images. The image resolution is up to 300 × 300 pixels. The authors present a distribution of the million celebrities in different aspects including profession, nationality, age, and gender. MS-Celeb-1M is a larger dataset compared to the other test datasets and requires hyperparameter tuning. To avoid the unfair comparison issues associated with changing hyperparameters, we decided to use a manageable subset of 1000 identities which does not require fine-tuning to train. The identities were selected from a cleaned subset of MS-Celeb-1M used for the low-shot learning challenge. These identities were the top 1000 in a list ordered by the number of images.
Amazon
The original Amazon product review dataset was collected by McAuley et al. [57]. It contains product reviews and scores from 24 product categories sold on Amazon.com, including 142.8 million reviews spanning from May 1996 to July 2014. Review scores lie on an integer scale from 1 to 5. The sentiment dataset constructed from the Amazon product review data in [58] was reused, where 2,000,000 reviews had a score greater than or equal to 4 stars and 2,000,000 reviews had a score less than or equal to 2 stars. The first group is labeled as positive sentiment while the second group is labeled as negative sentiment, creating a positive/negative sentiment dataset of four million reviews (Amazon4M). A second subset with one million Amazon product reviews, here called "Amazon1M", was constructed in [59]. The labels were automatically generated from the star rating of each review by assigning a rating below 2.5 as negative and a rating above 2.5 as positive.
Sentiment140
Sentiment140 [60] contains 1.6 million positive and negative tweets, collected and annotated by querying positive and negative emoticons: a tweet is considered positive if it contains a positive emoticon like ":)" and negative if it contains a negative emoticon like ":(".
Yelp
We use the sentiment datasets collected in [59]. The first contains 429,061 Yelp reviews from 12 cities in the United States (Yelp500K). This is an imbalanced dataset with 371,292 positive and 57,769 negative instances. Another 500K reviews were scraped to create a second dataset with a million reviews (Yelp1M).
Medicare Part B
The Part B dataset [62], released by CMS [61], describes Medicare provider claims information for the entire US and its commonwealths, where each instance in the data shows the claims for a provider and procedure performed for a given year. Physicians are identified using their unique National Provider Number (NPI) [63], while procedures are labeled by their Healthcare Common Procedure Coding System (HCPCS) code [64]. Other claims information includes average payments and charges, the number of procedures performed, and medical specialty (also known as provider type).
Medicare Part D
The Part D PUF [65] provides information on prescription drugs prescribed by individual physicians and other health care providers and paid for under the Medicare Part D Prescription Drug Program. Each physician is denoted by his or her NPI and each drug is labeled by its brand and generic name. Other information includes average payments and charges, variables describing the drug quantity prescribed, and medical specialty.
DMEPOS
The Durable Medical Equipment, Prosthetics, Orthotics and Supplies (DMEPOS) PUF [66] presents information on DMEPOS products and services provided to Medicare beneficiaries ordered by physicians and other healthcare professionals. Physicians are identified using their unique NPI within the data while products are labeled by their HCPCS code. Other claims information includes average payments and charges, the number of services/products rented or sold and medical specialty (also known as provider type).
Combined CMS dataset
A combined dataset was created in [50] after processing the Part B, Part D, and DMEPOS datasets, containing all the attributes from each, along with fraud labels derived from the List of Excluded Individuals and Entities (LEIE). The combining process involves a join operation on NPI, provider type, and year. Because no gender variable is present in the Part D data, the authors did not include this variable in the join condition; they used the gender labels from Part B and removed the gender labels gathered from the DMEPOS dataset after joining. The combined dataset is limited to those physicians who have participated in all three parts of Medicare.
For each dataset (Part B, Part D, and DMEPOS), the information was combined for all available calendar years. For Part B and DMEPOS, all attributes not present in each available year were removed. The Part D dataset had the same attributes in all available years. For Part B, the standard deviation variables were removed from 2012 to 2013 and the standardized payment variables were removed from 2014 to 2015, as they were not available in the other years. For DMEPOS, the standard deviation variable was removed from 2014 to 2015, as it was not available in 2013. For all three datasets, all instances that either were missing both NPI and HCPCS/drug name values or had an invalid NPI were removed. For Part B, all instances with HCPCS codes referring to prescriptions were filtered out. The prescription-related codes are not actual medical procedures, but instead are for specific services listed on the Medicare Part B Drug Average Sales Price file. For the Part B dataset, eight features were kept while the other 22 were removed. For the Part D dataset, seven features were kept and the other 14 were removed. For the DMEPOS dataset, nine features were kept and the other 19 were removed. The excluded attributes provide no specific information on the claims, drugs administered, or referrals, but rather encompass provider-related information, such as location and name, as well as redundant variables like text descriptions which can be represented by the variables containing the procedure or drug codes. For Part D, variables that provided count and payment information for patients 65 or older were not included, as this information is encompassed in the retained variables. The combined dataset contains all the retained features from all three datasets. The purpose of this new dataset is to provide a more encompassing view of a physician's behavior across various branches of Medicare, rather than over individual Medicare parts.
Google speech commands dataset
The Google speech commands (GSC) dataset v0.02 [67] consists of 105,829 one-second long audio files of 35 keywords, spoken by 2618 speakers, with each file consisting of only one keyword encoded as linear 16-bit single-channel PCM values at a 16 kHz rate. A spectrogram using a fast Fourier transform (FFT) is computed for each wave file in the dataset. Frequencies are summed into 129 bins, and each 1-second sample is divided into 71 time bins, so the image for each instance is 129 × 71 pixels. The number in each "pixel" represents the power spectral density in dB, and each image is scaled between 0 and 1, relative to the maximum and minimum dB in that image. Samples are not scaled to the maximum and minimum of the whole dataset because the recordings were crowdsourced, so the volume of different recordings is not consistent.
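A minimal sketch of this preprocessing is shown below, assuming SciPy is available; the FFT window length of 256 samples is our assumption (it yields 129 frequency bins and, with the default overlap, 71 time bins for a one-second 16 kHz clip) and may differ from the exact parameters used in our pipeline.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

def wav_to_image(path, sample_rate=16000):
    """Convert a one-second 16 kHz wave file into a scaled spectrogram image."""
    rate, samples = wavfile.read(path)
    # Pad or truncate to exactly one second so every image has the same shape.
    samples = np.pad(samples, (0, max(0, sample_rate - len(samples))))[:sample_rate]
    # nperseg=256 gives 129 frequency bins; the default overlap gives 71 time bins.
    freqs, times, psd = spectrogram(samples, fs=rate, nperseg=256)
    psd_db = 10 * np.log10(psd + 1e-10)  # power spectral density in dB
    # Scale each image to [0, 1] relative to its own min/max, since the
    # crowdsourced recordings do not have consistent volumes.
    return (psd_db - psd_db.min()) / (psd_db.max() - psd_db.min())
```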
IRMAS
The IRMAS dataset [68] is intended for training and testing methods for the automatic recognition of predominant instruments in musical audio. The instruments considered are cello, clarinet, flute, acoustic guitar, electric guitar, organ, piano, saxophone, trumpet, violin, and human singing voice. It includes music from various decades of the past century, hence the differences in audio quality. The training data consists of 6705 audio files with excerpts of 3 s from more than 2000 distinct recordings. The testing data consists of 2874 excerpts with lengths between 5 and 20 s; no tracks from the training data were included. Unlike the training data, the testing data contains one or more predominant target instruments. All audio files are in 16-bit stereo WAV format sampled at 44.1 kHz. We truncate the recordings in the dataset to 1 s in length and process them as spectrograms, mirroring the preprocessing of the Google speech commands dataset.
IDMT-SMT-audio-effects
The IDMT-SMT-audio-effects [69] is a dataset for electric guitar and bass audio effects detection. The dataset consists of 55,044 WAV files with a single recorded note of which 20,592 are monophonic bass notes, 20,592 are monophonic guitar notes, and 13,860 are polyphonic guitar sounds. There are 11 different audio effects: feedback delay, slap back delay, reverb, chorus, flanging, phaser, tremolo, vibrato, distortion, overdrive, and no effect (unprocessed notes/sounds). For detailed descriptions of these audio effects please refer to [70].
Empirical methodology
We adopt the general convolutional network architecture demonstrated in recent years to advance the state-of-the-art in supervised classification [21]. We evaluate classification performance on a variety of CNN architectures. In these architectures, a series of convolutional layers for feature extraction are followed by fully-connected layers for classification. Max-pooling is used between convolutional layers to reduce the dimensionality of the network input, and dropout is used before fully-connected layers to prevent overfitting.
A suitable network architecture is selected for each dataset according to the input size and number of instances in the data, as specified in Table 1. Architecture (A) was applied to the Medicare Part B, Part D, and combined datasets, architecture (B) to the MNIST, Fashion-MNIST, CIFAR-10, and CIFAR-100 datasets, and architecture (C) to the LFW dataset. Architecture (D) was applied to the Sentiment140, Yelp, and Amazon datasets, architecture (E) to the Google Speech Commands, IRMAS, and IDMT-SMT-Audio-Effects datasets, and architecture (F) to the MS-Celeb dataset. The depth of the configurations increases from left (A) to right (F), as more layers are added. In general, fewer convolutional layers are used for datasets with a smaller number of samples, while deeper architectures are used for larger datasets. Unless otherwise specified, max-pooling is performed with a filter size and stride of 2, and convolutional layer inputs are padded to produce same-size outputs.
Table 1 CNN and NN Configurations
Within each dataset, experiments are carried out using the selected CNN architecture, modified only to fit the memory specifications of the activation functions. For example, a CNN for the MNIST dataset with 10 output filters in the first convolutional layer would be modified to output 20 filters as input to a maxout 2-1 activation function. The only layer in each network which is not modified according to the activation function is the final classification layer, where a softmax activation is applied. For networks trained on the MS-Celeb dataset, we use the 9-layer light CNN framework presented in [16], which contains five convolution layers, four NIN layers [40], activation layers and four max-pooling layers. We implement each NIN layer as a convolutional layer with a filter size of one.
The models were trained with a learning rate of 0.01 on all but the MS-Celeb dataset. For this dataset, the learning rate was 0.0001, which is the rate at which the CNN architecture for MS-Celeb could converge. Rather than tune each network in our comparison optimally with a validation set, we implement a set of uniform stopping criteria during training to maintain a consistent protocol so that network performance on a test set is suitable for comparison across activations [21]. The early stopping criterion is the same for every dataset: the slope of the test loss is calculated over a running window of the past three epochs on the MNIST, FMNIST, CIFAR, and LFW datasets, and of the past four epochs on the rest of the datasets. When the slope becomes positive, the testing loss no longer decreases and network training is stopped. The optimizer is stochastic gradient descent and the loss function is the categorical cross-entropy. Table 2 displays the momentum and batch size per dataset.
Table 2 Momentum and batch size per dataset
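A minimal sketch of this slope-based stopping rule as a Keras callback is given below; the least-squares slope estimate and the callback structure are our own illustrative choices, with the test set supplied to fit() as validation data.

```python
import numpy as np
import tensorflow as tf

class SlopeEarlyStopping(tf.keras.callbacks.Callback):
    """Stop training when the slope of the test loss over a running
    window of recent epochs becomes positive."""

    def __init__(self, window=3):
        super().__init__()
        self.window = window
        self.losses = []

    def on_epoch_end(self, epoch, logs=None):
        self.losses.append(logs["val_loss"])
        if len(self.losses) >= self.window:
            recent = self.losses[-self.window:]
            # Least-squares slope of the loss over the last `window` epochs.
            slope = np.polyfit(np.arange(self.window), recent, deg=1)[0]
            if slope > 0:
                self.model.stop_training = True

# Usage: model.fit(x_train, y_train, validation_data=(x_test, y_test),
#                  epochs=200, callbacks=[SlopeEarlyStopping(window=3)])
```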
The number of trainable parameters using the ReLU and maxout activation functions is presented in Table 3. Doubling the ReLU units not only doubles the number of trainable parameters, but each layer also outputs twice as many feature maps. In contrast, a maxout unit with k = 2 (maxout 2-1) has twice as many parameters, but each unit still outputs one feature map. Similarly, a maxout unit with k = 3 (maxout 3-1) has 3x the number of parameters, but still only outputs one feature map.
Table 3 Number of trainable parameters per dataset
The classification tasks and datasets used to evaluate the activation functions are:
Image classification using the MNIST, F-MNIST, CIFAR-10, and CIFAR-100 datasets. These are among the most widely used datasets in machine learning; MNIST and CIFAR-10 were the two most common datasets at NIPS 2017 [71].
Facial verification using the LFW dataset. The LFW is a popular benchmark dataset that contains diverse illumination conditions combined with variations in pose and expression. Companies, independent teams, and data scientists use this dataset to verify the quality of their algorithms.
Facial recognition using the MS-Celeb dataset. The dataset was tested in [16] using a 9-layer light CNN framework with maxout 2-1. The MS-Celeb-1M dataset contains massive noisy labels, a challenge a facial recognition system has to attenuate and, if possible, eliminate.
Sentiment analysis using two Amazon product data subsets (1 and 4 million reviews), Sentiment140 and two subsets from the Yelp text datasets (500,000 and 1,000,000 reviews). We used the datasets constructed in [59]. This provides us with a dataset consisting of short text (sentiment140) and four datasets with longer instances (Amazon and Yelp reviews).
Fraud detection using the Medicare Part B, Part D, DMEPOS, and combined Medicare datasets. The datasets were preprocessed, and the combined CMS dataset was created, in [50]. The datasets focus on the detection of Medicare fraud using the Medicare Provider Utilization and Payment Data: Physician and Other Supplier (Part B), Medicare Provider Utilization and Payment Data: Part D Prescriber (Part D), and Medicare Provider Utilization and Payment Data: Referring Durable Medical Equipment, Prosthetics, Orthotics and Supplies (DMEPOS).
Speech command classification using the Google speech commands dataset. The dataset contains single-word spoken commands that can be converted to 129 × 71 pixel images. Other speech command datasets have recordings with a longer duration, causing memory constraints when testing maxout activation functions.
Audio classification using the IRMAS and IDMT-SMT-Audio-Effects datasets. Similar to the Google speech commands dataset, the IRMAS and IDMT-SMT-Audio-Effects datasets were truncated to 1 s of audio and converted to 129 × 71 and 100 × 100 pixel images, respectively.
Face verification, or authentication, is a one-to-one (1:1) match that compares a test face image against a gallery face image whose identity is being claimed. The current standard for benchmarking performance on unconstrained face verification is the Labeled Faces in the Wild (LFW) dataset [54]. We compare activations on View 2 of the dataset, which consists of image pairs that are labeled either matching or not matching. We process each face image to be grayscale and cropped to 128 × 128 pixels. After preprocessing, we train using 90% of the View 2 pairs and evaluate each network on the final 10% of pairs. Face identification is a one-to-many (1:N) matching process that compares a test face image against all the gallery images in a face database to determine the identity of the test face. Face identification was tested on the MS-Celeb dataset. To facilitate efficient training, we filter the MS-Celeb subset to include slightly over 100,000 face images corresponding to 1000 celebrity classes. We use an additional test set of approximately 10,000 images to validate each network. Each image is cropped to 128 × 128 pixels.
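The face preprocessing can be sketched as follows, assuming the Pillow library; the center crop is our own assumption, since the text only specifies grayscale conversion and a 128 × 128 crop.

```python
import numpy as np
from PIL import Image

def preprocess_face(path, size=128):
    """Convert a face image to grayscale and crop it to size x size pixels
    (a center crop is assumed here for illustration)."""
    img = Image.open(path).convert("L")   # grayscale
    w, h = img.size                       # LFW images are 250 x 250 pixels
    left, top = (w - size) // 2, (h - size) // 2
    img = img.crop((left, top, left + size, top + size))
    return np.asarray(img, dtype=np.float32) / 255.0
```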
For our sentiment analysis, the text was embedded as proposed by Prusa et al. [72]. The authors propose a log(m) character embedding where each character in the alphabet is given an integer value, where m is the alphabet size. The equivalent binary representation of a character's integer value is then found and turned into a vector of 0s and 1s. This results in a denser representation compared to 1-of-m character embedding [73]. This embedding was tested against the 1-of-m embedding using an alphabet size of 70 and 256. Results show significantly higher performance and a faster training time when using the log(m) representation of the data.
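A minimal sketch of the log(m) character embedding is shown below; the alphabet, index assignment, and handling of unknown characters are illustrative assumptions rather than the exact scheme of Prusa et al. [72].

```python
import numpy as np

def log_m_embed(text, alphabet):
    """Encode each character as the binary representation of its integer
    index, giving ceil(log2(m)) bits per character instead of an m-length
    one-hot vector."""
    m = len(alphabet)
    bits = int(np.ceil(np.log2(m)))
    index = {c: i for i, c in enumerate(alphabet)}
    encoded = np.zeros((len(text), bits), dtype=np.float32)
    for row, char in enumerate(text):
        value = index.get(char, 0)        # unknown characters map to 0 here
        for b in range(bits):
            encoded[row, b] = (value >> b) & 1
    return encoded

# Example: a 70-character alphabet needs only 7 bits per character.
```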
As the Medicare datasets are highly imbalanced, we employ random under-sampling (RUS) to mitigate the adverse effects of class imbalance. RUS is the process of randomly removing instances from the majority class of a dataset in order to balance the ratio (non-fraudulent/fraudulent). We generate a class distribution (majority:minority) of 50:50. There are 2036 samples in Medicare Part D, 2818 in Medicare Part B, 1275 in DMEPOS, and 946 in the CMS combined dataset.
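The under-sampling step can be sketched as below; the function name and random seed handling are illustrative assumptions.

```python
import numpy as np

def random_under_sample(X, y, seed=0):
    """Randomly discard majority-class instances to obtain a
    50:50 (majority:minority) class distribution."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    minority = classes[np.argmin(counts)]
    majority = classes[np.argmax(counts)]
    minority_idx = np.where(y == minority)[0]
    majority_idx = np.where(y == majority)[0]
    keep = rng.choice(majority_idx, size=len(minority_idx), replace=False)
    idx = np.concatenate([minority_idx, keep])
    rng.shuffle(idx)
    return X[idx], y[idx]
```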
ReLU is also evaluated with 2x, 3x and 6x the number of filters in each convolutional layer. The purpose of including these variants is to consider the impact of increased neurons on the accuracy, training time and memory usage of NNs independent of the maxout activation. Because maxout incorporates both the max operation and the use of duplicate neurons with additional memory, it is necessary to consider how each component of the activation contributes to its performance.
Maxout is evaluated with the following combinations of input and output neuron quantities: 2-1, 3-1, 3-2 and 6-1. We compute maxout for our four activations using the equations below, which are suitable for parallelization with modern deep learning software and parallel computer hardware. In general, we use maximum (max) and minimum (min) operations with two inputs to achieve maximum computational efficiency during training.
$$maxout\;2\text{-}1 \left( {x_{1} ,x_{2} } \right) = { \text{max} }\left( {x_{1} ,x_{2} } \right)$$
$$maxout\;3\text{-}1 \left( {x_{1} ,x_{2} ,x_{3} } \right) = { \text{max} }\left( {x_{1} ,{ \text{max} }\left( {x_{2} ,x_{3} } \right)} \right)$$
$$maxout\;6\text{-}1 \left( {x_{1} ,x_{2} ,x_{3} ,x_{4} ,x_{5} ,x_{6} } \right) = { \text{max} }\left( {x_{1} ,{ \text{max} }\left( {x_{2} ,{ \text{max} }\left( {x_{3} ,{ \text{max} }\left( {x_{4} ,{ \text{max} }\left( {x_{5} ,x_{6} } \right)} \right)} \right)} \right)} \right)$$
$$\begin{aligned} maxout\;3\text{-}2 \left( {x_{1} ,x_{2} ,x_{3} } \right) & = { \text{max} }\left( {x_{1} ,{ \text{max} }\left( {x_{2} ,x_{3} } \right)} \right), \\ & \quad { \text{min} }\left( {{ \text{max} }\left( {x_{1} ,x_{2} } \right),\text{min} \left( {{ \text{max} }\left( {x_{2} ,x_{3} } \right),{ \text{max} }\left( {x_{1} ,x_{3} } \right)} \right)} \right) \end{aligned}$$
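These equations translate directly into two-input max and min calls. The TensorFlow sketch below is illustrative, with the tensor arguments x1 through x6 standing for the feature maps produced by the preceding (widened) layer.

```python
import tensorflow as tf

def maxout_2_1(x1, x2):
    return tf.maximum(x1, x2)

def maxout_3_1(x1, x2, x3):
    return tf.maximum(x1, tf.maximum(x2, x3))

def maxout_6_1(x1, x2, x3, x4, x5, x6):
    return tf.maximum(x1, tf.maximum(x2, tf.maximum(
        x3, tf.maximum(x4, tf.maximum(x5, x6)))))

def maxout_3_2(x1, x2, x3):
    # Largest of the three inputs.
    largest = tf.maximum(x1, tf.maximum(x2, x3))
    # Second largest: the minimum of the three pairwise maxima.
    second = tf.minimum(tf.maximum(x1, x2),
                        tf.minimum(tf.maximum(x2, x3), tf.maximum(x1, x3)))
    return largest, second
```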
While it would be ideal to record the wall clock time needed to train each network, modern high-performance computing environments present hardware and software challenges which make it difficult to safely compare training time across runs or activations. Thus, we produce a metric which represents the time cost of training with a particular activation function. This metric is produced for each activation on an isolated desktop computing environment. We record the wall clock time required to train each network in our comparison for 100 batches and take the average time across 10 runs on a single desktop computer with 32 GB of RAM running Ubuntu 16.04 with an Intel i7 7th-generation CPU and an NVIDIA 1080ti GPU. These times are produced independently for each activation on each dataset. We note that the 100-batch intervals in our timing measurements do not constitute complete training epochs. Computations are timed over 100 batches to average out any variability in individual batch times and facilitate accurate comparison between activations.
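A minimal sketch of this timing metric is shown below; the function signature and use of a batched tf.data.Dataset are our own illustrative assumptions.

```python
import time
import numpy as np

def average_100_batch_time(model, dataset, runs=10, batches=100):
    """Average wall-clock time to train `batches` batches, over `runs` runs.
    `dataset` is assumed to be a batched tf.data.Dataset yielding (x, y)."""
    times = []
    for _ in range(runs):
        batch_iter = iter(dataset.repeat())
        start = time.perf_counter()
        for _ in range(batches):
            x, y = next(batch_iter)
            model.train_on_batch(x, y)
        times.append(time.perf_counter() - start)
    return float(np.mean(times))
```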
A total of 11 activation functions were evaluated, where each experiment compared:
Classification accuracy
Average 100 batches time
Average 100 batches total training time (average 100 batches time multiplied by number of epochs to converge).
Due to memory constraints, ReLU3x and ReLU6x were not included in the comparisons on the text, face classification, fraud detection, and audio experiments. These ReLU variants produce layers with 3x and 6x more feature maps than the maxout units, causing memory limitations particularly with large datasets. Maxout 6-1 was also excluded from the MS-Celeb task as it was difficult to train without tuning. Training did not converge most of the time with the SeLU activation function on the Amazon and Yelp datasets; out of many runs it was only possible to obtain one successful training on Amazon1M and Yelp1M. Similarly, maxout 6-1 and maxout 3-2 failed to converge on the large Amazon4M text dataset. In future work, hyperparameter tuning for ReLU3x, ReLU6x, maxout 6-1, and maxout 3-2 will be done to find the best performance for these models. Table 4 displays the number of experiments per activation function and dataset.
Table 4 Number of experiments per activation and dataset
In each dataset, we use a train/test split of approximately 90%/10%. The train and test size per dataset are presented in Table 5. Because we apply a consistent early stopping criterion, we report results of our comparison done directly on a test set, without an additional validation set. We implemented our models in Keras [74] with TensorFlow [75] as the backend.
Table 5 Train and test size per dataset
Experimental results and discussion
Based on preliminary observations, it is evident that the sigmoid activation does not perform well in CNNs, and that the maxout p-norm models are very difficult to train. For these reasons, neither activation function is included in the results.
To test the statistical significance of the performance of the type of activation (maxout vs. other activations) across all the datasets, a one-way analysis of variance (ANOVA) [76] was performed. In this ANOVA test, the results from 544 evaluations were considered together, and all tests of statistical significance utilized a significance level α of 5%. The factor is significant if the p value is less than 0.05. The ANOVA table is presented in Table 6. As shown, the factor is not significant (the p value is not less than 0.05), indicating that the activation type does not make a difference in classification accuracy.
Table 6 One-way ANOVA for type of activation and classification task
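For reference, such a one-way ANOVA can be run with SciPy as sketched below; the accuracy values are illustrative placeholders, not our experimental results.

```python
from scipy import stats

# accuracy_by_type maps each activation type to the accuracies observed
# across all datasets and runs (values here are illustrative placeholders).
accuracy_by_type = {
    "maxout": [0.912, 0.897, 0.934],
    "other":  [0.905, 0.921, 0.940],
}
f_stat, p_value = stats.f_oneway(*accuracy_by_type.values())
significant = p_value < 0.05  # the factor is significant only if p < 0.05
```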
A comparison between maxout and the rest of the activations is presented in Fig. 10, displaying graphs with each group mean represented by the symbol (◦) and the 95% confidence interval as a line through the symbol. Maxout activations are not statistically better than the rest of the activation functions.
Multiple comparisons, type of activations
The best activation average accuracy per dataset, its average time to train 100 batches and average epochs multiplied by average 100 batches time (average 100 batches training time) are presented in Table 7. The activation average accuracy and epochs columns in Table 7 represent the best performing activation function for each dataset. Data in these columns is averaged over the number of training runs specified in Table 4. On the image and face datasets, ReLU with 6x filters reported the highest accuracy on all datasets except the MS-Celeb dataset.
Table 7 Best activation and its results per dataset
There is no statistical difference between the activation functions, but on average ReLU6x reported the highest accuracy of 85.19% on the image datasets (Fig. 11). The average accuracy presented in the figures is also the average accuracy over the number of training runs specified in Table 4. Activation function means are significantly different if their intervals are disjoint, and are not significantly different if their intervals overlap. On the facial verification task, ReLU6x also achieved the highest accuracy, while on the facial recognition task, SeLU recorded the highest accuracy (Table 7).
Multiple comparisons, activations on image datasets
ReLU with 2x filters reported the highest accuracy on Sentiment140 and Yelp500K datasets (Table 7). On Yelp1M, ReLU achieved the highest accuracy and maxout 3-2 and 6-1 reported the best accuracies on the Amazon1M and Amazon4M datasets respectively. On average, ReLU2x reported the highest accuracy of 90.41% (Fig. 12).
Multiple comparisons, activations on text datasets
SeLU reported the highest accuracy on the Medicare Part B, DMEPOS, and combined CMS datasets (Table 7). On Medicare Part D, maxout 2-1 and maxout 6-1 achieved the highest accuracy. On average, SeLU reported the highest accuracy of 69.7% (Fig. 13). This suggests that SeLU is effective for the medical fraud detection task using a NN. On the Medicare Part D dataset, maxout 2-1 and maxout 6-1 obtained the highest accuracy, followed by SeLU, ReLU2x, and maxout 3-2 with a 0.5% difference in value. This confirms that SeLU is also effective on this dataset.
Multiple comparisons, activations on medical datasets
Maxout 3-2 reported the highest accuracy on Google speech commands, and on IRMAS ReLU2x achieved the highest accuracy. SeLU recorded the highest accuracy on the IDMT-SMT-Audio-Effects dataset (Table 7). On average, maxout 2-1 reported the highest accuracy of 83.19% (Fig. 14). This indicates maxout 2-1 is effective for the sound recognition task using spectrograms in combination with CNNs.
Multiple comparisons, activations on sound datasets
ReLU with 6x filters delivered the highest accuracy on all image datasets and the LFW dataset. Although ReLU6x and ReLU3x were not tested on the text datasets, other ReLU variants continued to record the highest accuracies: ReLU with 2x filters and ReLU obtained the highest accuracies on the text datasets except for the Amazon datasets (Table 7). A similar result occurred on the sound datasets. Although, on average, maxout 2-1 recorded the highest accuracy, ReLU2x was only 0.36% lower. Increasing the number of filters was enough for ReLU to achieve the highest classification accuracy on the image and text datasets. This suggests that adding more filters to ReLU2x could increase the accuracy on the MS-Celeb, Amazon, IDMT-SMT-Audio-Effects, and Google speech commands datasets, but hyperparameter tuning might be required.
In this study, we also performed multiple comparison tests using Tukey's Honestly Significant Difference (HSD) test to further investigate these results. The HSD is a statistical test comparing the mean value of the performance measure for the different activation functions. All tests of statistical significance utilize an \(\alpha = 0.05\). Two activation functions with the same block letter are not significantly different with 95% statistical confidence (e.g., group a is significantly different from group b). In Table 8, the letters in the third column indicate the HSD grouping of the activation accuracy. That is, if two activations have the same letter in the HSD column, their accuracies are not significantly different.
Table 8 Activation HSD test on image datasets
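As an illustration, the pairwise Tukey HSD test can be computed with statsmodels as sketched below; the data frame layout and accuracy values are illustrative placeholders, not our experimental results.

```python
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# One row per training run: the activation name and the test accuracy it
# reached (values here are illustrative placeholders).
results = pd.DataFrame({
    "activation": ["relu6x", "relu6x", "maxout21", "maxout21", "tanh", "tanh"],
    "accuracy":   [0.861,    0.858,    0.842,      0.839,      0.811,  0.815],
})
hsd = pairwise_tukeyhsd(endog=results["accuracy"],
                        groups=results["activation"], alpha=0.05)
print(hsd.summary())  # pairs whose interval excludes 0 differ significantly
```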
On the image datasets, the HSD test shows ReLU6x is significantly better than the rest of the activations, and ReLU3x is better than the maxout activations. Based on these results, we see that the activations using the most memory are at the top, and that the maxout methods, together with the widened ReLU variants, are better than ReLU, LReLU, SeLU, and tanh. ReLU6x reported the highest average 100 batches training time (last column in Table 8) of 13.56 s, 1.56x higher than ReLU3x. This is expected, as there are considerably more feature maps to process. Maxout activation functions have a lower training time than ReLU3x and ReLU6x, but a higher training time than the traditional activation functions.
On the LFW dataset, the highest accuracy was reported by ReLU6x with 79.67%. It also had the highest average 100 batches time with 26.75 s. The lowest time of 2.07 s was reported by ReLU. The SeLU recorded the highest accuracy on the MS-Celeb dataset with 97.5%, but on average, for both face datasets maxout 3-2 achieved the highest average accuracy of 87.25% (Fig. 15).
Multiple comparisons, activations on face datasets
On the MS-Celeb dataset, maxout 3-2 had the highest average 100 batches time of 156.94 s, and ReLU had the lowest of 7.92 s (the 100 batches time and 100 batches training time for each dataset are not displayed in a table). ReLU3x, ReLU6x, and maxout 6-1 were not evaluated on the MS-Celeb dataset. The HSD test on the face datasets (Table 9) shows maxout 3-2 with the highest average accuracy, which is statistically better than LReLU, SeLU, ReLU6x, ReLU3x, and maxout 6-1. Although the LFW and MS-Celeb datasets both contain face images, the verification and identification tasks are different. Consequently, activation functions reported opposite results on the two datasets. The extreme case was SeLU, which logged the highest accuracy on the MS-Celeb dataset but the lowest on the LFW dataset.
Table 9 Activation HSD test on face datasets
In the face group, a higher training time did not translate into a higher accuracy. A total of five activation functions had a higher average training time than ReLU2x, and all maxout functions except maxout 6-1 were among those with the highest average 100 batches training times.
The combined results of the image and face datasets are presented in Table 10. Although ReLU6x was not tested on the MS-Celeb dataset, it is again statistically better than the rest of the activations, and its average training time is lower than that of any maxout activation except maxout 6-1. Maxout activations are statistically better than the traditional activation functions. On average, SeLU reported a low average accuracy, but it obtained the highest accuracy on the face identification task (Table 7). This suggests SeLU is effective for this task.
Table 10 Activation HSD test on image and face datasets
The HSD test on the text datasets (Table 11) shows six activations are statistically indistinguishable from one another (they all have the block letter 'a' in the HSD column). ReLU2x scored the highest or second highest accuracy on all text datasets except Yelp1M, while maxout 3-2 recorded accuracies within the top three except on the Sentiment140 and Yelp1M datasets. All maxout and ReLU activations are not significantly different from each other, but they are statistically different from tanh and SeLU. Maxout activations took longer to train than ReLU and ReLU2x by our average epochs multiplied by average 100 batches time metric, which considers both the computational cost and the time to converge of a model. The lowest average 100 batches time was that of ReLU with 7.41 s, which surprisingly delivered the third highest average classification accuracy.
Table 11 Activation HSD test on text datasets
On the medical datasets, all the maxout and ReLU variants had a similar performance, and SeLU performed better than maxout 6-1, ReLU and LReLU (Table 12). The NN architecture for the medical datasets only had two layers, and more layers did not improve or change the activation's performance. Consequently, the performance is very similar in a shallow architecture, and this is reflected in the HSD test.
Table 12 Activation HSD test on medical datasets
A higher training time did not translate into a higher accuracy. Maxout and ReLU activations had a higher training time than SeLU. The lowest average 100 batches training time was that of SeLU with 13.01 s. Although ReLU reported a low average 100 batches time, it took many epochs to converge, unlike SeLU, which converged faster than any other activation function on the NN architecture for the medical datasets and therefore reported the fastest training time. On average, SeLU tends to converge 1.7× faster than tanh, and 2.3× faster than maxout 3-2, with maxout 3-2 reporting the slowest training time across all activations.
On the sound datasets, the HSD test (Table 13) shows that LReLU and all the maxout and ReLU activations are not significantly different from each other. Maxout 2-1 performed better than tanh and SeLU. Maxout 6-1 reported the highest average 100 batches training time with 3219.95 s. It also recorded the fifth highest average accuracy, while maxout 2-1 was 3.27× faster than maxout 6-1 and recorded the highest average accuracy. Maxout 3-2 converged faster than maxout 6-1. Although maxout 3-2 reported the highest average 100 batches time, its average 100 batches training time was lower than that of maxout 6-1, and maxout 3-2 recorded the second highest average accuracy. SeLU reported the second fastest average 100 batches training time, but also the second lowest average accuracy. On average, maxout 2-1 tends to converge 1.32× faster than ReLU2x.
Table 13 Activation HSD test on sound datasets
The HSD test on all datasets (Table 14) shows ReLU6x is significantly better than the rest of the activations, and ReLU3x is better than the maxout and traditional activations. Although ReLU6x and ReLU3x were only evaluated on the image and LFW datasets, the HSD test suggests these two ReLU variants could provide higher accuracies than the maxout or ReLU2x activation functions. Maxout activations and ReLU2x reported a similar performance, but maxout 3-1 was the only variant that did not reach the top performance in any of the experiments conducted in this study. On average, SeLU reported the lowest average accuracy, but recorded the highest accuracy on the MS-Celeb, IDMT-SMT-Audio-Effects, and three of the Medicare datasets. This indicates an activation's performance may vary across data domains.
Table 14 Activation HSD test on all datasets
In terms of training time, maxout 3-2 reported the highest. Maxout 2-1 reported the lowest of all maxout variants, but its time was still higher than any of the ReLU variants. The average training time of ReLU6x and ReLU3x is very low as they were only tested on the image and LFW datasets. ReLU2x on average performed better than the rest of the activations on the text and audio datasets. Its average training time is above those of the traditional activation functions (Fig. 16).
Multiple comparisons, activations on all datasets
We can categorize the activation functions into three groups: the maxout activations as the first group; ReLU, tanh, LReLU, and SeLU (low memory usage activation functions) as the second group; and ReLU2x, ReLU3x, and ReLU6x (higher memory usage activation functions) as the third group. The HSD test on all datasets (Table 15) shows ReLU2x, 3x, and 6x are statistically better than the rest of the activation functions. Evaluating the results across all the studied datasets, we observe that the higher the memory usage, the higher the accuracy. When evaluating the training time, we observe that the traditional and higher memory usage activation functions are statistically faster than the maxout activation functions. On average, the maxout activation functions reported the slowest average training time; this is expected, as their number of operations is greater than that of the rest of the activation functions.
Table 15 HSD test on maxout, low memory usage activation functions and ReLU2x, 3x and 6x
The current literature comparing maxout and other activation functions fails to consider whether an increase in the number of convolutional filters in ReLU networks surpasses the performance of any maxout variant. Experiments that would use more than the maximum allowed memory per GPU were outside the scope of this work. Although ReLU6x was not tested on all datasets, our experiments provided evidence that it is significantly better in terms of accuracy than any maxout variant. Furthermore, ReLU6x's average training time is lower than that of any maxout variant, even though ReLU6x contains the highest number of trainable parameters on all the tested datasets (Table 3).
The results from the image datasets indicate that sextupling the number of convolutional filters with ReLU performed better than the rest of the activation functions, but made training more difficult due to the large number of parameters. On the sentiment classification task, ReLU2x and maxout 3-2 are likely to produce the highest classification accuracy results compared to the rest of the activations analyzed in this study. On the audio datasets, our experiments suggest that, given the sound recognition task on a CNN architecture and the conversion of sounds into spectrograms, maxout 2-1 is likely to produce the best classification accuracy results. It is important to note that the number of filters, which is what distinguishes ReLU2x from ReLU, is a tunable hyperparameter. The maxout activations performed better than the ReLU activation functions only on the Amazon, Medicare Part D, and Google speech commands datasets. On the medical fraud detection task with a NN architecture, our findings indicate that SeLU is likely to produce the highest classification accuracy results compared to the rest of the activation functions analyzed in this study. SeLU also demonstrated its efficacy on the facial identification task, as it recorded the highest average accuracy there.
Across all datasets, the maxout variants provided a better average accuracy than ReLU, LReLU, SeLU, and tanh, but on average their training time is slower. Results indicate that ReLU with more filters was the top performer, with the tradeoff of high memory usage. On average, ReLU2x converges 2.62× faster than maxout 3-2, but it is 1.85× slower than ReLU. There is no clear relationship between an activation function's training time and its classification accuracy, but adding more convolutional filters clearly enhanced ReLU. Due to its high performance and fast training relative to the other top performing activations, ReLU6x is the recommended activation function for the image datasets, and ReLU2x for the text datasets. On the sound datasets, maxout 2-1 is the recommended activation function. Our results suggest that the higher the memory usage, the higher the accuracy. On average, ReLU6x uses 17.78 times more trainable CNN parameters than maxout 2-1, indicating a higher memory usage for ReLU6x (Table 3).
Future work will involve conducting additional empirical studies with ReLU3x and ReLU6x on big data and hyperparameter tuning recommendations that were outside the scope of this work. Also, future work could include additional deep network architectures and domains.
The datasets used during the current study are available from the corresponding author on reasonable request.
ANOVA:
analysis of variance
CIFAR:
Canadian Institute for Advanced Research
CNN:
convolutional neural network
DMEPOS:
durable medical equipment, prosthetics, orthotics and supplies
DNNs:
deep neural networks
GSC:
google speech commands
HCPCS:
healthcare common procedure coding system
HSD:
honestly significant difference
LEIE:
list of excluded individuals and entities
LFW:
labeled faces in the wild
LR:
logistic regression
LRELU:
leaky rectified linear unit
MFM:
max-feature-map
MIN:
maxout network in network
MNIST:
mixed national institute of standards and technology
NIN:
network in network
NPI:
national provider number
PRIM:
patient rule induction method
PUF:
public use file
RELU:
rectified linear unit
RESECH:
rectified hyperbolic secant
ROC:
receiver operating characteristic
RRELU:
randomized leaky rectified linear unit
RUS:
random under-sampling
SELU:
scaled exponential linear unit
TANH:
hyperbolic tangent
Delalleau O, Bengio Y. Shallow vs. deep sum-product networks. In: Advances in neural information processing systems. 2011. p. 666–74.
Sze V, Chen Y, Yang T, Emer J. Efficient processing of deep neural networks: a tutorial and survey. Proc IEEE. 2017;105(12):2295–329. https://doi.org/10.1109/JPROC.2017.2761740.
Nwankpa C, Ijomah W, Gachagan A, Marshall S. Activation functions: comparison of trends in practice and research for deep learning. 2018. arXiv:1811.03378.
Nair V, Hinton G. Rectified linear units improve restricted boltzmann machines. In: Proceedings of the 27th international conference on machine learning (ICML-10). 2010.
Krizhevsky A, Sutskever I, Hinton G. Imagenet classification with deep convolutional neural networks. In: Advances in neural information processing systems. 2012. p. 1097–105.
Li Y, Ding P, Li B. Training neural networks by using power linear units (PoLUs). 2018. arXiv:1802.00212.
Ramachandran P, Zoph B, Le Q. Searching for activation functions. In: Sixth international conference on learning representations (ICLR), Vancouver. 2018.
Severyn A, Moschitti A. Unitn: Training deep convolutional neural network for twitter sentiment classification. In: Proceedings of the 9th international workshop on semantic evaluation (SemEval 2015). 2015. https://doi.org/10.18653/v1/s15-2079.
Li J, Ng W, Yeung D, Chan P. Bi-firing deep neural networks. Int J Mach Learn Cybern. 2014;5(1):73–83.
Zhao H, Liu F, Li L, Luo C. A novel softplus linear unit for deep convolutional neural networks. Appl Intell. 2017;48(7):1707–20. https://doi.org/10.1007/s10489-017-1028-7.
Liew S, Khalil-Hani M, Bakhteri R. Bounded activation functions for enhanced training stability of deep neural networks on visual pattern recognition problems. Neurocomputing. 2016;216:718–34. https://doi.org/10.1016/j.neucom.2016.08.037.
Sodhi S, Chandra P. Bi-modal derivative activation function for sigmoidal feedforward networks. Neurocomputing. 2014;143:182–96. https://doi.org/10.1016/j.neucom.2014.06.007.
Nambiar V, Khalil-Hani M, Sahnoun R, Marsono M. Hardware implementation of evolvable block-based neural networks utilizing a cost efficient sigmoid-like activation function. Neurocomputing. 2014;140:228–41. https://doi.org/10.1016/j.neucom.2014.03.018.
Goodfellow I, Warde-Farley D, Mirza M, Courville A, Bengio Y. Maxout Networks. In: Proceedings of the 30th international conference on machine learning (ICML 2013). 2013.
Srivastava N, Hinton G, Krizhevsky A, Sutskever I, Salakhutdinov R. Dropout: a simple way to prevent neural networks from overfitting. J Mach Learn Res. 2014;15(1):1929–58.
Wu X, He R, Sun Z, Tan T. A light CNN for deep face representation with noisy labels. IEEE Trans Inf Forensics Secur. 2015;13(11):2884–96. https://doi.org/10.1109/tifs.2018.2833032.
Chang J, Chen Y. Batch-normalized maxout network in network. 2015. arXiv:1511.02583.
Cai M, Shi Y, Liu J. Deep maxout neural networks for speech recognition. In: IEEE workshop on automatic speech recognition and understanding. 2013. p. 291–6. https://doi.org/10.1109/asru.2013.6707745.
Park S, Kwak N. Analysis on the dropout effect in convolutional neural networks. In: Asian conference on computer vision. 2016. p. 189–204.
Lecun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proc IEEE. 1998;86(11):2278–324. https://doi.org/10.1109/5.726791.
Krizhevsky A, Hinton G. Learning multiple layers of features from tiny images. Toronto: University of Toronto; 2009.
Netzer Y, Wang T, Coates A, Bissacco A, Wu B, Ng A. Reading digits in natural images with unsupervised feature learning. In: NIPS workshop on deep learning and unsupervised feature learning. 2011.
Jebbara S, Cimiano P. Aspect-based relational sentiment analysis using a stacked neural network architecture. In: Proceedings of the twenty-second European conference on artificial intelligence. 2016.
Toth L. Convolutional deep maxout networks for phone recognition. In: Proceedings of the international speech communication association (INTERSPEECH). 2014.
Sainath T, Kingsbury B, Mohamed A, Dahl G, Saon G, Soltau H, Beran T, Aravkin A, Ramabhadran B. Improvements to deep convolutional neural networks for LVCSR. In: IEEE workshop on automatic speech recognition and understanding (ASRU). 2013. https://doi.org/10.1109/ASRU.2013.6707749.
The authors would like to thank the anonymous reviewers for their constructive evaluation of this paper, and also the reviewers in the Data Mining and Machine Learning Laboratory at Florida Atlantic University. Additionally, we acknowledge partial support by the NSF (CNS-1427536). Opinions, findings, conclusions, or recommendations in this paper are solely of the authors' and do not reflect the views of the NSF.
Florida Atlantic University, Boca Raton, USA
Gabriel Castaneda, Paul Morris & Taghi M. Khoshgoftaar
GC performed the primary literature review, analysis for this work and drafted the manuscript. TMK worked with GC to develop the article's framework and focus. PM executed the tests and helped to finalize this work. All authors read and approved the final manuscript.
Correspondence to Gabriel Castaneda.
Castaneda, G., Morris, P. & Khoshgoftaar, T.M. Evaluation of maxout activations in deep learning across several big data domains. J Big Data 6, 72 (2019) doi:10.1186/s40537-019-0233-0
Robert Ellis (mathematician)
Robert Mortimer Ellis (1926–2013) was an American mathematician, specializing in topological dynamics.[2]
Robert Mortimer Ellis
Born: September 16, 1926, Cleveland, Ohio, US
Died: December 6, 2013 (aged 87)
Nationality: American
Known for: Ellis semigroup of a dynamical system;[1] Ellis action of a dynamical system[1]
Scientific career
Fields: Mathematics
Doctoral advisor: Walter Gottschalk
Ellis grew up in Philadelphia, served briefly in the U.S. Army, and then studied at the University of Pennsylvania, where he received his Ph.D. in 1953.[3] He was a postdoc at the University of Chicago from 1953 to 1955. He was an assistant professor at Pennsylvania State University from 1955 to 1957 and an associate professor there from 1957 to 1963, and then a full professor at Wesleyan University from 1963 to 1967. At the University of Minnesota he was a full professor from 1967 to 1995, when he retired as professor emeritus.[2]
He developed an algebraic approach to topological dynamics, leading to a strengthening, with an alternate proof, of the Furstenberg structure theorem.[4] He was the author or coauthor of about 40 research publications. In the year of his retirement, a conference was held in his honor at the University of Minnesota on April 5 and 6, 1995; the conference proceedings were published in 1998 by the American Mathematical Society (AMS).[2][5] He was elected a Fellow of the AMS in 2012.
Ellis was predeceased by his wife. Upon his death he was survived by a grandchild, a daughter, and his son David, a professor of mathematics at Beloit College and a long-time collaborator with his father.[2]
References
1. Akin, Ethan (1997). Recurrence in Topological Dynamics: Furstenberg Families and Ellis Actions. Springer. pp. 133–134. ISBN 9780306455506.
2. "In Memoriam: Robert Ellis". School of Mathematics, University of Minnesota.
3. Robert Mortimer Ellis at the Mathematics Genealogy Project
4. Ellis, Robert (1978). "The Furstenberg structure theorem". Pacific J. Math. 76 (2): 345–348. doi:10.2140/pjm.1978.76.345.
5. Ellis, Robert, Mahesh G. Nerurkar, Douglas Dokken, and David Ellis. Topological Dynamics and Applications: A Volume in Honor of Robert Ellis: Proceedings of a Conference in Honor of the Retirement of Robert Ellis, April 5–6, 1995, University of Minnesota. Vol. 215. American Mathematical Soc., 1998.
MICROWAVE SPECTRUM AND RING-BENDING VIBRATION OF 3-OXETANONE
Gibson, James S.
Harris, David O.
The microwave spectrum of 3-oxetanone (the four-membered ring O-CH$_2$-C(=O)-CH$_2$) has been observed and assigned in the ground vibrational state and first five excited states of the ring-bending vibrational mode. The spectra of both $C^{13}$ isotopically substituted species in natural abundance have also been measured. The features of the microwave spectrum were consistent with a single-minimum ring-bending potential function as determined in a previous far-infrared study. The observed variation of the rotational constants was fit to the vibrational states of this potential. A model for the variation was set up assuming curvilinear motion of the atoms with no bond stretching that reproduces the observed variation of the rotational constants in a reasonable manner. The vibrational potential was interpreted in terms of the energy contributions from ring-angle deformations and torsional interactions. The $C^{13}$ data were used to determine a partial structure of the molecule, which gave $r(\mathrm{C-C})=1.521$ \AA\ and $\angle(\mathrm{C-C-C})=88.13^{\circ}$. The dipole moment for the ground vibrational state was measured as $\mu = 0.887$ D.
Author Institution: Department of Chemistry, University of California | CommonCrawl |
\begin{document}
\title{Chebyshev's bias for products of two primes}
\author{Kevin Ford} \email{[email protected], [email protected]} \author{Jason Sneed} \address{ Department of Mathematics, University of Illinois at Urbana-Champaign, 1409 West Green St., Urbana, IL 61801}
\date{\today} \begin{abstract} Under two assumptions, we determine the distribution of the difference between two functions each counting the numbers $\leqslant x$ that are in a given arithmetic progression modulo $q$ and the product of two primes.
The two assumptions are (i) the Extended Riemann Hypothesis for Dirichlet $L$-functions modulo $q$, and (ii) that the imaginary parts of the nontrivial zeros of these $L$-functions are linearly independent over the rationals. Our results are analogs of similar results proved for primes in arithmetic progressions by Rubinstein and Sarnak. \end{abstract}
\thanks{2000 Mathematics Subject Classification:11M06, 11N13, 11N25} \thanks{The research of K.~F. was supported in part by National Science Foundation grants DMS-0555367 and DMS-0901339.}
\maketitle
\section{Introduction}
\subsection{Prime number races}
Let $\pi(x;q,a)$ denote the number of primes $\leqslant x$ in the progression $a\!\! \mod q$. For fixed $q$, the functions $\pi(x;q,a)$ (for $a\in A_q$, the set of residues coprime to $q$) all satisfy \begin{equation}\label{PNT} \pi(x;q,a) \sim \frac{x}{\varphi(q)\log x}, \end{equation} where $\varphi$ is Euler's totient function \cite{Da}. There are, however, curious inequities. For example, $\pi(x;4,3) \geqslant \pi(x;4,1)$ seems to hold for most $x$, an observation of Chebyshev from 1853 \cite{Ch}. In fact, $\pi(x;4,3) < \pi(x;4,1)$ for the first time at $x = 26,861$ \cite{Le}. More generally, one can ask various questions about the behavior of \begin{equation}\label{Delta} \Delta(x;q,a,b) := \pi(x;q,a)-\pi(x;q,b) \end{equation} for distinct $a,b\in A_q$. Does $\Delta(x;q,a,b)$ change sign infinitely often? Where is the first sign change? How many sign changes are there with $x\leqslant X$? What are the extreme values of $\Delta(x;q,a,b)$? Such questions are colloquially known as \emph{prime race problems}, and were studied extensively by Knapowski and Tur\'an in a series of papers beginning with \cite{KT}.
See the survey articles \cite{FK} and \cite{GM} and references therein for an introduction to the subject and summary of major findings. Properties of Dirichlet $L$-functions lie at the heart of such investigations.
Despite the tendency for the function $\Delta(x;4,3,1)$ to be positive, Littlewood \cite{Li} showed that it changes sign infinitely often. Similar results have been proved for other $q,a,b$ (see \cite{JS} and references therein). Still, in light of Chebyshev's observation, we can ask how frequently $\Delta(x;q,a,b)$ is positive and how often it is negative. These questions are best addressed in the context of \emph{logarithmic density}. A set $S$ of positive integers has logarithmic density $$ \ensuremath{\delta}(S) = \lim_{x\to\infty} \frac{1}{\log x} \sum_{\substack{n\leqslant x \\ n\in S}} \frac{1}{n} $$ provided the limit exists. Let $\ensuremath{\delta}(q,a,b) = \ensuremath{\delta}(P(q,a,b))$, where $P(q,a,b)$ is the set of integers $n$ with $\Delta(n;q,a,b)>0$. In 1994, Rubinstein and Sarnak \cite{RS} showed that $\delta(q;a,b)$ exists, assuming two hypotheses: (i) the Extended Riemann Hypothesis for Dirichlet $L$-functions modulo $q$ (ERH$_q$), and (ii) that the imaginary parts of zeros of each Dirichlet $L$-function are linearly independent over the rationals (GSH$_q$, the Grand Simplicity Hypothesis). The authors also gave methods to accurately estimate the ``bias'', for example showing that $\delta(4;3,1)\approx 0.996$ in Chebyshev's case. More generally, $\delta(q;a,b)=\frac12$ when $a$ and $b$ are either both quadratic residues modulo $q$ or both quadratic nonresidues (unbiased prime races), but $\delta(q;a,b)>\frac12$ whenever $a$ is a quadratic non-residue and $b$ is a quadratic residue. A bit later we will discuss the reasons behind these phenomena. Sharp asymptotics for $\delta(q;a,b)$ have recently been given by Fiorilli and Martin \cite{FM}, which explain other properties of these densities.
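The logarithmic density above can be illustrated numerically. The following short Python sketch is purely illustrative and not part of any argument in this paper: it sieves the primes up to a cutoff $X$ (our own choice of parameter) and computes the partial logarithmic sum $\frac{1}{\log X}\sum_{n\leqslant X,\, n\in P(4,3,1)} \frac{1}{n}$. Convergence to $\delta(4;3,1)$ is very slow, so modest values of $X$ serve only to illustrate the definition.

\begin{verbatim}
# Minimal illustrative sketch (not from the paper): partial logarithmic sums
# (1/log X) * sum_{n <= X, Delta(n;4,3,1) > 0} 1/n, whose limit (under ERH_4
# and GSH_4) is delta(4;3,1).  The cutoff X is an arbitrary choice.
import math

def partial_log_density(X):
    is_prime = bytearray([1]) * (X + 1)      # simple sieve of Eratosthenes
    is_prime[0:2] = b"\x00\x00"
    for p in range(2, int(X ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = bytearray(len(range(p * p, X + 1, p)))
    delta, log_sum = 0, 0.0
    for n in range(2, X + 1):
        if is_prime[n]:
            if n % 4 == 3:
                delta += 1                   # Delta(n;4,3,1) increases
            elif n % 4 == 1:
                delta -= 1                   # Delta(n;4,3,1) decreases
        if delta > 0:                        # n belongs to P(4,3,1)
            log_sum += 1.0 / n
    return log_sum / math.log(X)

# Example call: partial_log_density(10**6)
\end{verbatim}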
\subsection{Quasi-prime races}
In this paper we develop a parallel theory for comparison of functions $\pi_2(x;q,a)$, the number of integers $\leqslant x$ which are in the progression $a\!\! \mod q$ and which are the product of two primes $p_1p_2$ ($p_1=p_2$ allowed). Put $$ \Delta_2(x;q,a,b) := \pi_2(x;q,a)-\pi_2(x;q,b), $$ let $P_2(q,a,b)$ be the set of integers $n$ with $\Delta_2(n;q,a,b)>0$, and set $\ensuremath{\delta}_2(q,a,b)=\ensuremath{\delta}(P_2(q,a,b))$. The table below shows all such quasi-primes up to 100 grouped in residue classes modulo 4.
\begin{center}
\begin{tabular}{|c|c|} \hline $p_1p_2 \equiv 1 \pmod 4$ & $p_1p_2 \equiv 3 \pmod 4$ \\ \hline 9 & 15\\ \hline 21 & 35\\ \hline 25 & 39\\ \hline 33 & 51\\ \hline 49 & 55\\ \hline 57 & 87\\ \hline 65 & 91\\ \hline 69 & 95\\ \hline 77 & \\ \hline 85 & \\ \hline 93 & \\ \hline \end{tabular} \end{center}
Observe that $\Delta_2(x;4,3,1) \leqslant 0$ for $x\leqslant 100$, and in fact the smallest $x$ with $\Delta_2(x;4,3,1) > 0$ is $x=26747$ (amazingly close to the first sign change of $\Delta(x;4,3,1)$). Some years ago Richard Hudson conjectured that the bias for products of two primes is always reversed from that of primes; i.e., $\delta_2(q;a,b)<\frac12$ when $a$ is a quadratic non-residue modulo $q$ and $b$ is a quadratic residue. Under the same assumptions as \cite{RS}, namely ERH$_q$ and GSH$_q$, we confirm Hudson's conjecture and also show that the bias is less pronounced.
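The sign changes mentioned above (at $x = 26,861$ for primes and at $x=26747$ for products of two primes) are easy to reproduce by direct computation. The following Python sketch, included only as an illustration and not used anywhere in the proofs, sieves the function $\Omega(n)$ (the number of prime factors of $n$ counted with multiplicity), forms the counting functions in the residue classes $3$ and $1$ modulo $4$, and reports the first point at which the indicated inequality holds; the cutoff $30000$ is an arbitrary choice large enough to contain both sign changes.

\begin{verbatim}
# Minimal illustrative sketch (not from the paper).  first_sign_change(L, 1)
# finds the least x <= L with Delta(x;4,3,1) < 0, and first_sign_change(L, 2)
# the least x <= L with Delta_2(x;4,3,1) > 0; the values quoted in the text
# are 26861 and 26747 respectively.
def omega_counts(limit):
    # big_omega[n] = number of prime factors of n counted with multiplicity
    big_omega = [0] * (limit + 1)
    for p in range(2, limit + 1):
        if big_omega[p] == 0:                # p is prime
            pk = p
            while pk <= limit:
                for m in range(pk, limit + 1, pk):
                    big_omega[m] += 1
                pk *= p
    return big_omega

def first_sign_change(limit, k):
    big_omega = omega_counts(limit)
    delta = 0                                # running value of the difference
    for n in range(2, limit + 1):
        if big_omega[n] == k:
            if n % 4 == 3:
                delta += 1
            elif n % 4 == 1:
                delta -= 1
        if (k == 1 and delta < 0) or (k == 2 and delta > 0):
            return n
    return None

# Example calls: first_sign_change(30000, 1), first_sign_change(30000, 2)
\end{verbatim}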
\begin{thm}\label{thm1} Let $a,b$ be distinct elements of $A_q$. Assuming ERH$_q$ and GSH$_q$, $\delta_2(q;a,b)$ exists. Moreover, if $a$ and $b$ are both quadratic residues modulo $q$ or both quadratic non-residues, then $\delta_2(q;a,b)=\frac12$. Otherwise, if $a$ is a quadratic nonresidue and $b$ is a quadratic residue, then $$ 1 - \delta(q;a,b) < \delta_2(q;a,b) < \frac12. $$ \end{thm}
We can accurately estimate $\delta_2(q;a,b)$ borrowing methods from \cite[\S 4]{RS}. In particular we have $$ \delta_2(4;3,1) \approx 0.10572. $$ We deduce Theorem \ref{thm1} by connecting the distribution of $\Delta_2(x;q,a,b)$ with the distribution of $\Delta(x;q,a,b)$. Although the relationship is ``simple'', there is no elementary way to derive it, say by writing $$ \pi_2(x;q,a) = \frac12 \sum_{p\leqslant x} \pi\(\frac{x}{p};q,ap^{-1}\!\! \mod q\) + \frac12 \sum_{\substack{p\leqslant \sqrt{x} \\ p^2\equiv a\!\!\! \pmod{q}}} 1. $$ In particular, our result depends strongly on the assumption that the zeros of the $L$-functions modulo $q$ have only simple zeros. Let $N(q,a)$ be the number of $x\in A_q$ with $x^2\equiv a\pmod{q}$, and let $C(q)$ be the set of nonprincipal Dirichlet characters modulo $q$.
\begin{thm}\label{DeltaDelta2} Assume ERH$_q$ and for each $\chi\in C(q)$, $L(\frac12,\chi)\ne 0$ and the zeros of $L(s,\chi)$ are simple. Then $$ \frac{\Delta_2(x;q,a,b) \log x}{\sqrt{x} \log\log x} = \frac{N(q,b)-N(q,a)}{2\phi(q)} - \frac{\log x}{\sqrt{x}} \Delta(x;q,a,b) + \Sigma(x;q,a,b), $$
where $\frac{1}{Y} \int_1^Y |\Sigma(e^y;q,a,b)|^2\, dy = o(1)$ as $Y\to \infty$. \end{thm}
The expression for $\Delta_2$ given in Theorem \ref{DeltaDelta2} must be modified if some $L(s,\chi)$ has multiple zeros; see \S \ref{sec:outline} for details.
Figures 1, 2, and 3 show
graphs corresponding to $(q,a,b)=(4,3,1)$, plotted on a logarithmic scale from $x=10^3$ to $x=10^9$. While $\Sigma(x;4,3,1)$ appears to be oscillating around $-0.2$, this is caused by some terms in $\Sigma(x;4,3,1)$ of order $1/\log\log x$, and $\log\log 10^9 \approx 3.03$. By Theorem \ref{DeltaDelta2}, $\Sigma(x;4,3,1)$ will (assuming ERH$_4$ and GSH$_4$)
eventually settle down to oscillating about 0.
\begin{figure}
\caption{$\Sigma(x;4,3,1)$}
\end{figure}
It is not immediate that Theorem \ref{thm1} follows from Theorem \ref{DeltaDelta2}. One first needs more precise information about the distribution of $\Delta(x;q,a,b)$ from \cite{RS}.
\newtheorem*{theoremRS}{Theorem RS}\label{Theorem RS} \begin{theoremRS}{\cite[\S 1]{RS}}\label{RSthm} Assume ERH$_q$ and GSH$_q$. For any distinct $a,b\in A_q$, the function \begin{equation}\label{Deltanorm} \frac{u \Delta(e^u;q,a,b)}{e^{u/2}} \end{equation} has a probabilistic distribution. This distribution (i) has mean $(N(q,b)-N(q,a))/\phi(q)$, (ii) is symmetric with respect to its mean, and (iii) has
a continuous density function. \end{theoremRS}
Assume $a$ is a quadratic nonresidue modulo $q$ and $b$ is a quadratic residue. Then $N(q,b)-N(q,a)>0$. Let $f$ be the density function for the distribution of \eqref{Deltanorm}, that is, $$ f(t) = \frac{d}{dt} \lim_{U\to\infty} \frac{1}{U} \text{meas} \{ 0\leqslant u\leqslant U : ue^{-u/2} \Delta(e^u;q,a,b) \leqslant t \}. $$ We see from Theorem RS that $$ \delta(q,a,b) = \int_0^\infty f(t) \, dt > \frac12 $$ and from Theorem \ref{DeltaDelta2} that $$ \delta_2(q,a,b) = \int_{-\infty}^{\frac{N(q,b)-N(q,a)}{2\phi(q)}} f(t) \, dt, $$ from which Theorem \ref{thm1} follows.
Theorem \ref{DeltaDelta2} also determines the joint distribution of any vector function \begin{equation}\label{vector2} \frac{u}{e^{u/2}\log u} \( \Delta_2(e^u;q,a_1,b_1), \ldots, \Delta_2(e^u;q,a_r,b_r) \). \end{equation}
\begin{thm}\label{thmjoint} If $f(x_1,\ldots,x_r)$ is the density function of $$ \frac{u}{e^{u/2}} \( \Delta(e^u;q,a_1,b_1), \ldots, \Delta(e^u;q,a_r,b_r) \), $$ then the joint density function of \eqref{vector2} is $$ f\( \frac{N(q,b_1)-N(q,a_1)}{2\phi(q)} - x_1, \ldots, \frac{N(q,b_r)-N(q,a_r)}{2\phi(q)} - x_r \). $$ \end{thm}
\subsection{Origin of Chebyshev's bias}
From an analytic point of view ($L$-functions), the weighted sum \begin{equation}\label{DeltaLambda} \Delta^*(x;q,a,b)=\sum_{\substack{n\leqslant x \\ n\equiv a \bmod{q} }} \Lambda(n) - \sum_{\substack{n\leqslant x \\ n\equiv b \bmod{q} }} \Lambda(n), \end{equation} where $\Lambda$ is the von Mangoldt function, is more natural than \eqref{Delta}. Expressing $\Delta^*(x;q,a,b)$ in terms of sums over zeros of $L$-functions in the standard way (\S 19 of \cite{Da}), we obtain, on ERH$_q$, $$ e^{-u/2} \phi(q) \Delta^*(e^u;q,a,b) = - \sum_{\chi\in C(q)} \( \overline{\chi}(a)-\overline{\chi}(b) \) \sum_{\gamma} \frac{e^{i\gamma u}}{1/2+i\gamma} + O(u^2 e^{-u/2}), $$ where $\ensuremath{\gamma}$ runs over imaginary parts of nontrivial zeros of $L(s,\chi)$ (counted with multiplicity).
Hypothesis GSH$_q$ implies, in particular, that $L(1/2,\chi)\ne 0$. Each summand $e^{i\gamma u}/(1/2+i\gamma)$ is thus a harmonic with mean zero as $u\to\infty$, and GSH$_q$ implies that the harmonics behave independently. Hence, we expect that $e^{-u/2} \phi(q) \Delta^*(e^u;q,a,b)$ will behave like a mean zero random variable. On the other hand, the right side of \eqref{DeltaLambda} contains not only terms corresponding to prime $n$ but terms corresponding to powers of primes. Applying the prime number theorem for arithmetic progressions \eqref{PNT} to the terms $n=p^2$ in \eqref{DeltaLambda} gives $$ \Delta^*(x;q,a,b) = \sum_{\substack{p\leqslant x \\ p\equiv a\bmod{q} }} \log p - \sum_{\substack{p\leqslant x \\ p\equiv b\bmod{q} }} \log p + \frac{x^{1/2}}{\phi(q)} \( N(q,a)-N(q,b) \) + O(x^{1/3}). $$ Hence, on ERH$_q$ and GSH$_q$, we expect the expression \begin{equation}\label{sump} \frac{1}{\sqrt{x}} \Biggl( \sum_{\substack{p\leqslant x \\ p\equiv a\bmod{q} }} \log p - \sum_{\substack{p\leqslant x \\ p\equiv b\bmod{q} }} \log p \Biggr) \end{equation} to behave like a random variable with mean $(N(q,b)-N(q,a))/\phi(q)$. Finally, the distribution of $\Delta(x;q,a,b)$ is obtained from the distribution of \eqref{sump} and partial summation.
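For readers who wish to experiment with the heuristic above, partial sums of the series $\sum_{\gamma} e^{i\gamma u}/(\frac12+i\gamma)$ appearing in the formula above are easy to evaluate numerically once a list of ordinates is available. The Python sketch below is only an illustration and makes two assumptions that are not part of this paper: the positive ordinates of the zeros of $L(s,\chi)$ must be supplied by the user (for instance from published tables of zeros), and $\chi$ is taken to be real, as in the case $q=4$, so that the zeros occur in conjugate pairs $\pm\gamma$ and conjugate terms can be combined.

\begin{verbatim}
# Minimal illustrative sketch (not from the paper): evaluate a truncated
# version of the sum over zeros  sum_gamma x^{i gamma} / (1/2 + i gamma)
# for a real character chi, given a user-supplied list `gammas` of positive
# ordinates of zeros of L(s, chi).  No ordinates are hard-coded here.
import cmath
import math

def zero_sum(x, gammas):
    u = math.log(x)
    total = 0.0
    for g in gammas:
        # pair gamma with -gamma:  x^{ig}/(1/2+ig) + x^{-ig}/(1/2-ig)
        #                        = 2 Re[ x^{ig}/(1/2+ig) ]
        total += 2.0 * (cmath.exp(1j * g * u) / (0.5 + 1j * g)).real
    return total

# Example shape of a call (the ordinates must be supplied by the user):
# gammas = [...]   # positive ordinates of zeros of L(s, chi_4)
# print(zero_sum(26861.0, gammas))
\end{verbatim}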
\subsection{Analyzing $\Delta_2(x;q,a,b)$}
A natural analog of $\Delta^*(x;q,a,b)$ is \begin{equation}\label{DeltaLambda2} \sum_{\substack{mn\leqslant x \\ mn\equiv a\bmod{q} }} \Lambda(m)\Lambda(n) - \sum_{\substack{mn\leqslant x \\ mn\equiv b\bmod{q} }} \Lambda(m)\Lambda(n). \end{equation} As with $\Delta^*(x;q,a,b)$, the expression in \eqref{DeltaLambda2} can be easily written as a sum over zeros of $L$-functions plus a small error. The main problem now is that the principal summands, namely $\log p_1 \log p_2$ for primes $p_1,p_2$, are very irregular as a function of $p_1 p_2$, and thus estimates for $\Delta_2(x;q,a,b)$ cannot be recovered by partial summation. We get around this problem using a double integration, a method which goes back to Landau \cite[\S 88]{La}. We have \begin{equation}\label{DeltaG} \begin{split} \Delta_2(x;q,a,b) &= \frac{1}{\phi(q)} \sum_{\chi \in C(q)} \( \overline{\chi}(a) - \overline{\chi}(b) \) \sum_{\substack{n=p_1
p_2\leqslant x \\ p_1 \leqslant p_2}} \chi(n) \\ &= \frac{1}{2\phi(q)} \sum_{\chi \in C(q)} \( \overline{\chi}(a) - \overline{\chi}(b) \) \int_0^\infty \int_0^\infty G(x,u,v;\chi) \, du\, dv + O\pfrac{\sqrt{x}}{\log x}, \end{split} \end{equation} where \begin{equation} \label{Gsum} G(x,u,v;\chi) = \sum_{p_1 p_2 \leqslant x} \frac{\chi(p_1 p_2) \log p_1
\log p_2}{p_1^u p_2^v}. \end{equation} The related functions $$ G^*(x,u,v;\chi) = \sum_{mn \leqslant x} \frac{\chi(mn) \Lambda(m)
\Lambda(n)}{m^u n^v} $$ are more ``natural'' from an analytic point of view, being easily expressed in terms of zeros of Dirichlet $L$-functions. By the reasoning of the previous subsection, each $G^*(x,u,v;\chi)$ is expected to be unbiased, the bias in $\Delta_2(x;q,a,b)$ originating from the summands in $G^*(x,u,v;\chi)$ where $m$ is not prime or $n$ is not prime.
\subsection{A heuristic argument for the bias in $\Delta_2(x;q,a,b)$}
We conclude this introduction with a heuristic evaluation of the bias in $\Delta_2(x;q,a,b)$, which originates from the difference between functions $G(x;u,v;\chi)$ and $G^*(x,u,v;\chi)$. For simplicity of exposition, we'll concentrate on the special case $(q,a,b)=(4,3,1)$. In this case, the bias arises from terms $p_1 p_2^2$ and $p_1^2 p_2^2$ which appear in $G^*(x;u,v;\chi)$ but not in $G(x,u,v;\chi)$. Let $\chi$ be the non-principal character modulo 4, so that $$ \frac12 \int_0^\infty \int_0^\infty
\bigl( G^*(x,u,v;\chi) - G(x,u,v;\chi) \bigr)\, du\, dv = \frac12 \sum_{\substack{p_1^a p_2^b \leqslant x \\ \max(a,b)\geqslant 2}} \frac{\chi(p_1^a p_2^b)}{ab}. $$ There are $O(x^{1/2}/\log x)$ terms with $\min(a,b)\geqslant 2$ and $\max(a,b)\geqslant 3$. By the prime number theorem and partial summation, $$ \frac12 \sum_{p_1^2 p_2^2 \leqslant x} \frac14 = \frac18 \sum_{p\leqslant
\sqrt{x}} \pi \( \sqrt{x/p^2} \) \sim \frac{x^{1/2} \log\log x}{2\log x}. $$ Thus, \begin{align*} \Delta_2(x;4,3,1) &= - \frac{1}{2} \sum_{mn \leqslant x} \frac{\chi(mn) \Lambda(m)
\Lambda(n)}{\log m \log n} - \Bigg( \sum_{k=2}^{\infty} \frac{1}{k} \sum_{p_1^k \leqslant x} \chi(p_1^k) \Delta(x/p_1^k;4,3,1) \Bigg) \\ &\qquad +\( \frac12+o(1) \)\frac{x^{1/2} \log\log x}{\log x}. \end{align*} By Theorem RS, $\Delta(y;4,3,1)=y^{1/2}/\log y + E(y)$, where $E(y)$ oscillates with mean 0. Thus, $$ \sum_{k=2}^{\infty} \frac{1}{k} \sum_{p_1^k \leqslant x} \chi(p_1^k) \Delta(x/p_1^k;4,3,1) = \sum_{k=2}^{\infty} \frac{2}{k} \sum_{p_1^k \leqslant x} \chi(p_1^k) \frac{\sqrt{x/p_1^k}}{\log(x/p_1^k)} + E'(x), $$ where $E'(x)$ is expected to oscillate with mean zero. The $k=2$ terms are $$ \sum_{p_1^2 \leqslant x} \frac{\sqrt{x/p_1^2}}{\log(x/p_1^2)} \sim \frac{\sqrt{x} \log\log x}{\log x}, $$ while the terms corresponding to $k\geqslant 3$ contribute $$ \ll \sum_{k=3}^{\infty} \frac{1}{k} \sum_{p_1^k \leqslant x} \frac{\sqrt{x/p_1^k}}{\log(x/p_1^k)} \ll \frac{\sqrt{x}}{\log x}. $$ Thus, we find that \begin{align*} \Delta_2(x;4,3,1) &= - \frac{1}{2} \sum_{mn \leqslant x} \frac{\chi(mn) \Lambda(m)
\Lambda(n)}{\log m \log n} - \( \frac12+o(1) \)\frac{x^{1/2} \log\log x}{\log x} + E'(x). \end{align*}
\subsection{Further problems}
It is natural to consider the distribution, in arithmetic progressions, of numbers composed of exactly $k$ prime factors, where $k\geqslant 3$ is fixed. As with the cases $k=1$ and $k=2$, we expect there to be no bias if we count all numbers $p_1^{a_1} p_2^{a_2} \cdots p_k^{a_k}$ with weight $(a_1 \cdots a_k)^{-1}$. If, however, we count terms which are the product of precisely $k$ primes (that is, numbers $p_1^{a_1} \cdots p_j^{a_j}$ with $a_1+\cdots+a_j=k$), then there will be a bias. Hudson has conjectured that the bias will be in the same direction as for primes when $k$ is odd, and in the opposite direction for even $k$. We conjecture that, in addition, the bias becomes less pronounced as $k$ increases.
\section{Preliminaries}
With $\chi$ fixed, the letter $\gamma$, with or without subscripts, denotes the imaginary part of a zero of $L(s,\chi)$ inside the critical strip. In sums over $\ensuremath{\gamma}$, each term appears with its multiplicity $m(\ensuremath{\gamma})$ unless we specify that we sum over distinct $\ensuremath{\gamma}$. Constants implied by $O$- and $\ll$-symbols depend only on $\chi$ (and hence on $q$) unless additional dependence is indicated with a subscript. Let $$ A(\chi) = \begin{cases} 1 & \chi^2=\chi_0 \\ 0 & \text{else} \end{cases}, $$ where $\chi_0$ is the principal character modulo $q$. That is, $A(\chi)=1$ if and only if $\chi$ is a real character. For $\chi\in C(q)$, define $$ F(s,\chi)= \sum_{p} \frac{\chi(p) \log p}{p^s}. $$ The following estimates are standard; see e.g. \cite[\S 15,16]{Da}.
\begin{lem} \label{FsHs} Let $\chi\in C(q)$, assume ERH$_q$ and fix $c>\frac13$. Then $F(s,\chi)= -\frac{L'}{L}(s,\chi) + A(\chi) \frac{\zeta'}{\zeta}(2s) + H(s,\chi)$, where $H(s,\chi)$ is analytic and uniformly bounded in the half-plane $\Re s \geqslant c$. \end{lem}
\begin{lem} \label{NTchi} Let $\chi$ be a Dirichlet character modulo $q$. Let $N(T,\chi)$ denote the number of zeros of $L(s, \chi)$ with $0<\Re s<1$ and
$|\Im s|<T$. Then \begin{enumerate} \item $N(T,\chi)=O(T\log(qT))$ for $T\geqslant 1$. \item $N(T,\chi) - N(T-1,\chi) = O(\log(qT))$ for $T\geqslant 1$. \item Uniformly for $s=\sigma+it$ and $\sigma\geqslant -1$, $$ \frac{L'(s,\chi)}{L(s,\chi)} =
\sum_{|\gamma-t|<1} \frac{1}{s-\rho} + O(\log q(|t|+2)). $$ \item $-\frac{\zeta'}{\zeta}(\sigma) = \frac{1}{\sigma -1}+O(1) $ uniformly
for $\sigma\geqslant \frac12$, $\sigma\ne 1$.
\item $\big|\frac{\zeta'}{\zeta}(\sigma+iT) \big| \leqslant
-\frac{\zeta'}{\zeta}(\sigma)$ for $\sigma > 1$. \end{enumerate} \end{lem}
For a suitably small, fixed $\delta>0$, we say that a number $T\geqslant 2$ is \emph{admissible} if for all $\chi\in C(q)\cup \{\chi_0\}$ and all zeros $\frac12+i\ensuremath{\gamma}$ of $L(s,\chi)$,
$|\ensuremath{\gamma}-T| \geqslant \delta (\log T)^{-1}$. By Lemma \ref{NTchi}, we can choose $\ensuremath{\delta}$ small enough, depending on $q$, so that there is an admissible $T$ in $[U,U+1]$ for all $U\geqslant 2$. From Lemma \ref{NTchi} we obtain
\begin{lem}\label{zeta}
Uniformly for $\sigma \geqslant \frac25$ and admissible $T\geqslant 2$, $$
|F(\sigma+iT,\chi)| = O(\log^2 T). $$ \end{lem}
\begin{lem} \label{sumg1g2} Fix $\chi\in C(q)$ and assume $L(\frac12,\chi)\ne 0$. For $A\geqslant 0$ and real $k\geqslant 0$, $$
\sum_{\substack{ |\ensuremath{\gamma}_1|, |\ensuremath{\gamma}_2| \geqslant A \\ |\ensuremath{\gamma}_1-\ensuremath{\gamma}_2| \geqslant
1}} \frac{{\log}^k (|\ensuremath{\gamma}_1|+3) {\log}^k (|\ensuremath{\gamma}_2|+3)}{{|\ensuremath{\gamma}_1|}
{|\ensuremath{\gamma}_2|} |\ensuremath{\gamma}_1-\ensuremath{\gamma}_2|} \ll_{k} \frac{(\log (A+3))^{2k+3}}{A+1}. $$ \end{lem}
\begin{proof} The sum in question is at most twice the sum
of terms with $|\ensuremath{\gamma}_2| \geqslant |\ensuremath{\gamma}_1|$, which is $$
\ll \sum_{|\ensuremath{\gamma}_2|\geqslant A}
\frac{{\log}^{2k} (|\ensuremath{\gamma}_2|+3)}{{|\ensuremath{\gamma}_2|}} \bigg(\frac{1}{|\ensuremath{\gamma}_2|}
\sum_{|\ensuremath{\gamma}_1| < \frac{|\ensuremath{\gamma}_2|}{2}} \frac{1}{{|\ensuremath{\gamma}_1|}} +
\frac{1}{{|\ensuremath{\gamma}_2|}} \sum_{\substack{\frac{|\ensuremath{\gamma}_2|}{2} \leqslant |\ensuremath{\gamma}_1| \leqslant
|\ensuremath{\gamma}_2| \\ |\ensuremath{\gamma}_2-\ensuremath{\gamma}_1| \geqslant 1}} \frac{1}{|\ensuremath{\gamma}_2-\ensuremath{\gamma}_1|} \bigg). $$
By Lemma \ref{NTchi} (1), the two sums over $\ensuremath{\gamma}_1$ are $O(\log^2 (|\ensuremath{\gamma}_2|+3))$. A further application of Lemma \ref{NTchi} (1) completes the proof. \end{proof}
We conclude this section with a truncated version of the Perron formula for $G(x,u,v;\chi)$.
\begin{lem}\label{Perron} Uniformly for $x \leqslant T \leqslant 2x^2$,
$x \geqslant 2$, $u\geqslant 0$ and $v\geqslant 0$, we have \begin{equation} \label{Ginterror} G(x,u,v;\chi) = \frac{1}{2\pi i} \int_{c-iT}^{c+iT} F(s+u,\chi)F(s+v,\chi) \frac{x^s}{s} ds + O(\log^3 x), \end{equation} where $c=1+\frac{1}{\log x}$. \end{lem}
\begin{proof} For $\Re s > 1$, we have $$ F(s+u,\chi)F(s+v,\chi) = \sum_{n=1}^\infty f(n) n^{-s}, \qquad f(n)=\sum_{p_1p_2=n} \frac{\chi(p_1 p_2) \log p_1 \log p_2}{p_1^{u} p_2^v}. $$ Using the trivial estimate
$|f(n)| \leqslant \log^2 n$ and a standard argument \cite[\S 17, (3) and (5)]{Da}, we obtain the desired bounds. \end{proof}
\section{Outline of the proof of Theorem \ref{DeltaDelta2}}\label{sec:outline}
Throughout the remainder of this paper, fix $q$, assume ERH$_q$ and that $L(\frac12,\chi) \ne 0$ for each $\chi\in C(q)$. Let $$ \ensuremath{\varepsilon} = \frac{1}{100}. $$ We next define a function $T(x)$ as follows. For each positive integer $n$, let $T_n$ be an admissible value of $T$ satisfying $\exp (2^{n+1}) \leqslant T_n \leqslant \exp(2^{n+1})+1$ and set $T(x)=T_n$ for $\exp(2^n) < x \leqslant \exp (2^{n+1})$. In particular, we have $$ x \leqslant T(x) \leqslant 2x^2 \qquad (x\geqslant e^2). $$
Our first task is to express the double integrals in \eqref{DeltaG} in terms of sums over zeros of $L(s,\chi)$. This is proved in Section \ref{sec:analytic}.
\begin{lem}\label{analytic} Let $\chi\in C(q)$ and let $T=T(x)$. Then \begin{multline*}
x^{-1/2} \int_0^\infty \int_0^\infty G(x,u,v;\chi)\, du\, dv \\
= 2\int_{0}^{2\ensuremath{\varepsilon}} \!\!\! \int_{0}^{2\ensuremath{\varepsilon}} \sum_{|\ensuremath{\gamma}|\leqslant T} \frac{F(\frac{1}{2}+u-v+i\ensuremath{\gamma},\chi)x^{-v+i\ensuremath{\gamma}}}{\frac{1}{2}-v+i\ensuremath{\gamma}} du\, dv + \frac{A(\chi) \log\log x + \Sigma_1(x;\chi)+O(1)}{\log x}, \end{multline*} where
$\int_{1}^{Y} |\Sigma_1 (e^y;\chi)|^2 dy = O(Y)$. \end{lem}
The aggregate of terms $A(\chi)\log\log x/\log x$ accounts for the bias for products of two primes. As with the Chebyshev bias for primes, these terms arise from poles of $F(s,\chi)$ at $s=\frac12$ when $A(\chi)=1$ (see Lemma \ref{FsHs}) and correspond to the contribution to $F(s,\chi)$ from squares of primes. The double integral on the right side in Lemma \ref{analytic} is complicated to analyze. In Section \ref{sec:doubleint} we prove the following.
\begin{lem}\label{doubleint} Let $\chi\in C(q)$. Let $n$ be a positive integer, $2^n < \log x \leqslant 2^{n+1}$ and $T=T(x)$. Then \begin{align*}
2 \int_0^{2 \ensuremath{\varepsilon}} \int_0^{2 \ensuremath{\varepsilon}} &\sum_{|\ensuremath{\gamma}| \leqslant T}
\frac{ F(\frac{1}{2}+u-v+i \ensuremath{\gamma},\chi) x^{-v+i \ensuremath{\gamma}}}{\frac{1}{2}-v+i \ensuremath{\gamma}}
du\ dv = \frac{\Sigma_2(x;\chi)}{\log x}\\
& + 2\sum_{\substack{|\ensuremath{\gamma}| \leqslant T \\ \ensuremath{\gamma} \text{ distinct}}} m^2(\ensuremath{\gamma}) x^{i\ensuremath{\gamma}} (\frac{1}{2}+i\ensuremath{\gamma}) \int_0^{2 \ensuremath{\varepsilon}-2^{-n}} \frac{ x^{-v}}{\frac{1}{2}-v+i \ensuremath{\gamma}} \int_{v+2^{-n}}^{2 \ensuremath{\varepsilon}} \frac{du}{(u-v)(\frac{1}{2}-u+i \ensuremath{\gamma})} dv, \end{align*}
where $\int_{1}^{Y} |\Sigma_2(e^y;\chi)|^2 dy = o(Y\log^2 Y)$. \end{lem}
The terms on the right in Lemma \ref{doubleint} with small $|\ensuremath{\gamma}|$
will give the main term, and terms with larger $|\ensuremath{\gamma}|$ are considered as error terms. The next lemma is proved in Section \ref{sec:TT0}.
\begin{lem}\label{TT0} Let $\chi\in C(q)$. Let $n$ be a positive integer, $2^n < \log x \leqslant 2^{n+1}$, $T=T(x)$ and $2\leqslant T_0 \leqslant T$. Then \begin{align*}
&2\sum_{\substack{|\ensuremath{\gamma}| \leqslant T \\ \ensuremath{\gamma} \text{ distinct}}} m^2(\ensuremath{\gamma}) x^{i\ensuremath{\gamma}} (\frac{1}{2}+i\ensuremath{\gamma}) \int_0^{2 \ensuremath{\varepsilon}-2^{-n}} \frac{ x^{-v}}{\frac{1}{2}-v+i \ensuremath{\gamma}}
\int_{v+2^{-n}}^{2 \ensuremath{\varepsilon}} \frac{du}{(u-v)(\frac{1}{2}-u+i \ensuremath{\gamma})} dv \\
&\qquad = \frac{2\log\log x}{\log x} \sum_{\substack{|\gamma|\leqslant T_0 \\ \ensuremath{\gamma} \text{ distinct}}}
\frac{m^2(\gamma) x^{i\gamma}}{1/2+i\ensuremath{\gamma}} + O\pfrac{\log^3 T_0}{\log x} + \frac{\Sigma_3(x,T_0;\chi)}{\log x}, \end{align*} where $$
\frac{1}{Y}\int_1^Y |\Sigma_3(e^y,T_0;\chi)|^2\, dy \ll \frac{\log^5 T_0}{T_0} \log^2 Y. $$ \end{lem}
Combining Lemmas \ref{analytic}, \ref{doubleint} and \ref{TT0} with \eqref{DeltaG} yields (for fixed, large $T_0$) \begin{multline*} \Delta_2(x;q,a,b) = \frac{\sqrt{x}}{2\phi(q)} \sum_{\chi\in C(q)} \( \overline{\chi}(a)-\overline{\chi}(b) \) \Bigg[
\frac{\log\log x}{\log x} \Bigg( A(\chi)+2 \sum_{\substack{|\ensuremath{\gamma}|\leqslant T_0 \\ \ensuremath{\gamma} \text{ distinct}}} \frac{m^2(\ensuremath{\gamma}) x^{i\ensuremath{\gamma}}}{1/2+i\ensuremath{\gamma}}\Bigg) \\ + \frac{\Sigma_1(x;\chi)+\Sigma_2(x;\chi)+\Sigma_3(x,T_0;\chi) +O(\log^3 T_0)}{\log x} \Bigg], \end{multline*} where $$ \lim_{T_0\to \infty} \(\limsup_{Y\to \infty}
\frac{1}{Y\log^2 Y} \sum_{\chi\in C(q)} \int_{1}^Y |\Sigma_1(e^y;\chi)+
\Sigma_2(e^y;\chi)+\Sigma_3(e^y;T_0;\chi)|^2\, dy \) =0. $$ On the other hand (cf. \cite{RS}), $$ \Delta(x;q,a,b) = \frac{\sqrt{x}}{\log x} \( \frac{N(q,b)-N(q,a)}{\phi(q)} - \sum_{\chi\in C(q)} \( \overline{\chi}(a)-\overline{\chi}(b) \)
\sum_{|\ensuremath{\gamma}| \leqslant T_0}\frac{x^{i\ensuremath{\gamma}}}{1/2+i\ensuremath{\gamma}} + \Sigma_4(x;T_0) \), $$ where $$
\lim_{T_0\to \infty} \(\limsup_{Y\to\infty} Y^{-1} \int_{1}^Y |\Sigma_4(e^y;T_0)|^2\, dy \) =0. $$ Now assume $m(\ensuremath{\gamma})=1$ for all $\ensuremath{\gamma}$, and note that $$
\sum_{\chi\in C(q)} \( \overline{\chi}(a)-\overline{\chi}(b) \)
A(\chi) = N(q,a)-N(q,b). $$ Letting $T_0\to \infty$ finishes the proof of Theorem \ref{DeltaDelta2}.
\section{Proof of Lemma \ref{analytic}}\label{sec:analytic}
Assume ERH$_q$ throughout. We first estimate $G(x,u,v;\chi)$ for different ranges of $u,v$.
\begin{lem}\label{Gbounds} Let $\chi\in C(q)$, $\chi\ne \chi_0$. For $x\geqslant 4$, the following hold: \begin{enumerate}
\item For $u,v \geqslant \ensuremath{\varepsilon}$, $G(x,u,v;\chi) \ll x^{\frac{1}{2} - \frac {\ensuremath{\varepsilon}}{2}} \log^5 x$.
\item For $u \geqslant 2\ensuremath{\varepsilon}$, $v \leqslant \ensuremath{\varepsilon}$ and $T=T(x)$, \begin{align*}
x^{-1/2} G(x,u,v;\chi) &= \sum_{|\ensuremath{\gamma}|\leqslant T} \frac{F(\frac{1}{2}+u-v+i\ensuremath{\gamma}
,\chi)x^{-v+i\ensuremath{\gamma}}}{\frac{1}{2}-v+i\ensuremath{\gamma}} -A(\chi) \frac{F(\frac{1}{2}+u-v,\chi)x^{-v}}{1-2v} \\ &\qquad\qquad + O(x^{-\frac{3\ensuremath{\varepsilon}}{2}} \log^5 x). \end{align*}
\item For $u\leqslant 2\ensuremath{\varepsilon}$, $v\leqslant 2\ensuremath{\varepsilon}$, $u\ne v$ and $T=T(x)$, \begin{multline*}
x^{-1/2} G(x,u,v;\chi) = \sum_{|\ensuremath{\gamma}| \leqslant T}
\frac{F(\frac{1}{2}+u-v+i\ensuremath{\gamma}
,\chi)x^{-v+i\ensuremath{\gamma}}}{\frac{1}{2}-v+i\ensuremath{\gamma}} +
\frac{F(\frac{1}{2}-u+v+i\ensuremath{\gamma}
,\chi)x^{-u+i\ensuremath{\gamma}}}{\frac{1}{2}-u+i\ensuremath{\gamma}} \\ \qquad -
A(\chi) \( \frac{F(\frac{1}{2}+u-v,\chi)x^{-v}}{1-2v}+
\frac{F(\frac{1}{2}-u+v,\chi)x^{-u}}{1-2u} \)
+ O (x^{-3\ensuremath{\varepsilon}} \log^5 x). \end{multline*} \end{enumerate} \end{lem}
\begin{proof} Assume $u\geqslant \ensuremath{\varepsilon}$ and $v\geqslant \ensuremath{\varepsilon}$. Start with the approximation of $G(x,u,v;\chi)$ given by Lemma \ref{Perron}, then deform the segment of integration to the contour consisting of three straight segments connecting $c-iT$, $b-iT$, $b+iT$ and $c+iT$, where $b=\frac12 - \frac{\ensuremath{\varepsilon}}{2}$
and $T=T(x)$. The rectangle formed by the new and old contours does not contain any poles of $F(s+u,\chi)F(s+v,\chi)s^{-1}$. On the three new segments, by Lemmas \ref{FsHs}, \ref{NTchi} and \ref{zeta}, we have $|F(s+u,\chi)F(s+v,\chi)| \ll \log^4 T$. Hence the integral of $F(s+u,\chi)F(s+v,\chi) x^s s^{-1}$ over the three segments is $$
\ll (\log^4 x) \Bigl( \int_b^c \frac{x^\sigma}{|\sigma+iT|}\, d\sigma
+ \int_{-T}^T \frac{x^b}{|b+it|}\, dt \Bigr) \ll x^b \log^5 x. $$ This proves (1).
We now consider the case $v \leqslant \ensuremath{\varepsilon}$ and $u \geqslant 2\ensuremath{\varepsilon}$. We set $b = \frac{1}{2}-\frac{3\ensuremath{\varepsilon}}{2}$ and deform the contour of integration as in the previous case. Since $u+b \geqslant \frac12 + \frac{\ensuremath{\varepsilon}}{2}$ and $v+b \leqslant \frac12 - \frac{\ensuremath{\varepsilon}}{2}$, we have by Lemma \ref{zeta} that
$|F(s+u,\chi)F(s+v,\chi)| \ll \log^4 T \ll \log^4 x$ on all three new segments. As in the proof of (1), the integral over the new contour is $\ll x^b \log^5 x$. We pick up residue terms from poles of $F(s+v,\chi)$ inside the rectangle coming from the nontrivial zeros of $L(s,\chi)$, plus a pole at $s=\frac12-v$ from the $\frac{\zeta'}{\zeta}(2s+2v)$ term if $\chi^2=\chi_0$. The sum of the residues is $$
\sum_{|\ensuremath{\gamma}|\leqslant T} \frac{F(\frac{1}{2}+u-v+i\ensuremath{\gamma}
,\chi)x^{\frac{1}{2}-v+i\ensuremath{\gamma}}}{\frac{1}{2}-v+i\ensuremath{\gamma}} -A(\chi) \frac{F(\frac{1}{2}+u-v,\chi)x^{\frac{1}{2}-v}}{1-2v}, $$ and (2) follows.
Finally, consider the case $0\leqslant u,v \leqslant 2\ensuremath{\varepsilon}$. Let $b = \frac{1}{2} - 3\ensuremath{\varepsilon}$ and deform the contour as in the previous cases. As before, the integral over the new contour is $O(x^b \log^5 x)$. This time, we pick up residues from poles of both $F(s+u,\chi)$ and $F(s+v,\chi)$. The sum of the residues is \begin{align*}
\sum_{|\ensuremath{\gamma}|\leqslant T} \Bigl( \frac{F(\frac{1}{2}+u-v+i\ensuremath{\gamma},\chi) x^{\frac{1}{2}-v+i\ensuremath{\gamma}}}{\frac{1}{2}-v+i\ensuremath{\gamma}}+ \frac{F(\frac{1}{2}-u+v+i\ensuremath{\gamma}
,\chi)x^{\frac{1}{2}-u+i\ensuremath{\gamma}}}{\frac{1}{2}-u+i\ensuremath{\gamma}} \Bigr) \\ - A(\chi) \( \frac{F(\frac{1}{2}+u-v,\chi)x^{\frac{1}{2}-v}}{1-2v}+ \frac{F(\frac{1}{2}-u+v,\chi)x^{\frac{1}{2}-u}}{1-2u} \), \end{align*} and (3) follows. \end{proof}
\begin{proof}[Proof of Lemma \ref{analytic}] Begin with $$
\int_0^\infty \int_0^\infty G(x,u,v;\chi)\, du\, dv = I_1 + I_2 +
2I_3 + I_4, $$ where $I_1$ is the integral over $\max(u,v)\geqslant \log x$, $I_2$ is the integral over $2\ensuremath{\varepsilon} \leqslant \max(u,v) \leqslant \log x$ and $\min(u,v)\geqslant \ensuremath{\varepsilon}$, $I_3$ is the integral over $0\leqslant v\leqslant \ensuremath{\varepsilon}$, $2\ensuremath{\varepsilon} \leqslant u \leqslant \log x$, and $I_4$ is the integral over $0\leqslant u,v \leqslant 2\ensuremath{\varepsilon}$. For $\max(u,v)\geqslant \log x$, $$
|G(x,u,v;\chi)| \leqslant \Bigl(\sum_{p\leqslant x} \frac{\log p}{p^u}\Bigr) \Bigl(\sum_{p\leqslant x} \frac{\log p}{p^v}\Bigr) \ll \frac{x}{2^{\max(u,v)}}, $$ whence $I_1 \ll x^{1-\log 2}$. By Lemma \ref{Gbounds} (1), $I_2 \ll x^{1/2-\ensuremath{\varepsilon}/2} \log^7 x$.
By Lemma \ref{Gbounds} (2), \begin{equation}\label{I3} \begin{split} I_3 &= x^{1/2} \int_0^\ensuremath{\varepsilon} \int_{2\ensuremath{\varepsilon}}^{\log x}
\sum_{|\ensuremath{\gamma}|\leqslant T} \frac{F(\frac{1}{2}+u-v+i\ensuremath{\gamma}
,\chi)x^{-v+i\ensuremath{\gamma}}}{\frac{1}{2}-v+i\ensuremath{\gamma}} -A(\chi) \frac{F(\frac{1}{2}+u-v,\chi)x^{-v}}{1-2v}\, du\, dv \\ &\qquad\qquad + O(x^{1/2-\frac{3\ensuremath{\varepsilon}}{2}} \log^6 x). \end{split} \end{equation} By Lemmas \ref{NTchi} and \ref{zeta}, \begin{equation}\label{I31} \int_{0}^{\ensuremath{\varepsilon}} \int_{2\ensuremath{\varepsilon}}^{\log x} \frac{F(\frac{1}{2}+u-v
,\chi)x^{-v}}{1-2v} du\ dv \ll \int_{0}^{\ensuremath{\varepsilon}} x^{-v}\, dv \ll \frac{1}{\log x}. \end{equation} Let $$ \Sigma_1(x) = (\log x) \int_0^\ensuremath{\varepsilon} \int_{2\ensuremath{\varepsilon}}^{\log x}
\sum_{0<|\ensuremath{\gamma}|<T} \frac{F(\frac{1}{2}+u-v+i\ensuremath{\gamma}
,\chi)x^{-v+i\ensuremath{\gamma}}}{\frac{1}{2}-v+i\ensuremath{\gamma}}\, du\, dv. $$ Since $\frac{1}{2}+u-v \geqslant \frac{1}{2}+\ensuremath{\varepsilon}$ for $0 \leqslant v \leqslant \ensuremath{\varepsilon}$ and $2 \ensuremath{\varepsilon} \leqslant u \leqslant \log x$, by Lemmas \ref{FsHs}, \ref{NTchi}, and \ref{zeta}, $$ F(\frac{1}{2}+u-v+i\ensuremath{\gamma},\chi) = -\frac{L'}{L}(\frac{1}{2}+u-v+i\ensuremath{\gamma},
\chi) + O(1) \ll \log(|\ensuremath{\gamma}|+3). $$ We also have $F(1/2+u-v+i\ensuremath{\gamma},\chi) \ll 2^{-u}$ for $u\geqslant 2$. Thus, for positive integers $n$, \begin{align*}
\int_{2^n}^{2^{n+1}} |\Sigma_1 (e^y)|^2 dy &\ll 2^{2n}
\sum_{|\ensuremath{\gamma}_1|,|\ensuremath{\gamma}_2|\leqslant T} \frac{\log (|\ensuremath{\gamma}_1|+3)
\log(|\ensuremath{\gamma}_2|+3)}{|\ensuremath{\gamma}_1 \ensuremath{\gamma}_2|} \\ & \qquad\qquad \times \int_{0}^{\ensuremath{\varepsilon}}
\int_{0}^{\ensuremath{\varepsilon}} \Bigg| \int_{2^n}^{2^{n+1}}
e^{y(-v_1+i\ensuremath{\gamma}_1-v_2-i\ensuremath{\gamma}_2)} dy \Bigg| dv_1 dv_2. \end{align*}
The summands with $|\ensuremath{\gamma}_1-\ensuremath{\gamma}_2| < 1$ contribute, by Lemma \ref{NTchi}, \begin{align*}
&\ll 2^{2n} \sum_{\substack{|\ensuremath{\gamma}_1|, |\ensuremath{\gamma}_2| \leqslant T
\\ |\ensuremath{\gamma}_1-\ensuremath{\gamma}_2| < 1}} \frac{\log (|\ensuremath{\gamma}_1|+3) \log
(|\ensuremath{\gamma}_2|+3)}{|\ensuremath{\gamma}_1||\ensuremath{\gamma}_2|} \int_{2^n}^{2^{n+1}}
\Big(\int_{0}^{\ensuremath{\varepsilon}} e^{-vy} dv\Big)^2 dy \\
&\ll 2^n \sum_{|\ensuremath{\gamma}|\leqslant T } \frac{{\log}^3
(|\ensuremath{\gamma}|+3)}{|\ensuremath{\gamma}|^2} \ll 2^n. \end{align*}
The summands with $|\ensuremath{\gamma}_1-\ensuremath{\gamma}_2| \geqslant 1$ contribute, by Lemma \ref{sumg1g2}, \[
\ll \sum_{\substack{|\ensuremath{\gamma}_1|, |\ensuremath{\gamma}_2| < T \\ |\ensuremath{\gamma}_1-\ensuremath{\gamma}_2|
\geqslant 1}} \frac{2^{2n} \log (|\ensuremath{\gamma}_1|+3) \log
(|\ensuremath{\gamma}_2|+3)}{|\ensuremath{\gamma}_1||\ensuremath{\gamma}_2||\ensuremath{\gamma}_1-\ensuremath{\gamma}_2|} \Big(\int_{0}^{\ensuremath{\varepsilon}} e^{-v2^n} dv \Big)^2
\ll 1. \]
Thus, $\int_{2^n}^{2^{n+1}} |\Sigma_{1} (e^y)|^2 dy = O (2^n)$. Summing over $n\leqslant \frac{\log Y}{\log 2}+1$ yields
$\int_1^Y |\Sigma_1(e^y)|^2\, dy = O(Y)$.
Finally, using Lemma \ref{Gbounds} (3) gives \begin{equation}\label{I4} \begin{split} I_4 &= x^{1/2} \int_0^{2\ensuremath{\varepsilon}} \int_0^{2\ensuremath{\varepsilon}}
\sum_{|\ensuremath{\gamma}|\leqslant T}
\frac{F(\frac{1}{2}+u-v+i\ensuremath{\gamma}
,\chi)x^{-v+i\ensuremath{\gamma}}}{\frac{1}{2}-v+i\ensuremath{\gamma}} +
\frac{F(\frac{1}{2}-u+v+i\ensuremath{\gamma}
,\chi)x^{-u+i\ensuremath{\gamma}}}{\frac{1}{2}-u+i\ensuremath{\gamma}} \\ & \qquad -
A(\chi) \( \frac{F(\frac{1}{2}+u-v,\chi)x^{-v}}{1-2v}+
\frac{F(\frac{1}{2}-u+v,\chi)x^{-u}}{1-2u} \) \, du\, dv
+ O (x^{\frac12-3\ensuremath{\varepsilon}} \log^3 x). \end{split} \end{equation} Now assume $\chi^2=\chi_0$. We will show that \begin{equation}\label{I41} - \int_0^{2\ensuremath{\varepsilon}} \int_0^{2\ensuremath{\varepsilon}}
\frac{F(\frac{1}{2}+u-v,\chi)x^{-v}}{1-2v}+
\frac{F(\frac{1}{2}-u+v,\chi)x^{-u}}{1-2u} \, du\, dv = \frac{\log\log x + O(1)}{\log x}. \end{equation} Together with \eqref{I3}, \eqref{I31} and \eqref{I4}, this completes the proof of Lemma \ref{analytic}.
Note that $F(\frac{1}{2}+w)=-\frac{1}{2w}+O(1)$ by Lemmas \ref{FsHs} and \ref{zeta}. Replacing $x$ with $e^y$, the left side of \eqref{I41} is $$ =\frac12 \int_{0}^{2\ensuremath{\varepsilon}} \int_{0}^{2\ensuremath{\varepsilon}}
\frac{e^{-yv}}{(u-v)(1-2v)} +
\frac{e^{-yu}}{(v-u)(1-2u)}du\ dv + O\Big(\int_{0}^{2\ensuremath{\varepsilon}}
\int_{0}^{2\ensuremath{\varepsilon}} e^{-yv} du\ dv\Big). $$
The error term above is $O(1/y)$. In the main term, when $|u-v|<1/y$, the integrand is $O(y e^{-vy})$ and the corresponding part of the double integral is $O(1/y)$. When $u \geqslant v + 1/y$, the integrand is $$ \frac{e^{-vy}}{u-v} + O\pfrac{v e^{-vy}+e^{-uy}}{u-v} $$ and the corresponding part of the double integral is $$ \int_0^{2\ensuremath{\varepsilon}} e^{-vy} \log\pfrac{y}{2\ensuremath{\varepsilon}-v}\, dv + O\pfrac{1}{y} = \frac{\log y + O(1)}{y}. $$ The contribution from $u\leqslant v-1/y$ is, by symmetry, also $\frac{\log y+O(1)}{y}$. The asymptotic \eqref{I41} follows. \end{proof}
\section{Proof of Lemma \ref{doubleint}}\label{sec:doubleint}
\begin{lem}\label{Qint}
Uniformly for $y\geqslant 1$, $0 < |\xi| \leqslant 1$, $|w| \geqslant \frac{1}{2}$
and $a\geqslant 0$ we have $$
\Biggl| \int_0^{2 \ensuremath{\varepsilon}} \int_0^{2 \ensuremath{\varepsilon}} \frac{v^a e^{-vy}}{(u-v+i\xi)(w-v)}
du\ dv \Biggr| \ll \frac{(4\ensuremath{\varepsilon})^{a}\log \min(2y, \frac{2}{|\xi|})}{y|w|}. $$ \end{lem}
\begin{proof} Let $I$ denote the double integral in the Lemma. If $|\xi| \geqslant \frac{1}{y}$, then \begin{align*}
I&\ll \frac{1}{|w|} \int_0^{2 \ensuremath{\varepsilon}} v^a e^{-vy} \int_0^{2\ensuremath{\varepsilon}}
\min\( \frac{1}{|u-v|}, \frac{1}{|\xi|} \)\, du\, dv \\
&\ll \frac{(2\ensuremath{\varepsilon})^a}{|w|} \(1 + \log \frac{2}{|\xi|}\)
\int_0^{2 \ensuremath{\varepsilon}} e^{-vy}\, dv
\ll \frac{(2\ensuremath{\varepsilon})^a \log({\frac{2}{|\xi|}})}{y|w|}. \end{align*}
If $|\xi| < \frac{1}{y}$, let $I=I_1+I_2+I_3$, where $I_1$ is the part of
$I$ coming from $|u-v| \leqslant |\xi|$, $I_2$ is the part of $I$ coming from
$|\xi| < |u-v| \leqslant \frac{1}{y}$, and $I_3$ is the part of $I$ coming
from $|u-v|>\frac{1}{y}$. We have $$
I_1 \ll \frac{1}{|w\xi|} \; \;\; \iint
\limits_{\substack{0\leqslant u,v\leqslant 2\ensuremath{\varepsilon} \\ |u-v|\leqslant |\xi|}}
v^a e^{-vy} du\ dv \ll \frac{(2\ensuremath{\varepsilon})^a}{y|w|} $$ and $$
I_3 \ll \frac{(2\ensuremath{\varepsilon})^a}{|w|}\;\; \iint
\limits_{\substack{0\leqslant u,v \leqslant 2\ensuremath{\varepsilon} \\ |u-v|\geqslant \frac{1}{y}}}
\frac{e^{-vy}}{|u-v|} du\ dv
\ll \frac{(2\ensuremath{\varepsilon})^a}{|w|} \int_0^{2 \ensuremath{\varepsilon}} e^{-vy}(\log y + 1) dv
\ll \frac{(2\ensuremath{\varepsilon})^a \log(2y)}{y|w|}. $$ By symmetry, $$
I_2 = \frac12 \iint\limits_{|\xi| < |u-v| \leqslant 1/y} \frac{v^a
e^{-vy}}{(u-v+i\xi)(w-v)} + \frac{u^a e^{-uy}}{(v-u+i\xi)(w-u)}\,
du\, dv. $$
Since $|u^a-v^a|\leqslant a|u-v| (2\ensuremath{\varepsilon})^{a-1}$,
e^{-vy} (u^a-v^a) e^{(v-u)y} \\&\ll e^{-vy} y |u-v| (4\ensuremath{\varepsilon})^a. \end{split}\end{equation} We deduce that \begin{align*}
I_2&= \!\!\!\iint\limits_{\substack{0\leqslant u,v\leqslant 2\ensuremath{\varepsilon} \\ |\xi| < |u-v| \leqslant 1/y}}
\!\! \frac{(w-u)(u-v)(u^ae^{-uy}-v^ae^{-vy})+u^ae^{-uy}(u-v)^2+O(|\xi w| (2\ensuremath{\varepsilon})^a
e^{-vy})}{2(u-v+i\xi)(v-u+i\xi)(w-u)(w-v)} du\ dv \\
&\ll \frac{(4\ensuremath{\varepsilon})^a}{|w|}\;\;\;
\iint\limits_{\substack{0\leqslant u,v\leqslant 2\ensuremath{\varepsilon} \\ |\xi|<|u-v| \leqslant 1/y}}
y e^{-vy} + \frac{|\xi| e^{-vy}}{|u-v|^2} \, du\, dv
\ll \frac{(4\ensuremath{\varepsilon})^a}{y|w|}. \end{align*} \end{proof}
\begin{proof}[Proof of Lemma \ref{doubleint}] Let $y=\log x$. We first note by Lemmas \ref{FsHs} and \ref{NTchi}, $$ F(\frac{1}{2}+u-v +i \ensuremath{\gamma},\chi) = \frac{m(\ensuremath{\gamma})}{u-v}+R(\ensuremath{\gamma},u-v)+R'(\ensuremath{\gamma},u-v), $$ where $$
R(\ensuremath{\gamma},w) = \sum_{0< |\ensuremath{\gamma}'-\ensuremath{\gamma}| \leqslant 1} \frac{1}{w+i(\ensuremath{\gamma} - \ensuremath{\gamma}')}, \qquad
R'(\ensuremath{\gamma},u-v) = O(\log (|\ensuremath{\gamma}|+3)). $$ Then, the double integral in Lemma \ref{doubleint} is $$
=\sum_{i=1}^4 \Sigma_{2,i}(y)+2 \sum_{\substack{|\ensuremath{\gamma}| \leqslant T \\ \ensuremath{\gamma} \text{ distinct}}} m^2(\ensuremath{\gamma}) e^{iy\ensuremath{\gamma}} (\frac{1}{2}+i\ensuremath{\gamma}) \int_0^{2 \ensuremath{\varepsilon}-2^{-n}} \frac{ e^{-yv}}{\frac{1}{2}-v+i \ensuremath{\gamma}} \int_{v+2^{-n}}^{2 \ensuremath{\varepsilon}} \frac{du}{(u-v)(\frac{1}{2}-u+i \ensuremath{\gamma})} dv, $$ where \begin{align*}
\Sigma_{2,1}(y) &= 2 \int_0^{2 \ensuremath{\varepsilon}} \int_0^{2 \ensuremath{\varepsilon}} \sum_{|\ensuremath{\gamma}| \leqslant T} \frac{ R(\ensuremath{\gamma},u-v) e^{y(-v+i \ensuremath{\gamma})}}{\frac{1}{2}-v+i \ensuremath{\gamma}} du\ dv, \\ \Sigma_{2,2}(y) &= 2 \int_0^{2 \ensuremath{\varepsilon}} \int_0^{2 \ensuremath{\varepsilon}} \frac{ R'(\ensuremath{\gamma},u-v) e^{y(-v+i \ensuremath{\gamma})}}{\frac{1}{2}-v+i \ensuremath{\gamma}}\, du\, dv, \\
\Sigma_{2,3}(y)& = \sum_{\substack{|\ensuremath{\gamma}| \leqslant T \\ \ensuremath{\gamma} \text{ distinct}}} m^2(\ensuremath{\gamma}) e^{iy \ensuremath{\gamma}} (\frac{1}{2} + i\ensuremath{\gamma})
\iint \limits_{\substack{0 \leqslant u,v \leqslant 2 \ensuremath{\varepsilon} \\ |u-v| \leqslant 2^{-n}}} \frac{ e^{-yv}-e^{-uy}}{(u-v)(\frac{1}{2}-v+i \ensuremath{\gamma})(\frac{1}{2}-u+i \ensuremath{\gamma})} dv\ du, \\
\Sigma_{2,4}(y) &= 2 \sum_{\substack{|\ensuremath{\gamma}| \leqslant T \\ \ensuremath{\gamma} \text{ distinct}}} m^2(\ensuremath{\gamma}) e^{iy \ensuremath{\gamma}} (\frac{1}{2} + i\ensuremath{\gamma})
\int_{2^{-n}}^{2 \ensuremath{\varepsilon}} \int_{0}^{v-2^{-n}} \frac{ e^{-yv}}{(u-v)(\frac{1}{2}-v+i \ensuremath{\gamma})(\frac{1}{2}-u+i \ensuremath{\gamma})} du\ dv. \end{align*} We show that $\sum_{j=1}^4 \Sigma_{2,j}(y)$ is small in mean square. Note that for $2^n < y \leqslant 2^{n+1}$, $T=T(e^y)$ is constant. Also, by Lemma \ref{NTchi}, we have \begin{equation}\label{mgam}
m(\ensuremath{\gamma}) \ll \log (|\ensuremath{\gamma}|+3). \end{equation}
First, by Lemmas \ref{NTchi} and \ref{sumg1g2}, \begin{equation}\label{Sigma22} \begin{split}
\int_{2^n}^{2^{n+1}} \!\! &|\Sigma_{2,2}(y)|^2\, dy = 4
\iiiint\limits_{[0,2\ensuremath{\varepsilon}]^4} \!\!\! \sum_{\substack{|\ensuremath{\gamma}_1|\leqslant T \\ |\ensuremath{\gamma}_2| \leqslant T}} \frac{R'(\ensuremath{\gamma}_1,u_1-v_1)\overline{R'(\ensuremath{\gamma}_2,u_2-v_2)}} {(\frac12-v_1+i\ensuremath{\gamma}_1)(\frac12-v_2-i\ensuremath{\gamma}_2)} \\ &\qquad\qquad \times \int_{2^n}^{2^{n+1}} \!\!\!\! e^{y(-v_1-v_2+i\ensuremath{\gamma}_1-i\ensuremath{\gamma}_2)}\, dy \, du_j dv_j \\
&\ll \sum_{|\ensuremath{\gamma}_1-\ensuremath{\gamma}_2|>1} \frac{\log (|\ensuremath{\gamma}_1|+3) \log (|\ensuremath{\gamma}_2|+3)}
{|\ensuremath{\gamma}_1\ensuremath{\gamma}_2|\cdot|\ensuremath{\gamma}_1-\ensuremath{\gamma}_2|} \iiiint\limits_{[0,2\ensuremath{\varepsilon}]^4}
e^{-2^n(v_1+v_2)}\, du_j dv_j \\
&\qquad + \sum_{|\ensuremath{\gamma}_1-\ensuremath{\gamma}_2|\leqslant 1} \frac{\log (|\ensuremath{\gamma}_1|+3) \log (|\ensuremath{\gamma}_2|+3)}
{|\ensuremath{\gamma}_1\ensuremath{\gamma}_2|} \int_{2^n}^{2^{n+1}}\iiiint\limits_{[0,2\ensuremath{\varepsilon}]^4} e^{-y(v_1+v_2)}\, du_j dv_j dy \\ &\ll 2^{-n}. \end{split} \end{equation}
For the remaining sums, for brevity we define $$ \rho_1 = \frac12 + i\ensuremath{\gamma}_1, \qquad \rho_2 = \frac12 - i\ensuremath{\gamma}_2. $$ Next, \begin{multline*}
\int_{2^n}^{2^{n+1}} |\Sigma_{2,3}(y)|^2 dy = \int_{2^{n}}^{2^{n+1}}
\sum_{|\ensuremath{\gamma}_1|, |\ensuremath{\gamma}_2| \leqslant T} m(\ensuremath{\gamma}_1) m(\ensuremath{\gamma}_2) e^{i y (\ensuremath{\gamma}_1 - \ensuremath{\gamma}_2)} \rho_1 \rho_2 \\ \times
\iiiint \limits_{\substack{[0,2\ensuremath{\varepsilon}]^4 \\ |u_j-v_j| \leqslant 2^{-n}}}\!\!\!
\frac{(e^{-v_1y}-e^{-u_1y})(e^{-v_2 y}-e^{-u_2 y})} {\prod_{j=1}^2 (u_j-v_j)(\rho_j-v_j)(\rho_j-u_j)}
du_j dv_j\, dy. \end{multline*} By \eqref{euy}, the integrand in the quadruple integral is
$\ll y^2 e^{-u_1y-u_2 y}|\rho_1 \rho_2|^{-2}.$
By Lemma \ref{NTchi}, for a given $\ensuremath{\gamma}_1$, there are $\ll \log (|\ensuremath{\gamma}_1|+3)$ zeros $\ensuremath{\gamma}_2$ with $|\ensuremath{\gamma}_1-\ensuremath{\gamma}_2| < 1$. Hence, the contribution from terms with $|\ensuremath{\gamma}_1-\ensuremath{\gamma}_2| < 1$ is $$
\ll 2^{-n} \sum_{|\ensuremath{\gamma}_1-\ensuremath{\gamma}_2| < 1} \frac{m(\ensuremath{\gamma}_1)m(\ensuremath{\gamma}_2)}{|\rho_1 \rho_2|}
\ll 2^{-n} \sum_{\ensuremath{\gamma}_1} \frac{\log^3 (|\ensuremath{\gamma}_1|+3)}{|\ensuremath{\gamma}_1|^2} \ll 2^{-n}. $$ Using integration by parts, we have $$ \int_{2^n}^{2^{n+1}} e^{iy(\ensuremath{\gamma}_1-\ensuremath{\gamma}_2)} (e^{-v_1y}-e^{-u_1y})(e^{-v_2y}-e^{-u_2y})\, dy
\ll \frac{2^{3n} |u_1-v_1|\, |u_2-v_2| e^{-2^n(u_1+u_2)}}{|\ensuremath{\gamma}_1-\ensuremath{\gamma}_2|} $$
uniformly in $u_1,v_1,u_2,v_2$. Thus, by \eqref{mgam} and Lemma \ref{sumg1g2}, the contribution from terms with $|\ensuremath{\gamma}_1-\ensuremath{\gamma}_2| \geqslant 1$ is $$
\ll 2^{-n} \sum_{|\ensuremath{\gamma}_1-\ensuremath{\gamma}_2| \geqslant 1} \frac{m(\ensuremath{\gamma}_1)m(\ensuremath{\gamma}_2)}
{|\rho_1 \rho_2| \cdot |\ensuremath{\gamma}_1-\ensuremath{\gamma}_2|} \ll 2^{-n}. $$ Combining these estimates, we have \begin{equation} \label{Sigma23}
\int_{2^n}^{2^{n+1}} |\Sigma_{2,3}(y)|^2 dy \ll 2^{-n}. \end{equation}
In the same manner, we have $$
\int_{2^n}^{2^{n+1}} \!\!\! |\Sigma_{2,4}(y)|^2 dy\ = \!\!
\sum_{\substack{|\ensuremath{\gamma}_1|\leqslant T \\ |\ensuremath{\gamma}_2| \leqslant T}} \!\! m(\ensuremath{\gamma}_1)m(\ensuremath{\gamma}_2) \rho_1 \rho_2 \int_{2^{n}}^{2^{n+1}} \!\!\!\! \iiiint\limits_{\substack{[0,2\ensuremath{\varepsilon}]^4 \\ u_j \leqslant v_j - 2^{-n}}} \frac{ e^{y(-v_1-v_2+i(\ensuremath{\gamma}_1-\ensuremath{\gamma}_2))}du_j dv_j}{\prod_{j=1}^2 (u_j-v_j)(\rho_j-v_j)(\rho_j-u_j)} dy. $$ The contribution to the right side from terms with
$|\ensuremath{\gamma}_1-\ensuremath{\gamma}_2|<1$ is \begin{align*}
&\ll \sum_{|\ensuremath{\gamma}_1-\ensuremath{\gamma}_2| < 1}
\frac{m(\ensuremath{\gamma}_1)m(\ensuremath{\gamma}_2)}{|\ensuremath{\gamma}_1\ensuremath{\gamma}_2|} \int_{2^{n}}^{2^{n+1}} \Biggl( \int_{2^{-n}}^{2 \ensuremath{\varepsilon}} \int_{0}^{v-2^{-n}} \frac{e^{-yv}}{(v-u)}\, du\, dv \Biggr)^2 dy \\
&\ll \sum_{\ensuremath{\gamma}_1} \frac{\log^3 (|\ensuremath{\gamma}_1|+3)} {|\ensuremath{\gamma}_1|^2} \int_{2^{n}}^{2^{n+1}} \Biggl( \int_{1/y}^{\infty} e^{-yv} \log(y v)\, dv \Biggr)^2 dy \ll 2^{-n}. \end{align*}
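For completeness, the two elementary evaluations used in the last two displays are
$$ \int_{0}^{v-2^{-n}}\frac{du}{v-u}=\log(2^n v)\leqslant \log(yv) \qquad (2^n\leqslant y,\ v\geqslant 2^{-n}\geqslant 1/y), $$
and, after the substitution $t=yv$,
$$ \int_{1/y}^{\infty} e^{-yv}\log(yv)\, dv=\frac{1}{y}\int_{1}^{\infty}e^{-t}\log t\, dt\ll \frac{1}{y}, $$
so that the squared inner integral is $\ll y^{-2}$ and its integral over $2^n\leqslant y\leqslant 2^{n+1}$ is $\ll 2^{-n}$.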
The terms with $|\ensuremath{\gamma}_1-\ensuremath{\gamma}_2|>1$ contribute \begin{align*}
&\ll \sum_{\substack{|\ensuremath{\gamma}_1|, |\ensuremath{\gamma}_2| < T \\ |\ensuremath{\gamma}_1-\ensuremath{\gamma}_2| > 1}} \frac{m(\ensuremath{\gamma}_1)m(\ensuremath{\gamma}_2)}
{|\ensuremath{\gamma}_1\ensuremath{\gamma}_2| \cdot |\ensuremath{\gamma}_1-\ensuremath{\gamma}_2|} \Biggl( \int_{2^{-n}}^{2 \ensuremath{\varepsilon}} \int_{0}^{v-2^{-n}} \frac{ e^{-2^nv}}{v-u}\, du\ dv \Biggr)^2 \\
&\ll \sum_{|\ensuremath{\gamma}_1-\ensuremath{\gamma}_2| > 1} \frac{\log (|\ensuremath{\gamma}_1|+3) \log (|\ensuremath{\gamma}_2|+3)}
{|\ensuremath{\gamma}_1\ensuremath{\gamma}_2| \cdot |\ensuremath{\gamma}_1-\ensuremath{\gamma}_2| } \pfrac{1}{2^n}^2 \ll \frac{1}{2^{2n}}. \end{align*} Therefore, \begin{equation} \label{Sigma24}
\int_{2^n}^{2^{n+1}} |\Sigma_{2,4}(y)|^2 dy \ll 2^{-n}. \end{equation}
Estimating an average of $\Sigma_{2,1}(y)$ is more complicated, since
$R(\ensuremath{\gamma},w)$ could be very large if $|w|$ is small and there is another
$\ensuremath{\gamma}'$ very close to $\ensuremath{\gamma}$. We get around the problem by noticing that $R(\ensuremath{\gamma},w)+R(\ensuremath{\gamma},-w)$ is always small. We first have, by \eqref{euy} and Lemma \ref{NTchi}, \begin{multline}\label{Sig211}
\int_{2^n}^{2^{n+1}} |\Sigma_{2,1}(y)|^2 dy \ll
\sum_{\ensuremath{\gamma}_1,\ensuremath{\gamma}_2} \log^2(|\ensuremath{\gamma}_1|+3) \log^2(|\ensuremath{\gamma}_2|+3)
\max_{\substack{0<|\ensuremath{\gamma}_1-\ensuremath{\gamma}_1'|\leqslant 1 \\
0<|\ensuremath{\gamma}_2-\ensuremath{\gamma}_2'|\leqslant 1}} \int_{2^n}^{2^{n+1}} e^{iy(\ensuremath{\gamma}_1-\ensuremath{\gamma}_2)} \\ \times
\iiiint\limits_{[0,2\ensuremath{\varepsilon}]^4} \frac{ e^{y(-v_1-v_2)}} {(u_1-v_1+i\xi_1)(\rho_1-v_1)(u_2-v_2+i\xi_2)(\rho_2-v_2)} du_j dv_j\, dy, \end{multline} where $\xi_1=\ensuremath{\gamma}_1-\ensuremath{\gamma}_1'$ and $\xi_2=-(\ensuremath{\gamma}_2-\ensuremath{\gamma}_2')$. Let $$
M(\ensuremath{\gamma}) = \max_{\substack{|\ensuremath{\gamma}-\ensuremath{\gamma}_1|\leqslant 1 \\ 0 < |\ensuremath{\gamma}_1-\ensuremath{\gamma}_1'| < 1}} \frac{2}{|\ensuremath{\gamma}_1-\ensuremath{\gamma}_1'|}. $$ By Lemmas \ref{zeta} and \ref{Qint},
the terms with $|\ensuremath{\gamma}_1-\ensuremath{\gamma}_2|<1$ contribute \begin{align*}
&\ll \sum_{|\ensuremath{\gamma}_1-\ensuremath{\gamma}_2| < 1} \frac{
\log^2(|\ensuremath{\gamma}_1|+3)\log^2(|\ensuremath{\gamma}_2|+3)}
{|\ensuremath{\gamma}_1\ensuremath{\gamma}_2|} \int_{2^n}^{2^{n+1}} \frac{1}{y^2}
\prod_{j=1}^2 \log
\(\min\(2y,\frac{2}{|\ensuremath{\gamma}_j - \ensuremath{\gamma}_j'|}\)\) \, dy \\
&\ll \frac{1}{2^{n}} \sum_{\ensuremath{\gamma}_1} \frac{\log^5 (|\ensuremath{\gamma}_1|+3)}{|\ensuremath{\gamma}_1|^2} \log^2 \( \min(2^{n+2},M(\ensuremath{\gamma}_1)) \) = o\pfrac{n^2}{2^n} \qquad (n\to\infty). \end{align*}
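We spell out the final step, a routine dominated convergence argument: for each $\ensuremath{\gamma}_1$ occurring here, $M(\ensuremath{\gamma}_1)<\infty$ (only finitely many zeros lie in a bounded window), so $n^{-2}\log^2\min(2^{n+2},M(\ensuremath{\gamma}_1))\to 0$ as $n\to\infty$, while this quantity is bounded uniformly in $n$ and $\ensuremath{\gamma}_1$. Hence
$$ \frac{1}{n^2}\sum_{\ensuremath{\gamma}_1}\frac{\log^5(|\ensuremath{\gamma}_1|+3)}{|\ensuremath{\gamma}_1|^2}\, \log^2\(\min(2^{n+2},M(\ensuremath{\gamma}_1))\)\longrightarrow 0 \qquad (n\to\infty), $$
which gives the stated $o(n^2/2^n)$.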
Now suppose $|\ensuremath{\gamma}_1-\ensuremath{\gamma}_2|>1$. With $\ensuremath{\gamma}_1,\ensuremath{\gamma}_2,\ensuremath{\gamma}_1',\ensuremath{\gamma}_2'$ all fixed, let $\Delta=\ensuremath{\gamma}_1-\ensuremath{\gamma}_2$. Fixing $u_1,v_1,u_2,v_2$, we integrate over $y$ first. The quintuple integral in \eqref{Sig211} is $J(2^{n+1})-J(2^n)$, where $$ J(y)=e^{iy\Delta} \iiiint\limits_{[0,2\ensuremath{\varepsilon}]^4} \frac{e^{-y(v_1+v_2)}} {(i\Delta-v_1-v_2)\prod_{j=1}^2 (u_j-v_j+i\xi_j)(\rho_j-v_j)} \, du_j dv_j. $$ Using $$ \frac{1}{i\Delta-v_1-v_2}=\frac{1}{i\Delta}\sum_{k=0}^\infty \pfrac{v_1+v_2}{i\Delta}^k=\sum_{a,b\geqslant 0} \binom{a+b}{a} \frac{v_1^a v_2^b} {(i\Delta)^{a+b+1}}, $$ together with Lemma \ref{Qint}, yields $$
|J(y)| \ll \frac{\log^2 y}{|\rho_1 \rho_2 \Delta| y^2} \sum_{a,b\geqslant 0}
\binom{a+b}{a} \pfrac{4\ensuremath{\varepsilon}}{|\Delta|}^{a+b} \ll
\frac{\log^2 y}{|\rho_1 \rho_2 \Delta| y^2}. $$ Therefore, by Lemma \ref{sumg1g2}, $$
\sum_{\ensuremath{\gamma}_1,\ensuremath{\gamma}_2} \log^2(|\ensuremath{\gamma}_1|+3) \log^2(|\ensuremath{\gamma}_2|+3)
\max_{\substack{0<|\ensuremath{\gamma}_1-\ensuremath{\gamma}_1'|\leqslant 1 \\
0<|\ensuremath{\gamma}_2-\ensuremath{\gamma}_2'|\leqslant 1}} |J(2^{n+1})-J(2^n)| \ll \frac{n^2}{2^{2n}}, $$ and hence \begin{equation}\label{Sigma21}
\int_{2^n}^{2^{n+1}} \left| \Sigma_{2,1}(y) \right|^2 dy = o(n^2 2^{-n}). \end{equation} Define $$ \Sigma_2(x;\chi) = (\log x) \sum_{j=1}^4 \Sigma_{2,j}(\log x). $$ By \eqref{Sigma22}, \eqref{Sigma23}, \eqref{Sigma24} and \eqref{Sigma21}, \[
\int_2^Y |\Sigma_2(e^y;\chi)|^2\, dy \leqslant 16 \sum_{j=1}^4 \sum_{n\leqslant \frac{\log Y}{\log 2} + 1} 2^{2n}
\int_{2^n}^{2^{n+1}} |\Sigma_{2,j}(y)|^2\, dy = o (Y \log^2 Y) \qquad (Y\to\infty). \] This completes the proof of Theorem \ref{doubleint}. \end{proof}
\section{Proof of Lemma \ref{TT0}}\label{sec:TT0}
Put $y=\log x$. For any $\ensuremath{\gamma}$ we have \begin{align*} \int_0^{2 \ensuremath{\varepsilon}-2^{-n}} &\frac{ e^{-yv}}{\frac{1}{2}-v+i \ensuremath{\gamma}} \int_{v+2^{-n}}^{2 \ensuremath{\varepsilon}} \frac{du}{(u-v)(\frac{1}{2}-u+i \ensuremath{\gamma})} dv \\ &= \int_0^{2 \ensuremath{\varepsilon}-2^{-n}} e^{-yv} \Big(\frac{1}{\frac{1}{2}+i\ensuremath{\gamma}} +
O(\frac{v}{\frac{1}{4}+{\ensuremath{\gamma}}^2}) \Big) \int_{v+2^{-n}}^{2 \ensuremath{\varepsilon}} \Big(\frac{1}{\frac{1}{2}+i\ensuremath{\gamma}} + O(\frac{u}{\frac{1}{4}+{\ensuremath{\gamma}}^2}) \Big) \frac{du}{u-v} dv \\ &= \frac{M+E}{(1/2+i\ensuremath{\gamma})^2}, \end{align*} where $$ M = \int_0^{2 \ensuremath{\varepsilon}-2^{-n}} e^{-yv} \( \log(2\ensuremath{\varepsilon}-v)+\log 2^n \)\, dv = \frac{\log y + O(1)}{y} $$ and \begin{align*} E &\ll \int_0^{2\ensuremath{\varepsilon}-2^{-n}} e^{-yv} \int_{v+2^{-n}}^{2\ensuremath{\varepsilon}} \frac{u}{u-v}\, du\, dv \\ &\ll \int_0^{2\ensuremath{\varepsilon}-2^{-n}} e^{-yv} \( 1 + v\log 2^n + v\log (2\ensuremath{\varepsilon}-v) \) \, dv \ll \frac{1}{y}. \end{align*}
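To justify the evaluation of $M$: with $2^n<y\leqslant 2^{n+1}$ as in the dyadic ranges used below (so that $\log 2^n=\log y+O(1)$) and assuming $2^{-n}\leqslant \ensuremath{\varepsilon}$,
$$ \int_{0}^{2\ensuremath{\varepsilon}-2^{-n}} e^{-yv}\, dv=\frac{1+O(e^{-\ensuremath{\varepsilon} y})}{y}, \qquad \int_{0}^{2\ensuremath{\varepsilon}-2^{-n}} e^{-yv}\log(2\ensuremath{\varepsilon}-v)\, dv\ll_{\ensuremath{\varepsilon}} \frac{1}{y}, $$
and the two displays together give $M=(\log y+O(1))/y$, the implied constant depending on $\ensuremath{\varepsilon}$.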
Hence, the zeros with $|\ensuremath{\gamma}| \leqslant T_0$ contribute $$
\frac{2\log\log x}{\log x} \sum_{\substack{|\ensuremath{\gamma}|\leqslant T_0 \\ \ensuremath{\gamma} \text{ distinct}}}
\frac{m^2(\ensuremath{\gamma})x^{i\ensuremath{\gamma}}}{1/2+i\ensuremath{\gamma}} + O\pfrac{\log^3 T_0}{\log x}. $$
Next, let $\Sigma_3(x;T_0)$ be the sum over zeros with $T_0 < |\ensuremath{\gamma}| \leqslant T$. We have \begin{multline}\label{lem3.7}
\int_{2^n}^{2^{n+1}} |\Sigma_3(e^y,T_0)|^2 dy \leqslant
\sum_{T_0 \leqslant |\ensuremath{\gamma}_1|, |\ensuremath{\gamma}_2| \leqslant T} 2^{2n+2} m(\ensuremath{\gamma}_1) m(\ensuremath{\gamma}_2) \(\frac{1}{2} + i\ensuremath{\gamma}_1\)
\(\frac{1}{2} - i\ensuremath{\gamma}_2\) \\ \int_{2^n}^{2^{n+1}} e^{y i (\ensuremath{\gamma}_1 - \ensuremath{\gamma}_2)} \iiiint\limits_{u_j\geqslant v_j+2^{-n}} \frac{e^{-yv_1-yv_2}}{\prod_{j=1}^2 (u_j-v_j)(\frac{1}{2}-v_j+i \ensuremath{\gamma}_j) (\frac{1}{2}-u_j+i \ensuremath{\gamma}_j)} du_j dv_j\ dy. \end{multline}
The sum over $|\ensuremath{\gamma}_1-\ensuremath{\gamma}_2|<1$ on the right side of \eqref{lem3.7} is \begin{align*}
&\ll \sum_{\substack{T_0 \leqslant |\ensuremath{\gamma}_1|, |\ensuremath{\gamma}_2| \leqslant T \\ |\ensuremath{\gamma}_1-\ensuremath{\gamma}_2| < 1}}
\frac{2^{2n}m(\ensuremath{\gamma}_1)m(\ensuremath{\gamma}_2)}{|\ensuremath{\gamma}_1||\ensuremath{\gamma}_2|} \int_{2^n}^{2^{n+1}}
\iiiint\limits_{u_j\geqslant v_j+2^{-n}}
\frac{ e^{-yv_1-yv_2}}{(u_1-v_1)(u_2-v_2)\, }du_j dv_j\, dy \\
&\ll \sum_{\substack{T_0 \leqslant |\ensuremath{\gamma}_1|, |\ensuremath{\gamma}_2| \leqslant T \\ |\ensuremath{\gamma}_1-\ensuremath{\gamma}_2| < 1}}
\frac{n^22^nm(\ensuremath{\gamma}_1)m(\ensuremath{\gamma}_2)}{|\ensuremath{\gamma}_1||\ensuremath{\gamma}_2|}
\ll n^2 2^n \sum_{|\ensuremath{\gamma}| \geqslant T_0} \frac{\log^3(|\ensuremath{\gamma}|+3)}
{|\ensuremath{\gamma}|^2} \ll \frac{n^2 2^n \log^5 T_0}{T_0}, \end{align*}
applying Lemma~\ref{NTchi}. The terms where $|\ensuremath{\gamma}_1-\ensuremath{\gamma}_2|>1$ on the right hand side of \eqref{lem3.7} total \begin{align*}
&\ll \sum_{\substack{T_0 \leqslant |\ensuremath{\gamma}_1|, |\ensuremath{\gamma}_2| \leqslant T \\ |\ensuremath{\gamma}_1-\ensuremath{\gamma}_2| > 1}}
\frac{2^{2n}m(\ensuremath{\gamma}_1)m(\ensuremath{\gamma}_2)}{|\ensuremath{\gamma}_1||\ensuremath{\gamma}_2||\ensuremath{\gamma}_1-\ensuremath{\gamma}_2|} \iiiint\limits_{u_j\geqslant v_j+2^{-n}}
\frac{ e^{-2^nv_1-2^nv_2}}{(u_1-v_1)(u_2-v_2)}du_j d v_j \\
&\ll \sum_{\substack{T_0 \leqslant |\ensuremath{\gamma}_1|, |\ensuremath{\gamma}_2| \\ |\ensuremath{\gamma}_1-\ensuremath{\gamma}_2| > 1}}
\frac{n^2\log(|\ensuremath{\gamma}_1|+3)\log(|\ensuremath{\gamma}_2|+3)}{|\ensuremath{\gamma}_1||\ensuremath{\gamma}_2||\ensuremath{\gamma}_1-\ensuremath{\gamma}_2|} \ll n^2 \frac{\log^5 T_0}{T_0}, \end{align*} by Lemma \ref{sumg1g2}. Summing over $n$ proves the lemma.
\end{document}
\begin{document}
\begin{abstract} We study the representation theory of quantizations of Gieseker moduli spaces. Namely, we prove the localization theorems for these algebras, describe their finite dimensional representations and two-sided ideals as well as their categories $\mathcal{O}$ in some special cases. We apply this to prove our conjecture with Bezrukavnikov on the number of finite dimensional irreducible representations of quantized quiver varieties for quivers of affine type. \end{abstract} \title{Etingof conjecture for quantized quiver varieties II: affine quivers} \tableofcontents \section{Introduction}\label{S_intro} \subsection{Classical and quantum quiver varieties}\label{SS_quiver_intro} This paper continues the study of the representation theory of quantized quiver varieties initiated in \cite{BL}. So we start by recalling Nakajima quiver varieties and their quantizations.
Let $Q$ be a quiver (=oriented graph, we allow loops and multiple edges). We can formally represent $Q$ as a quadruple $(Q_0,Q_1,t,h)$, where $Q_0$ is a finite set of vertices, $Q_1$ is a finite set of arrows, $t,h:Q_1\rightarrow Q_0$ are maps that to an arrow $a$ assign its tail and head. In this paper we are interested in the case when $Q$ is of affine type, i.e., $Q$ is an extended quiver of type $A,D,E$.
Pick vectors $v,w\in \ZZ_{\geqslant 0}^{Q_0}$ and vector spaces $V_i,W_i$ with $\dim V_i=v_i, \dim W_i=w_i$. Consider the (co)framed representation space $$R=R(v,w):=\bigoplus_{a\in Q_1}\Hom(V_{t(a)},V_{h(a)})\oplus \bigoplus_{i\in Q_0} \Hom(V_i,W_i).$$ We will also consider its cotangent bundle $T^*R=R\oplus R^*$, this is a symplectic vector space that can be identified with $$\bigoplus_{a\in Q_1}\left(\Hom(V_{t(a)},V_{h(a)})\oplus \Hom(V_{h(a)}, V_{t(a)})\right)\oplus \bigoplus_{i\in Q_0} \left(\Hom(V_i,W_i)\oplus \Hom(W_i,V_i)\right).$$ The group $G:=\prod_{k\in Q_0}\GL(V_k)$ naturally acts on $T^*R$ and this action is Hamiltonian. Its moment map $\mu:T^*R\rightarrow \mathfrak{g}^*$ is dual to $x\mapsto x_R:\mathfrak{g}\rightarrow \C[T^*R]$, where $x_R$ stands for the vector field on $R$ induced by $x\in \mathfrak{g}$.
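For example, in the Gieseker case considered below (one vertex carrying one loop, $v=n$, $w=r$), a point of $T^*R$ is a quadruple $(A,B,i,j)$ with $A,B\in \End(\C^n)$, $i\in\Hom(\C^r,\C^n)$, $j\in\Hom(\C^n,\C^r)$, and the moment map takes the familiar ADHM form (up to a sign convention and the identification $\gl_n\cong\gl_n^*$ via the trace form):
$$\mu(A,B,i,j)=[A,B]+ij.$$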
Fix a stability condition $\theta\in \mathbb{Z}^{Q_0}$ that is thought of as a character of $G$ via $\theta((g_k)_{k\in Q_0})=\prod_{k\in Q_0}\det(g_k)^{\theta_k}$. Then, by definition, the quiver variety $\M^\theta(v,w)$ is the GIT Hamiltonian reduction $\mu^{-1}(0)^{\theta-ss}\quo G$. We are interested in two extreme cases: when $\theta$ is generic (and so $\M^\theta(v,w)$ is smooth and symplectic) and when $\theta=0$ (and so $\M^\theta(v,w)$ is affine). We will write $\M(v,w)$ for $\operatorname{Spec}(\C[\M^\theta(v,w)])$; this is an affine variety independent of $\theta$, and the natural projective morphism $\rho:\M^\theta(v,w)\rightarrow \M(v,w)$ is a resolution of singularities.
Under an additional restriction on $v$, we have the equality $\M(v,w)=\M^0(v,w)$. Namely, let $\mathfrak{g}(Q)$ be the affine Kac-Moody algebra associated to $Q$. Let us set $\omega:=\sum_{i\in Q_0}w_i\omega^i, \nu:=\omega-\sum_{i\in Q_0}v_i\alpha^i$, where we write $\omega^i$ for the fundamental weight and $\alpha^i$ for a simple root corresponding to $i\in Q_0$. Then we have $\M^0(v,w)=\M(v,w)$ provided $\nu$ is dominant.
Note also that we have compatible $\C^\times$-actions on $\M^\theta(v,w),\M(v,w)$ induced from the action on $T^*R$ given by $t.(r,\alpha):=(t^{-1}r,t^{-1}\alpha), r\in R, \alpha\in R^*$.
The special case of most interest and importance for us in this paper is that of the Gieseker moduli spaces $\M^\theta(n,r)$, where $n,r\in \mathbb{Z}_{>0}$. It corresponds to the case when $Q$ is the quiver with a single vertex and a single arrow (which is a loop), with $v=n, w=r$. This space parameterizes torsion free sheaves of rank $r$ and degree $n$ on $\mathbb{P}^2$ trivialized at the line at infinity (but we will not need this description). The importance of this case in our work is of the same nature as in the work of Maulik and Okounkov, \cite{MO}, on computing the quantum cohomology of quiver varieties.
Now let us proceed to the quantum setting. We will work with quantizations of $\M^\theta(v,w),\M(v,w)$. Consider the algebra $D(R)$ of differential operators on $R$. The group $G$ naturally acts on $D(R)$ with a quantum comoment map $\Phi:\mathfrak{g}\rightarrow D(R), x\mapsto x_R$. We can consider the quantum Hamiltonian reduction $\mathcal{A}^0_\lambda(v,w)=[D(R)/D(R)\{x_R-\langle \lambda,x\rangle\}]^G$. It is a quantization of $\M^0(v,w)=\M(v,w)$ when $\nu$ is dominant. In the general case one can define a quantization of $\M(v,w)$ in two equivalent ways: as an algebra $\mathcal{A}^0_{\lambda'}(v',w)$ for suitable $\lambda'$ and $v'$ (thanks to quantized LMN isomorphisms from \cite[2.2]{BL}) or as the algebra of global sections of a suitable microlocal sheaf on $\M^\theta(v,w)$ (where $\theta$ is generic). Let us recall the second approach. We can microlocalize $D(R)$ to a sheaf in conical topology (i.e., the topology where ``open'' means ``Zariski open'' and $\C^\times$-stable) so that we can consider the restriction of $D(R)$ to $(T^*R)^{\theta-ss}$; let $\mathcal{D}^{\theta-ss}$ denote this restriction. Let $\pi$ stand for the quotient morphism $\mu^{-1}(0)^{\theta-ss}\rightarrow\mu^{-1}(0)^{\theta-ss}/G=\M^\theta(v,w)$. Let us notice that $\mathcal{D}^{\theta-ss}/\mathcal{D}^{\theta-ss}\{x_R-\langle \lambda,x\rangle\}$ is scheme-theoretically supported on $\mu^{-1}(0)^{\theta-ss}$ and so can be regarded as a sheaf in conical topology on that variety. Set $$\mathcal{A}^\theta_{\lambda}(v,w):=[\pi_*(\mathcal{D}^{\theta-ss}/\mathcal{D}^{\theta-ss}\{x_R-\langle \lambda,x\rangle\})]^G,$$ this is a sheaf (in conical topology) of filtered algebras on $\M^\theta(v,w)$ such that $\operatorname{gr}\mathcal{A}^\theta_{\lambda}(v,w)=\mathcal{O}_{\M^\theta(v,w)}$. By the Grauert-Riemenschneider theorem, $H^i(\mathcal{O}_{\M^\theta(v,w)})=0$ for $i>0$. It follows that $\mathcal{A}^\theta_\lambda(v,w)$ has no higher cohomology as well, and $\operatorname{gr} \Gamma(\mathcal{A}^\theta_\lambda(v,w))=\C[\M(v,w)]$. One can show, see \cite[3.3]{BPW} or \cite[2.2]{BL}, that $\Gamma(\mathcal{A}^\theta_\lambda(v,w))$ is independent of the choice of $\theta$. We will write $\mathcal{A}_\lambda(v,w)$ for $\Gamma(\mathcal{A}^\theta_\lambda(v,w))$.
In this paper we will be interested in the representation theory of the algebras $\mathcal{A}_\lambda(v,w)$ and, especially, of $\mathcal{A}_\lambda(n,r)$ (quantizations of $\M(n,r)$). Let us point out that the representations of $\mathcal{A}_\lambda^\theta(v,w)$ and of $\mathcal{A}_\lambda(v,w)$ are closely related. Namely, we can consider the category of coherent $\mathcal{A}_\lambda^\theta(v,w)$-modules to be denoted by $\mathcal{A}_\lambda^\theta(v,w)\operatorname{-mod}$ and the category $\mathcal{A}_\lambda(v,w)\operatorname{-mod}$ of all finitely generated $\mathcal{A}_\lambda(v,w)$-modules. When the homological dimension of $\mathcal{A}_\lambda(v,w)$ is finite, we get adjoint functors $$R\Gamma_\lambda^\theta: D^b(\mathcal{A}_\lambda^\theta(v,w)\operatorname{-mod})\rightleftarrows D^b(\mathcal{A}_\lambda(v,w)\operatorname{-mod}):L\Loc_\lambda^\theta,$$ where $R\Gamma_\lambda^\theta$ is the derived global section functor, and $L\operatorname{Loc}_\lambda^\theta:=\mathcal{A}_\lambda^\theta(v,w)\otimes^L_{\mathcal{A}_\lambda(v,w)}\bullet$. It turns out that these functors are equivalences, \cite{MN_der}. In particular, they restrict to mutually inverse equivalences \begin{equation}\label{eq:fin_dim_equi} D^b_{\rho^{-1}(0)}(\mathcal{A}_\lambda^\theta(v,w)\operatorname{-mod})\rightleftarrows D^b_{fin}(\mathcal{A}_\lambda(v,w)\operatorname{-mod}), \end{equation} where on the left hand side we have the category of all complexes with homology supported on $\rho^{-1}(0)$, while on the right hand side we have all complexes with finite dimensional homology.
\subsection{Results in the Gieseker cases}\label{SS_Gies_result} In this paper we are mostly dealing with the algebras $\mathcal{A}_\lambda(n,r)$. Note that $R=\C\oplus \bar{R}$, where $\bar{R}:=\mathfrak{sl}_n(\C)\oplus \Hom(\C^n,\C^r)$ and the action of $G$ on $\C$ is trivial. So we have $\M^\theta(n,r)=\C^2\times \bar{\M}^\theta(n,r)$ and $\mathcal{A}_\lambda(n,r)=D(\C)\otimes\bar{\mathcal{A}}_{\lambda}(n,r)$, where $\bar{\M}^\theta(n,r),\bar{\mathcal{A}}_\lambda(n,r)$ are the reductions associated to the $G$-action on $\bar{R}$. We will consider the algebra $\bar{\mathcal{A}}_\lambda(n,r)$ rather than $\mathcal{A}_\lambda(n,r)$; all interesting representation theoretic questions about $\mathcal{A}_\lambda(n,r)$ can be reduced to those about $\bar{\mathcal{A}}_\lambda(n,r)$.
There is one case that was studied very explicitly in the last decade: $r=1$. Here the variety $\M^\theta(n,1)$ is the Hilbert scheme $\operatorname{Hilb}^n(\C^2)$ of $n$ points on $\C^2$ and $\M(n,1)=\C^{2n}/\mathfrak{S}_n$ (the $n$th symmetric power of $\C^2$). The quantization $\bar{\mathcal{A}}_\lambda(n,1)$ is the spherical subalgebra in the Rational Cherednik algebra $H_\lambda(n)$ for the pair $(\mathfrak{h},\mathfrak{S}_n)$, where $\mathfrak{h}$ is the reflection representation of $\mathfrak{S}_n$, see \cite{GG} for details. The representation theory of $\bar{\mathcal{A}}_\lambda(n,1)$ was studied, for example, in \cite{BEG,GS1,GS2,rouqqsch,KR,BE,sraco,Wilcox}. In particular, the following is known: \begin{enumerate} \item when (=for which $\lambda$) this algebra has finite homological dimension, \cite{BE}, \item how to classify its finite dimensional irreducible representations, \cite{BEG}, \item how to compute characters of irreducible modules in the so-called category $\mathcal{O}$, \cite{rouqqsch}, \item how to determine the supports of these modules, \cite{Wilcox}, \item how to describe the two-sided ideals of $\bar{\mathcal{A}}_\lambda(n,1)$, \cite{sraco}, \item when an analog of the Beilinson-Bernstein localization theorem holds, \cite{GS1,KR}. \end{enumerate} We will address analogs of (1),(2),(5),(6) for $\bar{\mathcal{A}}_\lambda(n,r)$ (as well as relatively easy parts of (3) and (4)) in the present paper. We plan to address an analog of (4) in a subsequent paper, while (3) is a work in progress.
Before we state our main results, let us point out that there is yet another case when the algebra $\bar{\mathcal{A}}_\lambda(n,r)$ is classical, namely when $n=1$. In this case, $\bar{\mathcal{A}}_\lambda(1,r)=D^\lambda(\mathbb{P}^{r-1})$, the algebra of $\lambda$-twisted differential operators on $\mathbb{P}^{r-1}$.
First, let us give answers to (1) and (6).
\begin{Thm}\label{Thm:loc} The following is true. \begin{enumerate} \item The algebra $\bar{\mathcal{A}}_\lambda(n,r)$ has finite global dimension (equivalently, $R\Gamma_\lambda^\theta$ is an equivalence) if and only if $\lambda$ is not of the form $\frac{s}{m}$, where $1\leqslant m\leqslant n$ and $-rm<s<0$. \item For $\theta>0$, the abelian localization holds for $\lambda$ (i.e., $\Gamma_\lambda^\theta$ is an equivalence) if $\lambda$ is not of the form $\frac{s}{m}$, where $1\leqslant m\leqslant n$ and $s<0$. For $\theta<0$, the abelian localization holds for $\lambda$ if and only if $\lambda$ is not of the form $\frac{s}{m}$ with $1\leqslant m\leqslant n$ and $s> -rm$. \end{enumerate} \end{Thm}
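For instance, unwinding the condition in part (1) for $(n,r)=(2,2)$ (a direct check from the statement, recorded only as an illustration): the algebra $\bar{\mathcal{A}}_\lambda(2,2)$ has infinite global dimension precisely for
$$ \lambda\in \Bigl\{\tfrac{s}{m}\,:\, s\in \ZZ,\ 1\leqslant m\leqslant 2,\ -2m<s<0\Bigr\}=\Bigl\{-\tfrac{3}{2},\,-1,\,-\tfrac{1}{2}\Bigr\}. $$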
In fact part (2) is a straightforward consequence of (1) and results of McGerty and Nevins, \cite{MN_ab}.
Let us proceed to classification of finite dimensional representations.
\begin{Thm}\label{Thm:fin dim} The following holds. \begin{enumerate} \item The sheaf $\bar{\mathcal{A}}_\lambda^\theta(n,r)$ has a representation supported on $\bar{\rho}^{-1}(0)$ if and only if $\lambda=\frac{s}{n}$ with $s$ and $n$ coprime. If that is the case, then the category $\bar{\mathcal{A}}_\lambda^\theta(n,r)\operatorname{-mod}_{\bar{\rho}^{-1}(0)}$ is equivalent to $\operatorname{Vect}$. \item The algebra $\bar{\mathcal{A}}_\lambda(n,r)$ has a finite dimensional representation if and only if $\lambda=\frac{s}{n}$ with $s$ and $n$ coprime and the homological dimension of $\bar{\mathcal{A}}_\lambda(n,r)$ is finite. If that is the case, then the category $\bar{\mathcal{A}}_\lambda(n,r)\operatorname{-mod}_{fin}$ of finite dimensional $\bar{\mathcal{A}}_\lambda(n,r)$-modules is equivalent to $\operatorname{Vect}$. \end{enumerate} \end{Thm} In fact, (2) is an easy consequence of (1) and Theorem \ref{Thm:loc}.
Now let us proceed to the description of two-sided ideals (in the finite homological dimension case).
\begin{Thm}\label{Thm:ideals} Assume that $\bar{\mathcal{A}}_\lambda(n,r)$ has finite homological dimension and let $m$ stand for the denominator of $\lambda$ (equal to $+\infty$ if $\lambda$ is not rational). Then there are $\lfloor n/m\rfloor$ proper two-sided ideals in $\bar{\mathcal{A}}_\lambda(n,r)$, all of them are prime, and they form a chain. \end{Thm}
Finally, let us explain some partial results on a category $\mathcal{O}$ for $\bar{\mathcal{A}}^\theta_\lambda(n,r)$, we will recall necessary definitions below in Subsection \ref{SS_Cat_O}. We use the notation $\mathcal{O}(\mathcal{A}^{\theta}_{\lambda}(n,r))$ for this category. What we need to know now is the following: \begin{itemize} \item The category $\mathcal{O}(\mathcal{A}^{\theta}_{\lambda}(n,r))$ is a highest weight category so it makes sense to speak about standard objects $\Delta(p)$. \item The labeling set for standard objects is naturally identified with the set of $r$-multipartitions of $n$. \end{itemize}
\begin{Thm}\label{Thm:cat_O_easy} If the denominator of $\lambda$ is bigger than $n$, then the category $\mathcal{O}(\mathcal{A}^\theta_\lambda(n,r))$ is semisimple. If the denominator of $\lambda$ equals $n$, the category $\mathcal{O}(\mathcal{A}^\theta_\lambda(n,r))$ has only one nontrivial block. That block is equivalent to the nontrivial block of $\mathcal{O}(\mathcal{A}^\theta_{1/nr}(nr,1))$. \end{Thm} In some cases, we can say which simple objects belong to the nontrivial block, we will do this below.
\subsection{Counting result} We are going to describe $K_0(\mathcal{A}^\theta_\lambda(v,w)\operatorname{-mod}_{\rho^{-1}(0)})$ (we always consider complexified $K_0$) in the case when $Q$ is of affine type, confirming \cite[Conjecture 1.1]{BL} in this case. The dimension of this $K_0$ coincides with the number of finite dimensional irreducible representations of $\mathcal{A}_\lambda(v,w)$ provided $\lambda$ is regular, i.e., the homological dimension of $\mathcal{A}_\lambda(v,w)$ is finite.
Recall that, by \cite{Nakajima}, the homology group $H_{mid}(\M^\theta(v,w))$ (where ``mid'' stands for $\dim_\C \M^\theta(v,w)$) is identified with the weight space $L_\omega[\nu]$ of weight $\nu$ (see Subsection \ref{SS_quiver_intro}) in the irreducible integrable $\mathfrak{g}(Q)$-module $L_\omega$ with highest weight $\omega$. Further, by \cite{BarGin}, we have a natural inclusion $K_0(\mathcal{A}_\lambda(v,w)\operatorname{-mod}_{\rho^{-1}(0)})\hookrightarrow H_{mid}(\M^\theta(v,w))$ given by the characteristic cycle map $\mathsf{CC}_\lambda$. We will elaborate on this below in Subsection \ref{SS_loc}. We want to describe the image of $\mathsf{CC}_\lambda$.
Following \cite{BL}, we define a subalgebra $\a(=\a_\lambda)\subset\mathfrak{g}(Q)$ and an $\a$-submodule $L_\omega^{\a}\subset L_\omega$. By definition, $\a$ is spanned by the Cartan subalgebra $\mathfrak{t}\subset \mathfrak{g}(Q)$ and all root spaces $\mathfrak{g}_\beta(Q)$ where $\beta=\sum_{i\in Q_0}b_i\alpha^i$ is a real root with $\sum_{i\in Q_0}b_i\lambda_i\in \mathbb{Z}$. For $L_\omega^{\a}$ we take the $\a$-submodule of $L_\omega$ generated by the extremal weight spaces (those where the weight is conjugate to the highest one under the action of the Weyl group).
\begin{Thm}\label{Thm:counting} Let $Q$ be of affine type. The image of $K_0(\mathcal{A}_\lambda(v,w)\operatorname{-mod}_{\rho^{-1}(0)})$ in $L_\omega[\nu]$ under $\mathsf{CC}_\lambda$ coincides with $L_\omega^\a\cap L_\omega[\nu]$. \end{Thm}
\subsection{Content of the paper} Section \ref{S_prelim} contains some known results and constructions. In Section \ref{S_parab_ind} we introduce our main tool for the inductive study of categories $\mathcal{O}$. In Section \ref{S_fin_dim} we will prove Theorems \ref{Thm:fin dim} (most of it, in fact) and \ref{Thm:cat_O_easy}. In Section \ref{S_loc} we prove Theorem \ref{Thm:loc}. Finally, in Section \ref{S_aff_wc_count} we prove Theorem \ref{Thm:ideals} and also complete the proof of Theorem \ref{Thm:counting}. At the beginning of each section, its content is described in more detail.
{\bf Acknowledgments}. I would like to thank Roman Bezrukavnikov, Dmitry Korb, Davesh Maulik, Andrei Okounkov and Nick Proudfoot for stimulating discussions. My work was supported by the NSF under Grant DMS-1161584.
\section{Preliminaries}\label{S_prelim} This section basically contains no new results. We start with discussing conical symplectic resolutions. Then, in Subsection \ref{SS_Gies}, we list some further properties of Gieseker moduli spaces. Subsection \ref{SS_leaves} describes the symplectic leaves of the varieties $\M(v,w)$.
After that, we proceed to quantizations. We discuss some further properties, with emphasis on the Gieseker case, in Subsection \ref{SS_quant}. We discuss (derived and abelian) localization theorems for quantized quiver varieties, Subsection \ref{SS_loc}. Then we proceed to the homological duality and wall-crossing functors, one of our main tools to study the representation theory of quantized quiver varieties, Subsection \ref{SS_dual_WC}. In Subsection \ref{SS_Cat_O}, we recall the definition of categories $\mathcal{O}$ and list some basic properties. Then, in Subsection \ref{SS_HC_bimod}, we recall one more important object in this representation theory, Harish-Chandra bimodules. Our main tool to study those is restriction (to so called {\it quantum slices}) functors, defined in this context in \cite{BL}. We recall quantum slices in Subsection \ref{SS_quant_slice} and the restriction functors in Subsection \ref{SS_restr_fun}.
\subsection{Symplectic resolutions}\label{SS_sympl_res} Although in this paper we are primarily interested in the case of Nakajima quiver varieties for quiver of affine types (and, more specifically, Gieseker moduli spaces) some of our results easily generalize to symplectic resolutions of singularities. Here we recall the definition and describe some structural theory of these varieties due to Namikawa, \cite{Namikawa}. Our exposition follows \cite[Section 2]{BPW}.
Let $X$ be a smooth symplectic algebraic variety. By definition, $X$ is called a symplectic resolution of singularities if $\C[X]$ is finitely generated and the natural morphism $X\rightarrow X_0:=\operatorname{Spec}(\C[X])$ is a resolution of singularities. In this paper we only consider symplectic resolutions $X$ that are projective over $X_0$. We also only care about resolutions coming with additional structure, a $\C^\times$-action satisfying the following two conditions: \begin{itemize} \item The grading induced by the $\C^\times$-action on $\C[X]$ is positive, i.e., $\C[X]=\bigoplus_{i\geqslant 0}\C[X]_i$ and $\C[X]_0=\C$. \item $\C^\times$ rescales the symplectic form $\omega$, more precisely, there is a positive integer $d$ such that $t.\omega=t^d\omega$ for all $t\in \C^\times$. \end{itemize} We call $X$ equipped with such a $\C^\times$-action a {\it conical symplectic resolution}.
We remark that $X$ admits a universal Poisson deformation, $\widetilde{X}$, over $H^2(X)$. This deformation comes with a $\C^\times$-action and the $\C^\times$-action contracts $\tilde{X}$ to $X$, see \cite[2.2]{quant} or \cite[2.1]{BPW} for details. The generic fiber of $\widetilde{X}$ is affine.
Namikawa associated a Weyl group $W$ to $X$ that acts on $H^2(X,\mathbb{R})$ as a crystallographic reflection group. We have $\operatorname{Pic}X=H^2(X,\mathbb{Z})$. The (closure of the) movable cone of $X$ in $H^2(X,\mathbb{R})$ is a fundamental chamber for $W$. Furthermore, for any two conical symplectic resolutions $X,X'$ of $X_0$, there are open subsets $U\subset X$, $U'\subset X'$ with complements of codimension bigger than $1$ such that $U\cong U'$. So we get an isomorphism $\operatorname{Pic}(X)\cong\operatorname{Pic}(X')$ that preserves the movable cones.
Namikawa has shown that there are finitely many isomorphism classes of conical symplectic resolutions of $X_0$. Moreover, he proved there is a finite $W$-invariant union $\mathcal{H}$ of hyperplanes in $H^2(X,\mathbb{R})$ with the following properties: \begin{itemize} \item The union of the complexifications of the hyperplanes in $\mathcal{H}$ is precisely the locus in $H^2(X,\C)$ over which $\widetilde{X}\rightarrow \widetilde{X}_0$ is not an isomorphism. \item The closure of the movable cone is the union of some chambers for $\mathcal{H}$. \item Each chamber inside the movable cone is the nef cone of exactly one symplectic resolution. \end{itemize}
For $\theta\in H^2(X,\mathbb{R})\setminus \mathcal{H}$, let $X^\theta$ be the resolution corresponding to the element of $W\theta$ lying in the movable cone.
We will not need to compute the Namikawa Weyl group. Let us point out that it is trivial provided $X_0$ has no leaves of codimension 2 (we remark that it is known that the number of leaves is always finite).
An example of a conical symplectic resolution is provided by $\M^\theta(v,w)\rightarrow \M(v,w)$ in the case when $Q$ is an affine quiver ($d=2$ in this case). We have a natural map $\param:=\C^{Q_0}\rightarrow H^2(\M^\theta(v,w))$, which is always injective. In the case of an affine quiver, this map can actually be shown to be an isomorphism, but we will not need that: in the quiver variety setting one can retell the constructions above using $\param$ instead of $H^2(\M^\theta(v,w))$.
\subsection{Gieseker moduli spaces}\label{SS_Gies} We will need some additional facts about varieties $\M^\theta(n,r)$. First of all, let us point out that $\dim \M^\theta(n,r)=2nr$.
Let us note that we have an isomorphism $\M^\theta(n,r)\cong \M^{-\theta}(n,r)$ (of symplectic varieties with $\C^\times$-actions).
Define $R^\vee:=\End(V^*)^{\oplus 2}\oplus \Hom(V^*,W^*)\oplus \Hom(W^*,V^*)$. We have an isomorphism $T^*R\rightarrow R^\vee$ given by $\iota:(A,B,i,j)\mapsto (-B^*, A^*,-j^*, i^*)$; here we write $i$ for an element in $\Hom(W,V)$ and $j$ for an element of $\Hom(V,W)$. This is a symplectomorphism. Choosing bases in $V$ and $W$, we identify $T^*R$ with $R^\vee$. Note that, under this identification, $\iota$ is not $G$-equivariant: we have $\iota(g.r)=(g^t)^{-1}\iota(r)$, where the superscript ``t'' stands for the matrix transposition. Also note that $(A,B,i,j)$ is $\det$-stable (equivalently, there is no nonzero $A,B$-stable subspace in $\ker j$) if and only if $(-B^*,A^*,-j^*,i^*)$ is $\det^{-1}$-stable (i.e., $\C\langle B^*,A^*\rangle \operatorname{im}j^*=V^*$). It follows that $\M^\theta(n,r)\cong \M^{-\theta}(n,r)$.
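For the reader's convenience, the stated equivariance can be checked componentwise: after choosing bases,
$$ (gAg^{-1})^{t}=(g^{t})^{-1}A^{t}g^{t},\qquad (gi)^{t}=i^{t}g^{t},\qquad (jg^{-1})^{t}=(g^{t})^{-1}j^{t}, $$
which is precisely the action of $(g^{t})^{-1}$ on the quadruple $(-B^{t},A^{t},-j^{t},i^{t})$ (recall that the two framing maps trade places in the dual data).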
We will also need some information on the cohomology of $\M^\theta(n,r)$.
\begin{Lem}\label{Lem:top_cohom} We have $H^i(\M^\theta(n,r))=0$ for odd $i$ or for $i\geqslant 2nr$ and $\dim H^2(\M^\theta(n,r))=\dim H^{2nr-2}(\M^\theta(n,r))=1$. In particular, $\dim H_{2nr-2}(\M^\theta(n,r))=1$. \end{Lem} \begin{proof} That the odd cohomology groups vanish is \cite[Theorem 3.7,(4)]{NY} (or a general fact about symplectic resolutions, see \cite[Proposition 2.5]{BPW}). According to \cite[Theorem 3.8]{NY}, we have
$$\sum_{i}\dim H^{2i}(\M^\theta(n,r))t^i=\sum_\lambda t^{\sum_{i=1}^r (r|\lambda^{(i)}|-i (\lambda^{(i)t})_1)},$$ where the summation is over the set of the $r$-multipartitions $\lambda=(\lambda^{(1)},\ldots,\lambda^{(r)})$. The highest power of $t$ in the right hand side is $rn-1$, it occurs for a single $\lambda$, namely, for $\lambda=((n),\varnothing,\ldots,\varnothing)$. This shows $\dim H^{2nr-2}(\M^\theta(n,r))=1$. The equality $\dim H_{2nr-2}(\M^\theta(n,r))=1$ follows. Also there is a single $r$-multipartition of $n$ with
$\sum_{i=1}^r (r|\lambda^{(i)}|-i (\lambda^{(i)t})_1)=1$, namely $(\varnothing,\ldots,\varnothing,(1),(1^{n-1}))$. This implies $\dim H^2(\M^\theta(n,r))=1$.
\end{proof}
It follows, in particular, that the universal deformation of $\M^\theta(n,r)$ coincides with the ``universal quiver variety'' $\M^\theta_{\param}(n,r):=\mu^{-1}(\mathfrak{g}^{*G})^{\theta-ss}/G$. The isomorphism $\M^\theta(n,r)\cong \M^{-\theta}(n,r)$ extends to $\M^{-\theta}_{\param}(n,r)\cong \M^\theta_\param(n,r)$ that is, however, not an isomorphism of schemes over $\param$, but rather induces the multiplication by $-1$ on the base.
On $\M^\theta(n,r)$ we have an action of $\GL(r)\times \C^\times$ induced from the following action on $T^*R$: $(X,t).(A,B,i,j)=(tA,t^{-1}B, Xi,jX^{-1})$. We will need a description of certain torus fixed points. First, let $T$ denote the maximal torus in $\GL(r)$. Then, by \cite[Lemma 3.2, Section 7]{Nakajima_tensor}, we have \begin{equation}\label{eq:fixed_pt_decomp}\M^\theta(n,r)^T=\bigsqcup_{n_1+\ldots+n_r=n}\prod_{i=1}^r \M^\theta(n_i,1).\end{equation} The embedding $\prod_{i=1}^r \M^\theta(n_i,1)\hookrightarrow \M^\theta(n,r)$ is induced from $\bigoplus_{i=1}^r T^*R(n_i,1)\hookrightarrow T^*R(n,r)$.
Now set $\tilde{T}:=T\times \C^\times$. Then $\M^\theta(n,r)^{\tilde{T}}$ is a finite set that is in a natural bijection with the set of the $r$-multipartitions of $n$; this follows from (\ref{eq:fixed_pt_decomp}) and the classical fact that $\M^\theta(n_i,1)^{\C^\times}$ is identified with the set of the partitions of $n_i$. More precisely, $\M^\theta(n_i,1)^{\C^\times}=\M^\theta(n_i,1)^{\C^\times\times \C^\times}$, where the second copy of $\C^\times$ is contracting. When $\theta>0$, we label the fixed point corresponding to $(A,B,i,j)$ (we automatically have $j=0$) by the partition of $n_i$ whose parts are the sizes of the Jordan blocks of $B$.
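For instance, a direct count from (\ref{eq:fixed_pt_decomp}) shows that $\M^\theta(2,2)^{\tilde{T}}$ consists of five points, labeled by the $2$-multipartitions of $2$:
$$ ((2),\varnothing),\quad ((1,1),\varnothing),\quad ((1),(1)),\quad (\varnothing,(2)),\quad (\varnothing,(1,1)). $$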
\subsection{Symplectic leaves}\label{SS_leaves} Here we want to describe the symplectic leaves of $\M^0_\lambda(v,w):=\mu^{-1}(\lambda)\quo G$ and study the structure of the variety near a symplectic leaf.
Let us, first, study the leaf containing $0\in \M^0(v,w)$. Similarly to Subsection \ref{SS_Gies_result}, consider the space $\bar{R}$ that is obtained similarly to $R$ but with assigning $\mathfrak{sl}(V_i)$ instead of $\gl(V_i)$ to any loop $a$ with $t(a)=h(a)=i$ so that $R=\overline{R}\oplus \C^k$, where $k$ is the total number of loops. Let $\bar{\M}^0(v,w)$ be the reduction of $T^*\bar{R}$ so that $\M^0(v,w)=\bar{\M}^0(v,w)\times \C^{2k}$.
\begin{Lem}\label{Lem:0_leaf} The point $0$ is a single leaf of $\bar{\M}^0(v,w)$. \end{Lem} \begin{proof} It is enough to show that the maximal ideal of $0$ in $\C[T^*\bar{R}]^G$ is Poisson. Since $\bar{R}$ does not include the trivial $G$-module as a direct summand, we see that all nonconstant homogeneous elements in $\C[T^*\bar{R}]^G$ have degree $2$ or higher. It follows that the bracket of any two homogeneous invariants of degrees $a,b\geqslant 2$ has degree $a+b-2\geqslant 2$ (the Poisson bracket on $T^*\bar{R}$ has degree $-2$), so the maximal ideal of $0$ is Poisson, and our claim is proved. \end{proof}
Now let us describe the slices to symplectic leaves in $\M^0_\lambda(v,w)$, see, for example, \cite[2.1.6]{BL}. Pick $x\in \M^0_\lambda(v,w)$. We can view $T^*R$ as the representation space of dimension $(v,1)$ for the double $DQ^w$ of the quiver $Q^w$ obtained from $Q$ by adjoining the additional vertex $\infty$ with $w_i$ arrows from $i$ to $\infty$. Pick a semisimple representation of $DQ^w$ lying over $x$. This representation decomposes as $r^0\oplus r^1\otimes U_1\oplus\ldots\oplus r^k\otimes U_k$, where $r^0$ is an irreducible representation of $DQ^w$ with dimension $(v^0,1)$ and $r^1,\ldots,r^k$ are pairwise nonisomorphic irreducible representations of $DQ$ with dimension $v^1,\ldots,v^k$. All representations $r^0,\ldots,r^k$ are mapped to $\lambda$ under the moment map. Consider the quiver $\underline{Q}:=\underline{Q}_x$ with vertices $1,\ldots,k$ and $-(v^i,v^j)$ arrows between vertices $i,j$ with $i\neq j$ and $1-\frac{1}{2}(v^i,v^i)$ loops at the vertex $i$. We consider the dimension vector $\underline{v}:=(\dim U_i)_{i=1}^k$ and the framing $\underline{w}=(\underline{w}_i)_{i=1}^k$ with $\underline{w}_i=w\cdot v^i-(v^0,v^i)$.
\begin{Prop}\label{Prop:leaves} The following is true: \begin{enumerate} \item The symplectic leaves of $\M^0_\lambda(v,w)$ are parameterized by the decompositions $v=v^0+\underline{v}_1 v^1\oplus \ldots \oplus \underline{v}_k v^k$ (we can permute summands with $\underline{v}_i=\underline{v}_j$ and $v^i=v^j$) subject to the following conditions: there is an irreducible representation $r^0$ of $DQ^w$ of dimension $(v^0,1)$ and pairwise different irreducible representations $r^1,\ldots,r^k$ of $DQ$ of dimensions $v^1,\ldots,v^k$, all of them mapping to $\lambda$ under the moment map. \item The leaf corresponding to the decomposition as above consists of the isomorphism classes of the representations $r^0\oplus r^1\otimes U_1\oplus\ldots\oplus r^k\otimes U_k$, where $r^0,\ldots,r^k$ are as above. \item There is a transversal slice to the leaf as above that is isomorphic to the formal neighborhood of 0 in the quiver variety $\underline{\bar{\M}}^0_0(\underline{v},\underline{w})$ for the quiver $\underline{Q}$. \end{enumerate} \end{Prop} \begin{proof} We have a decomposition $\M_\lambda^0(v,w)^{\wedge_x}\cong D\times \underline{\bar{\M}}^0(\underline{v},\underline{w})^{\wedge_0}$ of Poisson formal schemes, where $D$ stands for the symplectic formal disk and $\bullet^{\wedge_x}$ indicates the formal neighborhood of $x$. From Lemma \ref{Lem:0_leaf} it now follows that the locus described in (2) is a union of leaves. So in order to prove the entire proposition, it remains to show that the locus in (2) is irreducible. This follows from \cite[Theorem 1.2]{CB_geom}. \end{proof}
Now assume that $X\rightarrow X_0$ is a symplectic resolution (not necessarily conical). Then $X_0$ has finitely many symplectic leaves. Pick a point $x\in X_0$ and consider its formal neighborhood $X_0^{\wedge_x}$. Then, according to Kaledin, \cite{Kaledin_sing}, the Poisson formal scheme $X_0^{\wedge_x}$ decomposes into the product of two formal schemes: the symplectic formal disk $D$, and a ``slice'' $X_0'$ that is a Poisson formal scheme, where $x$ is a single symplectic leaf.
\subsection{Quantizations}\label{SS_quant} Let $X=X^\theta$ be a conical symplectic resolution corresponding to a parameter $\theta$. Now let us consider quantizations of $X$. We will work with microlocal quantizations. Those are sheaves $\mathcal{A}^\theta$ of algebras in conical topology equipped with the following additional structures: \begin{itemize} \item a complete and separated ascending $\mathbb{Z}$-filtration, $\mathcal{A}^\theta=\bigcup_{i\in \mathbb{Z}}\mathcal{A}^\theta_{\leqslant i}$, \item an action of $\mathbb{Z}/d\mathbb{Z}$ (where $d$ has the same meaning as above) on $\mathcal{A}^\theta$ by filtered algebra automorphisms such that $1\in \mathbb{Z}/d\mathbb{Z}$ acts on $\mathcal{A}^\theta_{\leqslant i}/\mathcal{A}^\theta_{\leqslant i-1}$ by $\exp(2\pi i\sqrt{-1}/d)$. \item an isomorphism $\operatorname{gr}\mathcal{A}^\theta\cong \mathcal{O}_X$ of sheaves of graded algebras. \end{itemize}
Consider the subsheaf $R_\hbar(\mathcal{A}^\theta)$ of $\mathbb{Z}/d\mathbb{Z}$-invariants in the Rees sheaf $R_{\hbar^{1/d}}(\mathcal{A}^\theta)$. Completing $R_\hbar(\mathcal{A}^\theta)$ with respect to the $\hbar$-adic topology, we get a homogeneous quantization of $X$ in the sense of \cite[2.3]{quant}. To get back, we take $\C^\times$-finite sections and mod out $\hbar-1$. It follows that the microlocal quantizations of $X$ are canonically parameterized by $H^2(X,\C)$. We write $\mathcal{A}^\theta_{\hat{\lambda}}$ for the quantization corresponding to $\hat{\lambda}\in H^2(X)$ (and we call $\hat{\lambda}$ the {\it period} of the quantization $\mathcal{A}^\theta_{\hat{\lambda}}$). In fact, we can also quantize the universal deformation $\tilde{X}$ by a microlocal sheaf $\tilde{\mathcal{A}}^\theta$ of $\C[H^2(X)]$-algebras (the canonical quantization from \cite{BK,quant}). We remark that $\mathcal{A}_{\hat{\lambda}}^{\theta,opp}\cong \mathcal{A}^\theta_{-\hat{\lambda}}$, as sheaves of algebras on $X$; this follows from the definition of the canonical quantization.
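Concretely, the Rees sheaf used at the beginning of this paragraph is the usual one attached to the filtration,
$$ R_{\hbar^{1/d}}(\mathcal{A}^\theta)=\bigoplus_{i\in \mathbb{Z}}\mathcal{A}^\theta_{\leqslant i}\,\hbar^{i/d}\subset \mathcal{A}^\theta[\hbar^{1/d},\hbar^{-1/d}], $$
and the $\mathbb{Z}/d\mathbb{Z}$-invariants are understood to be taken with respect to the action combining the automorphism above with the rescaling of $\hbar^{1/d}$ by a primitive $d$th root of unity.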
We write $\mathcal{A}_{\hat{\lambda}}, \tilde{\mathcal{A}}$ for the global sections of $\mathcal{A}^\theta_{\hat{\lambda}},\tilde{\mathcal{A}}^\theta$, these algebras are independent of $\theta$ by \cite[3.3]{BPW}.
When $X$ is a quiver variety $\M^\theta(v,w)$, the quantization $\mathcal{A}^\theta_\lambda(v,w)$ satisfies the assumptions above. As we have mentioned in Subsection \ref{SS_sympl_res}, we can embed $\param=\C^{Q_0}$ into $H^2(X)$. We remark however, that $\mathcal{A}^\theta_{\hat{\lambda}}=\mathcal{A}^\theta_{\hat{\lambda}-\varrho}(v,w)$, where $\varrho$ is half the character of the action of $G$ on $\bigwedge^{top}R^*$, see, e.g., \cite[2.2]{BL}.
Let us now consider the Gieseker case. Here $\varrho=r/2$. So we have \begin{equation}\label{eq:opp_iso} \mathcal{A}_{\lambda}(n,r)^{opp}\cong \mathcal{A}_{-\lambda-r}(n,r). \end{equation}
\begin{Lem}\label{Lem:iso} We have $\mathcal{A}_\lambda(n,r)\cong \mathcal{A}_{-\lambda-r}(n,r)$. \end{Lem} \begin{proof} Recall that $\mathcal{A}_{\lambda}(n,r)\xrightarrow{\sim} \Gamma(\mathcal{A}_{\lambda}^{\pm \theta}(n,r))$. Also recall the identification $\M^{\theta}_{\param}(n,r)\rightarrow \M^{-\theta}_{\param}(n,r)$ that induces $-1$ on $\param$. It follows that $\Gamma(\mathcal{A}_{\hat{\lambda}}^\theta)\cong \Gamma(\mathcal{A}^{-\theta}_{-\hat{\lambda}})$. Since $\mathcal{A}_{\hat{\lambda}}^\theta\cong \mathcal{A}_{\hat{\lambda}-r/2}^\theta(n,r)$, our claim follows.
\end{proof}
We conclude that $\mathcal{A}_{\lambda}(n,r)^{opp}\cong \mathcal{A}_{\lambda}(n,r)$.
We remark that the isomorphism of Lemma \ref{Lem:iso} is similar in spirit to isomorphisms from \cite[Section 3]{BPW} provided by the Namikawa Weyl group action. We would like to point out however that our isomorphism does not reduce to that. Indeed, when $r>2$, the Namikawa Weyl group can be shown trivial because there is no symplectic leaf of codimension $2$ in $\M(n,r)$.
\subsection{Localization theorems}\label{SS_loc} We assume that $X$ is a conical symplectic resolution of $X_0:=\operatorname{Spec}(\C[X])$. Recall that we write $\rho:X\rightarrow X_0$ for the canonical morphism. Let $\mathcal{A}^\theta$ be a quantization of $X$ and $\mathcal{A}$ be its algebra of global sections.
Consider the categories of modules $\mathcal{A}\operatorname{-Mod}\supset \mathcal{A}\operatorname{-mod}$ consisting of all and of finitely generated $\mathcal{A}$-modules. Also consider the category $\mathcal{A}^\theta\operatorname{-Mod}$ of all quasi-coherent $\mathcal{A}^\theta$-modules and $\mathcal{A}^\theta\operatorname{-mod}$ of all coherent $\mathcal{A}^\theta$-modules, i.e., modules that have a global good filtration (a filtration is called good if the associated graded object is a coherent sheaf of $\mathcal{O}_X$-modules).
We have the global section, $\Gamma^\theta$, and localization, $\Loc^\theta:=\mathcal{A}^\theta\otimes_{\mathcal{A}}\bullet$, functors $$\Gamma^\theta: \mathcal{A}^\theta\operatorname{-Mod}\rightleftarrows \mathcal{A}\operatorname{-Mod}:\Loc^\theta, \quad \Gamma^\theta: \mathcal{A}^\theta\operatorname{-mod}\rightleftarrows \mathcal{A}\operatorname{-mod}:\Loc^\theta.$$ For objects in $\mathcal{A}^\theta\operatorname{-mod}, \mathcal{A}\operatorname{-mod}$ we can define supports, those are closed $\C^\times$-stable subvarieties in $X,X_0$, respectively. For a subvariety $Y_0\subset X_0$, we write $\mathcal{A}\operatorname{-mod}_{Y_0}$ for the full subcategory of $\mathcal{A}\operatorname{-mod}$ consisting of all modules supported inside $Y_0$. Similarly, for $Y\subset X$, we consider the subcategory $\mathcal{A}^\theta\operatorname{-mod}_Y$. The functors $\Gamma^\theta, \Loc^\theta$ restrict to functors between the subcategories $\mathcal{A}^\theta\operatorname{-mod}_{\rho^{-1}(Y_0)}, \mathcal{A}\operatorname{-mod}_{Y_0}$.
The functors $\Gamma^\theta, \Loc^\theta$ admit derived functors $R\Gamma^\theta: D^b(\mathcal{A}^\theta\operatorname{-Mod})\rightarrow D^b(\mathcal{A}\operatorname{-Mod})$ (given by taking the \v{C}ech complex for a cover by affine open subsets) and $L\Loc^\theta: D^-(\mathcal{A}\operatorname{-Mod})\rightarrow D^-(\mathcal{A}^\theta\operatorname{-Mod})$. If the homological dimension of $\mathcal{A}$ is finite, we also have $L\Loc^\theta:D^b(\mathcal{A}\operatorname{-Mod})\rightarrow D^b(\mathcal{A}^\theta\operatorname{-Mod})$. Clearly, the functors $R\Gamma^\theta, L\Loc^\theta$ preserve the subcategories $D^?(\mathcal{A}^\theta\operatorname{-mod}), D^?(\mathcal{A}\operatorname{-mod})$ (as in the case of usual coherent sheaves, the former is identified with the full subcategory in $D^?(\mathcal{A}\operatorname{-Mod})$ with coherent homology). Also the functors $R\Gamma^\theta,L\Loc^\theta$ restrict to functors between the subcategories $D^?_{Y_0}(\mathcal{A}\operatorname{-mod})$ and $ D^?_{\rho^{-1}(Y_0)}(\mathcal{A}^\theta\operatorname{-mod})$, consisting of all complexes with homology supported on $Y_0, \rho^{-1}(Y_0)$.
Now let us suppose that $X:=\M^\theta(v,w)$ and so $X_0=\M(v,w)$. We will write $\Gamma_\lambda^\theta, R\Gamma_\lambda^\theta$ to indicate the dependence on the quantization parameter $\lambda$.
Let us recall some results on when (i.e., for which $\lambda$) the functors $\Gamma_\lambda^\theta, \Loc_\lambda^\theta$ are derived or abelian equivalences.
\begin{Prop}[\cite{MN_der}]\label{Prop:MN_der} The functor $R\Gamma_\lambda^\theta:D^b(\mathcal{A}_\lambda^\theta(v,w)\operatorname{-mod})\rightarrow D^b(\mathcal{A}_\lambda(v,w)\operatorname{-mod})$ is an equivalence of triangulated categories if and only if $\mathcal{A}_\lambda(v,w)$ has finite homological dimension. In this case, the inverse equivalence is given by $L\Loc_\lambda^\theta$. \end{Prop}
In the situation of the previous proposition, we say that the derived localization holds (for $\lambda$), such parameters are called {\it regular}. \cite[Conjecture 9.1]{BL} describes a precise locus of the singular (=non-regular) parameters $\lambda$ and we prove this conjecture, Theorem \ref{Thm:loc}.
We say that the abelian localization holds for $(\lambda,\theta)$ if $\Gamma_\lambda^\theta$ is an equivalence of abelian categories. The following result was proved in \cite[Corollary 5.12]{BPW} for arbitrary symplectic resolutions.
\begin{Prop}\label{Prop:abel_loc1} For any $\lambda$ there is $k_0\in \mathbb{Z}_{>0}$ such that the abelian localization holds for $(\lambda+k\theta,\theta)$ whenever $k\geqslant k_0$. \end{Prop}
There are also results of McGerty and Nevins, \cite{MN_ab}, that provide a sufficient condition for the functor $\Gamma_\lambda^\theta$ to be exact. We will elaborate on these results applied to the special case of $\mathcal{A}_\lambda(n,r)$ below. We will see that for $\mathcal{A}_\lambda(n,r)$ this sufficient condition is also necessary.
To conclude this subsection let us mention the characteristic cycle map. Suppose we are in the general case of a projective symplectic resolution $X\rightarrow X_0$. Then to a module $M'\in \mathcal{A}^\theta\operatorname{-mod}_{\rho^{-1}(0)}$ we can assign its characteristic cycle. By definition, it coincides with the sum of irreducible components of $\rho^{-1}(0)$ with multiplicities, where the multiplicity of a component equals the generic rank of $\operatorname{gr} M'$ on the component. Of course, the characteristic cycle defines a group homomorphism $\operatorname{CC}^\theta:K_0(\mathcal{A}^\theta\operatorname{-mod}_{\rho^{-1}(0)})\rightarrow H_{mid}(X)$. Now to a module $M\in \mathcal{A}\operatorname{-mod}_{fin}$ we can assign $\operatorname{CC}^\theta(\operatorname{Loc}_\lambda^\theta M)$. It was shown in \cite[3.2]{BL} that, in the quiver variety case, this map is actually independent of $\theta$. Here is another result that will be of crucial importance for us. This was proved in an unpublished work of Baranovsky and Ginzburg.
\begin{Prop}\label{Prop:CC_inject} The map $\operatorname{CC}^\theta$ is injective. \end{Prop}
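In symbols, the map of Proposition \ref{Prop:CC_inject} sends the class of $M'\in \mathcal{A}^\theta\operatorname{-mod}_{\rho^{-1}(0)}$ to
$$ \operatorname{CC}^\theta(M')=\sum_{Z}\operatorname{mult}_Z(\operatorname{gr}M')\,[Z]\in H_{mid}(X), $$
where $Z$ runs over the irreducible components of $\rho^{-1}(0)$, $\operatorname{gr}M'$ is taken with respect to a good filtration, and $\operatorname{mult}_Z$ denotes the generic rank along $Z$; the result is independent of the choice of good filtration.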
\subsection{Duality and wall-crossing functor}\label{SS_dual_WC} We still consider a conical projective resolution $X$ of $X_0$. Let us take a quantization $\mathcal{A}^\theta$ of $X$ that satisfies the abelian localization theorem. In this case the homological dimension of $\mathcal{A}$ (equivalently, of $\mathcal{A}^\theta$) does not exceed the homological dimension of $\mathcal{O}_X$ equal to $\dim X$.
It turns out that the dimension of the support of certain modules can be computed via a suitable functor: the homological duality. Namely, recall that the functor $D:=\operatorname{RHom}_{\mathcal{A}}(\bullet, \mathcal{A})[N]$, where $N=\frac{1}{2}\dim X$, defines an equivalence $D^b(\mathcal{A}\operatorname{-mod})\rightarrow D^b(\mathcal{A}^{opp}\operatorname{-mod})^{opp}$. Moreover, for a simple object $L$ in $\mathcal{A}\operatorname{-mod}$ we have $H_i(DL)=0$ if $i>N$ or $i<N-\operatorname{dim}\operatorname{Supp}L$, see \cite[4.2]{BL}. In particular, $L$ is finite dimensional if and only if $H_i(DL)=0$ for $i<N$.
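In terms of $\operatorname{Ext}$-groups (reading $H_i(DL)$ as the $i$th cohomology of the complex $DL$), the vanishing just quoted amounts to the familiar grade bounds
$$ \operatorname{Ext}^j_{\mathcal{A}}(L,\mathcal{A})=0 \quad \text{unless}\quad 2N-\dim \operatorname{Supp}L\leqslant j\leqslant 2N=\dim X. $$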
In the case when $\mathcal{A}=\mathcal{A}_{\hat{\lambda}}$, we can view $D$ as a functor $D^b(\mathcal{A}_{\hat{\lambda}}\operatorname{-mod}) \rightarrow D^b(\mathcal{A}_{-\hat{\lambda}}\operatorname{-mod})^{opp}$.
We will need a technical property of $D$.
\begin{Lem}\label{Lem:D_higher_small_supp} Let $L$ be a simple $\mathcal{A}_{\hat{\lambda}}$-module with $\dim \operatorname{Supp}L=\frac{1}{2}\dim X$. Then $H^i(DL)$ is supported on the complement of the open symplectic leaf of $X_0$ for $i>0$. \end{Lem} \begin{proof} As we know from Commutative Algebra, the irreducible components of the support of $\operatorname{Ext}^i(\operatorname{gr}L, \C[X_0])$ intersecting the open leaf have dimension smaller than $\frac{1}{2}\dim X$ provided $i>\frac{1}{2}\dim X$. Thanks to a standard spectral sequence for the homology of a filtered quotient, we see that the irreducible components of $\operatorname{Supp}H^i(DL)$ for $i>0$ intersecting the open leaf have dimension less than $\frac{1}{2}\dim X$. However, no nonzero $\mathcal{A}_{-\hat{\lambda}}$-module can have this property as the support of any $\mathcal{A}_{-\hat{\lambda}}$-module is a coisotropic subvariety by Gabber's theorem. \end{proof}
Now let us consider a different family of functors: wall-crossing functors. Below we will recall a connection of some of those functors with the duality introduced above; this connection was discovered in \cite[Section 4]{BL}. We will make some additional assumptions. Let us assume that all conical projective symplectic resolutions of $X_0$ are strictly semismall. Recall that $\rho:X\rightarrow X_0$ is called strictly semismall if all components of $X_0^i:=\{x\in X_0\mid \dim \rho^{-1}(x)=i\}$ have codimension $2i$ and all components of $\rho^{-1}(x)$ have the same dimension.
Pick $\chi\in \operatorname{Pic}(X)$. We can uniquely quantize the corresponding line bundle $\mathcal{O}(\chi)$ on $X$ to a $\mathcal{A}^\theta_{\hat{\lambda}+\chi}$-$\mathcal{A}^\theta_{\hat{\lambda}}$-bimodule to be denoted by $\mathcal{A}^{\theta}_{\hat{\lambda},\chi}$. Let $\mathcal{A}^{(\theta)}_{\hat{\lambda},\chi}$ denote the global sections. We remark that taking the tensor product with $\mathcal{A}^{\theta}_{\hat{\lambda},\chi}$ gives rise to an equivalence $\mathcal{T}^\theta_{\hat{\lambda},\chi}:\mathcal{A}_{\hat{\lambda}}^\theta\operatorname{-Mod}\rightarrow \mathcal{A}^{\theta}_{\hat{\lambda}+\chi}\operatorname{-Mod}$.
Pick a different stability condition $\theta'$ and $\hat{\lambda}'\in \hat{\lambda}+\mathbb{Z}^{Q_0}$ such that the abelian localization holds for $(\hat{\lambda}',\theta')$. We consider the wall-crossing functor $$\mathfrak{WC}_{\hat{\lambda}\rightarrow \hat{\lambda}'}:=\Gamma^{\theta'}_{\hat{\lambda}'}\circ \mathcal{T}^{\theta'}_{\hat{\lambda},\hat{\lambda}'-\hat{\lambda}}\circ L\operatorname{Loc}^{\theta'}_{\hat{\lambda}}:D^b(\mathcal{A}_{\hat{\lambda}}\operatorname{-mod}) \rightarrow D^b(\mathcal{A}_{\hat{\lambda}'}\operatorname{-mod}).$$ Here we write $\Gamma^{\theta'}_{\hat{\lambda}'}$ for the global section functor $\mathcal{A}_{\hat{\lambda}'}^{\theta'}\operatorname{-Mod}\rightarrow \mathcal{A}_{\hat{\lambda}'}\operatorname{-Mod}$. According to \cite[Proposition 6.29]{BPW}, we have $\mathfrak{WC}_{\hat{\lambda}\rightarrow \hat{\lambda}'}=\mathcal{A}_{\hat{\lambda},\hat{\lambda}'-\hat{\lambda}}^{(\theta')}\otimes^L_{\mathcal{A}_{\hat{\lambda}}}\bullet$. In the case of quiver varieties, we will write $\mathfrak{WC}_{\lambda\rightarrow \lambda'}$ for a functor $D^b(\mathcal{A}_\lambda(v,w)\operatorname{-Mod})\rightarrow D^b(\mathcal{A}_{\lambda'}(v,w)\operatorname{-Mod})$.
By a long wall-crossing functor we mean $\mathfrak{WC}_{\hat{\lambda}\rightarrow \hat{\lambda}'}$, where $(\hat{\lambda},\theta),(\hat{\lambda}',-\theta)$ satisfy the abelian localization. A connection between the contravariant duality and the long wall-crossing functor is as follows. We say that an $\mathcal{A}_{\hat{\lambda}}$-module $M$ is {\it strongly holonomic} if every nonempty intersection of $\operatorname{Supp}M$ with a symplectic leaf in $X_0$ is lagrangian in that leaf. Thanks to the assumption that $X\rightarrow X_0$ is strictly semismall, this is equivalent to the condition that $\rho^{-1}(\operatorname{Supp}M) \subset X$ is lagrangian. Set $\hat{\lambda}^-:=\hat{\lambda}-n\theta$ for sufficiently large $n$. The choice of $n$ guarantees that the abelian localization holds for $(\hat{\lambda}^-,-\theta)$.
Consider the subcategory $D^b_{shol}(\mathcal{A}_{\hat{\lambda}}\operatorname{-mod})\subset D^b(\mathcal{A}_{\hat{\lambda}}\operatorname{-mod})$ of all complexes with strongly holonomic homology. It is easy to see that $D$ restricts to an equivalence $D^b_{shol}(\mathcal{A}_{\hat{\lambda}}\operatorname{-mod})\xrightarrow{\sim} D^b_{shol}(\mathcal{A}_{-\hat{\lambda}}\operatorname{-mod})^{opp}$. On the other hand, $\mathfrak{WC}_{\hat{\lambda}\rightarrow \hat{\lambda}^-}$ restricts to an equivalence $D^b_{shol}(\mathcal{A}_{\hat{\lambda}}\operatorname{-mod})\xrightarrow{\sim} D^b_{shol}(\mathcal{A}_{\hat{\lambda}^-}\operatorname{-mod})$. The following result was proved in \cite[Section 4]{BL}.
\begin{Prop}\label{Prop:WC_vs_D} There is an equivalence $\iota: D^b_{shol}(\mathcal{A}_{\hat{\lambda}^-}\operatorname{-mod})\xrightarrow{\sim} D^b_{shol}(\mathcal{A}_{-\hat{\lambda}}\operatorname{-mod})^{opp}$ preserving the natural $t$-structures such that $\iota\circ \mathfrak{WC}_{\hat{\lambda}\rightarrow \hat{\lambda}^-}=D$. \end{Prop}
We will need a corollary of this proposition.
\begin{Cor}\label{Cor:full_supp} Let $M$ be a simple strongly holonomic $\mathcal{A}$-module. The following are equivalent: \begin{enumerate} \item $\dim \operatorname{Supp}M<\frac{1}{2}\dim X$. \item $M$ is annihilated by a nonzero proper two-sided ideal of $\mathcal{A}$. \end{enumerate} \end{Cor} \begin{proof} Let us note that the associated graded of a nonzero proper ideal in $\mathcal{A}$ is a Poisson ideal in $\C[X_0]$. So its associated variety does not intersect the open leaf in $X_0$. Obviously, the support of $M$ is contained in that associated variety. Since $M$ is strongly holonomic, the implication (2)$\Rightarrow$(1) follows.
Let us prove (1)$\Rightarrow$(2). Consider the $\mathcal{A}=\mathcal{A}_{\hat{\lambda}}$-bimodule $\mathcal{D}:=\mathcal{A}_{\hat{\lambda}^-, n\theta}^{(\theta)} \otimes_{\mathcal{A}_{\hat{\lambda}^-}}\mathcal{A}_{\hat{\lambda},-n\theta}^{(-\theta)}$. Obviously, $H_0(\mathfrak{WC}_{\hat{\lambda}^-\rightarrow \hat{\lambda}}\circ \mathfrak{WC}_{\hat{\lambda}\rightarrow \hat{\lambda}^-}M)=\mathcal{D}\otimes_{\mathcal{A}_{\hat{\lambda}}}M$. There is a natural homomorphism $\mathcal{D}\rightarrow \mathcal{A}_{\hat{\lambda}}$ (compare to \cite[(5.15)]{BL}) that becomes an isomorphism after microlocalization to the open leaf of $X_0$ because $\rho$ is an isomorphism over the open leaf. So the image is a nonzero ideal in $\mathcal{A}_{\hat{\lambda}}$, say $J$. If $M$ is not annihilated by that ideal, we see that $H_0(\mathfrak{WC}_{\hat{\lambda}\rightarrow \hat{\lambda}^-}M)\neq 0$. It follows that $\dim\operatorname{Supp}M=\frac{1}{2}\dim X$. (1)$\Rightarrow$(2) is proved. \end{proof}
We will also need a straightforward corollary of Lemma \ref{Lem:D_higher_small_supp} for strongly holonomic modules.
\begin{Cor}\label{Cor:D_higher_codim} Let $L$ be a simple strongly holonomic $\mathcal{A}_{\hat{\lambda}}$-module. Then $\dim \operatorname{Supp}H^i(DL)<\frac{1}{2}\dim X$ provided $i>0$. \end{Cor}
\subsection{Categories $\mathcal{O}$}\label{SS_Cat_O} Basically, all results of this section can be found in \cite{BLPW}.
First of all, let $\mathcal{A}$ be an associative algebra equipped with a rational action $\alpha$ of $\C^\times$ by algebra automorphisms. Then we can consider the eigendecomposition $\mathcal{A}=\bigoplus_{i\in \mathbb{Z}}\mathcal{A}_i$ and set $\mathcal{A}_{\geqslant 0}:=\bigoplus_{i\geqslant 0}\mathcal{A}_i, \mathcal{A}_{>0}:=\bigoplus_{i>0}\mathcal{A}_i$ and $\mathsf{C}_\alpha(\mathcal{A}):=\mathcal{A}_{\geqslant 0}/(\mathcal{A}_{\geqslant 0}\cap \mathcal{A}\mathcal{A}_{>0})$. We remark that $\mathsf{C}_\alpha(\mathcal{A})$ is an algebra because $\mathcal{A}\mathcal{A}_{>0}\cap \mathcal{A}_{\geqslant 0}$ is a two-sided ideal in $\mathcal{A}_{\geqslant 0}$. Note that $\mathsf{C}_\alpha$ is a functor from the category of algebras equipped with a $\C^\times$-action to the category of algebras; we call it the {\it Cartan functor}. This name is justified by the observation that if $\mathcal{A}=U(\mathfrak{g})$ for a semisimple Lie algebra $\mathfrak{g}$ and $\alpha$ comes from a regular one-parameter subgroup of the adjoint group of $\mathfrak{g}$, then $\mathsf{C}_\alpha(\mathcal{A})$ is the universal enveloping algebra of the corresponding Cartan subalgebra.
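To illustrate the definition in the simplest classical case (this computation is only an illustration and is not used below), take $\mathcal{A}=U(\mathfrak{sl}_2)$ with the standard basis $e,h,f$ and let $\alpha$ be the conjugation action by $\operatorname{diag}(t,t^{-1})$, so that $e,h,f$ have weights $2,0,-2$. By the PBW theorem, $\mathcal{A}\mathcal{A}_{>0}=U(\mathfrak{sl}_2)e$, and hence $$\mathsf{C}_\alpha(U(\mathfrak{sl}_2))=\mathcal{A}_{\geqslant 0}/(\mathcal{A}_{\geqslant 0}\cap U(\mathfrak{sl}_2)e)\cong \C[h]=U(\mathfrak{h}),$$ in agreement with the observation above.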
Let $X$ be a symplectic resolution of $X_0$ and $\mathcal{A}^\theta$ be a quantization of $X$. We assume that $X$ is equipped with a compatible Hamiltonian $\C^\times$-action $\alpha$. We require that the action on $X$ commutes with the contracting $\C^\times$-action. It is not difficult to see that $\alpha$ lifts to a Hamiltonian $\C^\times$-action on $\mathcal{A}^\theta$ again denoted by $\alpha$. This action preserves the filtration and the $\mathbb{Z}/d\mathbb{Z}$-grading. Let $h_\alpha\in \mathcal{A}$ denote the image of $1$ under the quantum comoment map for $\alpha$.
Consider the category $\mathcal{O}(\mathcal{A})$ consisting of all finitely generated modules with locally finite action of $h_\alpha,\mathcal{A}_{>0}$. The action of $\mathcal{A}_{>0}$ is automatically locally nilpotent. If $\alpha(\C^\times)$ has finitely many fixed points, this definition coincides with the definition of the category $\mathcal{O}_a$ from \cite[3.2]{BLPW} (this is because the algebra $\mathsf{C}_\alpha(\mathcal{A})$ is finite dimensional, which is proved analogously to \cite[3.1.4]{GL}).
The category $\mathcal{O}(\mathcal{A})$ has analogs of Verma modules. More precisely, there is an induction functor $\mathsf{C}_\alpha(\mathcal{A})\operatorname{-mod}\rightarrow \mathcal{O}(\mathcal{A}), M^0\mapsto \Delta(M^0):=\mathcal{A}\otimes_{\mathcal{A}_{\geqslant 0}}M^0$, where $M^0$ is viewed as an $\mathcal{A}_{\geqslant 0}$-module via the projection $\mathcal{A}_{\geqslant 0}\twoheadrightarrow \mathsf{C}_\alpha(\mathcal{A})$. By a Verma module, we mean $\Delta(M^0)$ with simple $M^0$.
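For instance, in the classical example above (again only as an illustration), for $\mathcal{A}=U(\mathfrak{g})$ and $\alpha$ a regular one-parameter subgroup we have $\mathsf{C}_\alpha(\mathcal{A})\cong U(\mathfrak{h})$, and for a one-dimensional $U(\mathfrak{h})$-module $\C_\mu$ the induction functor recovers the classical Verma module: $$\Delta(\C_\mu)=U(\mathfrak{g})\otimes_{\mathcal{A}_{\geqslant 0}}\C_\mu\cong U(\mathfrak{g})\otimes_{U(\mathfrak{b})}\C_\mu.$$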
Now let us consider the case when $\alpha(\C^\times)$ has finitely many fixed points. For $\hat{\lambda}\in H^2(X)$ lying in a Zariski open subset, the algebra $\mathsf{C}_\alpha(\mathcal{A}_{\hat{\lambda}})$ is naturally isomorphic to $\C[X^{\alpha(\C^\times)}]$, see \cite[5.1]{BLPW}. In this case, for $p\in X^{\alpha(\C^\times)}$, we will write $\Delta_{\hat{\lambda}}(p)$ for the corresponding Verma module.
Now let us define the category $\mathcal{O}$ for $\mathcal{A}^\theta$ following \cite[3.3]{BLPW}. Let $Y$ stand for the contracting locus for $\alpha$, i.e., the subvariety of all points $x\in X$ such that $\lim_{t\rightarrow 0} \alpha(t)x$ exists (and automatically lies in $X^{\alpha(\C^\times)}$). We remark that $Y$ is a lagrangian subvariety in $X$ stable with respect to the contracting $\C^\times$-action. We also remark that $Y=\rho^{-1}(Y_0)$, where $Y_0$ stands for the contracting locus of the $\C^\times$-action on $X_0$ induced by $\alpha$. This is because, under our assumptions on $X^{\alpha(\C^\times)}$, the fixed point set $X_0^{\alpha(\C^\times)}$ is a single point and because $\rho$ is proper. We remark that if $X\rightarrow X_0$ is strictly semismall, then every module in $\mathcal{O}(\mathcal{A})$ is strongly holonomic. This is because $Y=\rho^{-1}(Y_0)$ is lagrangian.
By definition, the category $\mathcal{O}(\mathcal{A}^\theta)$ consists of all modules $\M\in \mathcal{A}^\theta\operatorname{-mod}_Y$ that admit a global good $h_\alpha$-stable filtration.
We write $D^b_{\mathcal{O}}(\mathcal{A}_{\hat{\lambda}}), D^b_{\mathcal{O}}(\mathcal{A}_{\hat{\lambda}}^\theta)$ for the categories of all complexes (in the corresponding derived categories) with homology in the categories $\mathcal{O}$.
Let us summarize some properties of categories $\mathcal{O}(\mathcal{A}^\theta),\mathcal{O}(\mathcal{A})$.
\begin{Prop}\label{Prop:O_prop} Assume that the action $\alpha$ has finitely many fixed points. \begin{enumerate} \item We have $\Gamma^\theta(\mathcal{O}(\mathcal{A}^\theta))\subset \mathcal{O}(\mathcal{A}),\Loc^\theta(\mathcal{O}(\mathcal{A}))\subset \mathcal{O}(\mathcal{A}^\theta), R\Gamma^\theta(D^b_{\mathcal{O}}(\mathcal{A}^\theta\operatorname{-mod}))\subset D^b_{\mathcal{O}}(\mathcal{A}\operatorname{-mod}),$ $L\operatorname{Loc}^\theta(D^b_{\mathcal{O}}(\mathcal{A}\operatorname{-mod}))
\subset D^b_{\mathcal{O}}(\mathcal{A}^\theta\operatorname{-mod})$. \item The functor $\mathcal{A}_{\hat{\lambda},\chi}^{\theta}\otimes_{\mathcal{A}_{\hat{\lambda}}^\theta}\bullet$ maps $\mathcal{O}(\mathcal{A}_{\hat{\lambda}}^\theta)$ to $\mathcal{O}(\mathcal{A}_{\hat{\lambda}+\chi}^\theta)$. \item The categories $\mathcal{O}(\mathcal{A}),\mathcal{O}(\mathcal{A}^\theta)$ are length categories, i.e., all objects have finite length. \item $\mathcal{O}(\mathcal{A})\subset \mathcal{A}\operatorname{-mod}, \mathcal{O}(\mathcal{A}^\theta)\subset \mathcal{A}^\theta\operatorname{-mod}$ are Serre subcategories. \item All modules in $\mathcal{O}(\mathcal{A}), \mathcal{O}(\mathcal{A}^\theta)$ can be made weakly $\alpha(\C^\times)$-equivariant. \item Conversely, all weakly $\alpha(\C^\times)$-equivariant modules in $\mathcal{A}\operatorname{-mod}_{Y_0}$ (resp., $\mathcal{A}^\theta\operatorname{-mod}_{Y}$) are in $\mathcal{O}(\mathcal{A})$ (resp., in $\mathcal{O}(\mathcal{A}^\theta)$). \end{enumerate} \end{Prop} \begin{proof} (1)-(4) were established in \cite[Section 3]{BLPW}. The proof of (5) for $\mathcal{O}(\mathcal{A})$ is standard: we decompose a module in $\mathcal{O}(\mathcal{A})$ into the direct sum of submodules according to the class of eigenvalues of $h_\alpha$ modulo $\mathbb{Z}$. It is easy to introduce a weakly equivariant structure on each summand.
Since the localization functor is $\alpha(\C^\times)$-equivariant, we see that $\operatorname{Loc}^\theta(M)$ can be made weakly $\alpha(\C^\times)$-equivariant. So (5) for $\mathcal{O}(\mathcal{A}_{\hat{\lambda}}^\theta)$ is true provided the abelian localization holds for $\hat{\lambda}$. So it also holds for $\hat{\lambda}+\chi$ for any integral $\chi$. This is because of (2) and the observation that $\mathcal{A}_{\hat{\lambda},\chi}^\theta$ is $\alpha(\C^\times)$-equivariant. Now our claim follows from Proposition \ref{Prop:abel_loc1}.
Let us prove (6). Again, thanks to the equivariance of the localization functor, it is enough to prove the claim for $\mathcal{A}$. Pick a weakly $\alpha(\C^\times)$-equivariant module $M\in \mathcal{A}\operatorname{-mod}_{Y_0}$ and let $M=\bigoplus_{i\in \mathbb{Z}}M_i$ be the eigen-decomposition for the $\alpha(\C^\times)$-action. Since $M$ is supported on $Y_0$, we see that $M_i=0$ for $i\gg 0$. Since $M$ is finitely generated, it is easy to see that all weight spaces are finite dimensional. It follows that $\mathcal{A}_{>0}$ acts on $M$ locally nilpotently (it increases the weight, while the weights of $M$ are bounded from above) and that $h_\alpha$ acts locally finitely (it preserves the finite dimensional weight spaces). Our claim follows. \end{proof}
Now let us discuss highest weight structure on $\mathcal{O}(\mathcal{A}^\theta)$. Consider the so-called geometric order on $X^{\alpha(\C^\times)}$ defined as follows. For $p\in X^{\alpha(\C^\times)}$ set $Y_p:=\{x\in X| \lim_{t\rightarrow 0}\alpha(t)x=p\}$, the contracting locus of $p$, so that we have $Y=\bigsqcup_{p\in X^{\alpha(\C^\times)}}Y_p$. We consider the order $\leqslant^\theta$ on $X^{\alpha(\C^\times)}$ obtained as the transitive closure of the relation: $p\leqslant^\theta p'$ if $p\in \overline{Y}_{p'}$. We write $Y_{\leqslant p'}$ for $\bigsqcup_{p\leqslant^\theta p'}Y_p$ and $Y_{<p'}$ for $Y_{\leqslant p'}\setminus Y_{p'}$. It was shown in \cite[Section 5]{BLPW} that $\mathcal{O}(\mathcal{A}^\theta)$ is a highest weight category with respect to this order. The standard object $\Delta_{\hat{\lambda}}^\theta(p)$ corresponding to $p$ is the unique indecomposable projective object in $\mathcal{O}(\mathcal{A}^\theta)\cap \mathcal{A}^\theta\operatorname{-mod}_{Y_{\leqslant p}}$ that is not contained in $\mathcal{A}^\theta\operatorname{-mod}_{Y_{<p}}$.
If the abelian localization holds for $(\hat{\lambda},\theta)$, then $\mathcal{O}(\mathcal{A}_{\hat{\lambda}})$ is also a highest weight category. Further, if we assume, in addition, that $\hat{\lambda}$ lies in a Zariski open subset, then the standard objects in $\mathcal{O}(\mathcal{A}_{\hat{\lambda}})$ are precisely the Verma modules, see \cite[5.2]{BLPW}.
Now let us examine a connection between various derived categories associated to $\mathcal{O}(\mathcal{A}_{\hat{\lambda}}),\mathcal{O}(\mathcal{A}_{\hat{\lambda}}^\theta)$. This is basically the content of the appendix to \cite{BLPW} written by the author.
\begin{Lem}\label{Lem:Ext_coinc} The natural functor $D^b(\mathcal{O}(\mathcal{A}_{\hat{\lambda}}^\theta))\rightarrow D^b_{\mathcal{O}}(\mathcal{A}_{\hat{\lambda}}^\theta\operatorname{-mod})$ is an equivalence. Furthermore, if $\hat{\lambda}\in H^2(X)$ lies in a suitable Zariski open subset and the abelian localization holds for $(\hat{\lambda},\theta)$, then $D^b(\mathcal{O}(\mathcal{A}_{\hat{\lambda}}))\rightarrow D^b_{\mathcal{O}}(\mathcal{A}_{\hat{\lambda}}\operatorname{-mod})$ is an equivalence. \end{Lem}
We are going to identify $D^b(\mathcal{O}(\mathcal{A}_{\hat{\lambda}}^\theta))$ with $D^b_{\mathcal{O}}(\mathcal{A}_{\hat{\lambda}}^\theta\operatorname{-mod})$ and $D^b(\mathcal{O}(\mathcal{A}_{\hat{\lambda}}))$ with $D^b_{\mathcal{O}}(\mathcal{A}_{\hat{\lambda}})$.
In the case of $\mathcal{A}_{\lambda}(n,r)$ we consider the categories $\mathcal{O}$ defined for the action of a generic one-dimensional subtorus in $\GL(r)\times \C^\times$. Then the fixed points are in one-to-one correspondence with the $r$-multipartitions of $n$. Different choices of a generic torus may give rise to different categories $\mathcal{O}$.
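Let us recall how this correspondence looks in the most familiar case (a standard description, included only for the reader's convenience): for $r=1$ the variety $\M^\theta(n,1)$ is the Hilbert scheme of $n$ points in $\C^2$, the torus fixed points are the monomial ideals of colength $n$, and these are labeled by the partitions of $n$. For general $r$ a fixed point splits into an $r$-tuple of such monomial ideals with colengths adding up to $n$, that is, an $r$-multipartition of $n$.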
\subsection{Harish-Chandra bimodules}\label{SS_HC_bimod} Let us recall some basics on Harish-Chandra (shortly, HC) bimodules. Let $\mathcal{A}^\theta,\mathcal{A}'^\theta$ be two quantizations of $X$ in the sense of Subsection \ref{SS_quant} and $\mathcal{A},\mathcal{A}'$ be their global sections. For technical reasons, we make a restriction on the contracting $\C^\times$-action $\beta$ on $X$. Namely, we require that there are commuting actions $\underline{\beta}$ and $\gamma$ of $\C^\times$ on $X$ with the following properties: \begin{itemize} \item $\beta=\underline{\beta}^d \gamma$, \item $\gamma$ is Hamiltonian. \end{itemize} Then $\gamma$ lifts to $\mathcal{A}^\theta,\mathcal{A}'^\theta$ and these sheaves acquire new filtrations, coming from $\underline{\beta}$. In the remainder of the section we consider $\mathcal{A},\mathcal{A}'$ with these new filtrations. Let us point out that $X=\M^\theta(v,w)$ does satisfy our additional condition: we can take $\underline{\beta}$ induced by the action $t.(r,\alpha)=(r,t^{-1}\alpha)$.
Let us recall the definition of a HC $\mathcal{A}$-$\mathcal{A}'$-bimodule. By definition, this is a finitely generated $\mathcal{A}$-$\mathcal{A}'$-bimodule $\mathcal{B}$ with a filtration that is compatible with those on $\mathcal{A},\mathcal{A}'$ such that the left and right actions of $\C[X_0]$ on $\operatorname{gr}\mathcal{B}$ coincide, so that $\operatorname{gr}\mathcal{B}$ is a finitely generated $\C[X_0]$-module. By a homomorphism of Harish-Chandra bimodules we mean a bimodule homomorphism. The category of HC $\mathcal{A}'$-$\mathcal{A}$-bimodules is denoted by $\HC(\mathcal{A}'\text{-}\mathcal{A})$. We also consider the full subcategory $D^b_{HC}(\mathcal{A}'\text{-}\mathcal{A})$ of the derived category of $\mathcal{A}'$-$\mathcal{A}$-bimodules with Harish-Chandra homology.
By the associated variety of a HC bimodule $\mathcal{B}$ (denoted by $\operatorname{V}(\mathcal{B})$) we mean the support in $X_0$ of the coherent sheaf $\operatorname{gr}\mathcal{B}$, where the associated graded is taken with respect to a filtration as in the previous paragraph (below we call such filtrations {\it good}). It is easy to see that $\operatorname{gr}\mathcal{B}$ is a Poisson $\C[X_0]$-module, so $\operatorname{V}(\mathcal{B})$ is a union of symplectic leaves.
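To give the most basic examples (included only for orientation): the regular bimodule $\mathcal{A}$ itself, with its standard filtration, is a HC $\mathcal{A}$-$\mathcal{A}$-bimodule with $\operatorname{V}(\mathcal{A})=X_0$, and, more generally, $\mathcal{A}/\mathcal{I}$ is a HC $\mathcal{A}$-$\mathcal{A}$-bimodule for every two-sided ideal $\mathcal{I}\subset \mathcal{A}$ (with the induced filtration), its associated variety being the zero locus of $\operatorname{gr}\mathcal{I}$ in $X_0$.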
Using associated varieties and the finiteness of the number of leaves, it is easy to prove the following standard result.
\begin{Lem}\label{Lem:fin_length} Any HC bimodule has finite length. \end{Lem}
For $\mathcal{B}^1\in \HC(\mathcal{A}'\text{-}\mathcal{A})$ and $\mathcal{B}^2\in \HC(\mathcal{A}''\text{-}\mathcal{A}')$ we can take their tensor product $\mathcal{B}^2\otimes_{\mathcal{A}'}\mathcal{B}^1$. This is easily seen to be a HC $\mathcal{A}''$-$\mathcal{A}$-bimodule. Also the derived tensor product of the objects from $D^b_{HC}(\mathcal{A}''\text{-}\mathcal{A}'),D^b_{HC}(\mathcal{A}'\text{-}\mathcal{A})$ lies in $D^-_{HC}(\mathcal{A}''\text{-}\mathcal{A})$ (and in $D^b_{HC}(\mathcal{A}''\text{-}\mathcal{A})$ provided $\mathcal{A}'$ has finite homological dimension).
\subsection{Quantum slices}\label{SS_quant_slice} Let $X\rightarrow X_0$ be a conical symplectic resolution, where the contracting $\C^\times$-action satisfies the additional assumptions imposed in the previous subsection. Pick a quantization $\mathcal{A}^\theta$ of $X$. We write $\mathcal{A}_\hbar^\theta$ for the $\hbar$-adic completion of the Rees sheaf $R_\hbar(\mathcal{A}^\theta)$ of $\mathcal{A}^\theta$ (with respect to the action $\underline{\beta}$ so that $t.\hbar=t\hbar$). We write $\mathcal{A}_\hbar$ for the algebra of the global sections of $\mathcal{A}^\theta_\hbar$; this is the $\hbar$-adic completion of the Rees algebra of $\mathcal{A}$.
Pick a point $x\in X_0$. Then we can form the completions $\mathcal{A}_{\hbar}^{\wedge_x}$ of $\mathcal{A}_\hbar$ at $x$ and $\mathcal{A}_\hbar^{\theta\wedge_x}$ of $\mathcal{A}_\hbar^{\theta}$ along $\rho^{-1}(x)$. Consider the homogenized Weyl algebra $\mathbb{A}_\hbar$ for the tangent space to the symplectic leaf through $x$. Then we have an embedding $\mathbb{A}_\hbar^{\wedge_0}\hookrightarrow \mathcal{A}_\hbar^{\wedge_x}$, see \cite[2.1]{W-prim} for a proof. It was checked in \cite[2.1]{W-prim} that we have the tensor product decomposition $\mathcal{A}_{\hbar}^{\wedge_x}=\mathbb{A}_\hbar^{\wedge_0}\widehat{\otimes}_{\C[[\hbar]]}\mathcal{A}_\hbar'$ that lifts the decomposition $X^{\wedge_x}\cong D\times X'$ mentioned in Subsection \ref{SS_leaves}. The algebra $\mathcal{A}_\hbar'$ is independent of the choices up to an isomorphism, as was explained in \cite[2.1]{W-prim}. For a similar reason, we have a decomposition $\mathcal{A}_{\hbar}^{\theta\wedge_x}\cong \mathbb{A}_\hbar^{\wedge_0}\widehat{\otimes}_{\C[[\hbar]]}\mathcal{A}_\hbar'^{\theta}$, where $\mathcal{A}_\hbar'^{\theta}$ is a formal quantization of the slice $X'$. By the construction, $\mathcal{A}_\hbar'=\Gamma(\mathcal{A}_\hbar'^{\theta})$.
Now suppose that $\mathcal{A}^\theta$ has period $\hat{\lambda}$. Then the period of $\mathcal{A}_\hbar^{\theta\wedge_x}$ coincides with the image $\hat{\lambda}'$ of $\hat{\lambda}$ under the natural map between the \v{C}ech-De Rham cohomology groups $H^2(X)\rightarrow H^2(X^{\wedge_x})=H^2(X')$. It follows that ${\mathcal{A}'^\theta_\hbar}$ also has period $\hat{\lambda}'$.
Assume that $X'$ is again equipped with a contracting $\C^\times$-action with the same integer $d$ satisfying the additional restriction in Subsection \ref{SS_HC_bimod}; this holds in the quiver variety setting, for example. So $X'$ is the formal neighborhood at $0$ of a conical symplectic resolution $\underline{X}$. The formal quantization $\mathcal{A}'^\theta_\hbar$ is homogeneous (by the results of \cite[2.3]{quant}). It follows that it is obtained by completion at $0$ of $\underline{\mathcal{A}}_\hbar^\theta$ for some quantization $\underline{\mathcal{A}}^\theta$ of $\underline{X}$.
So the product $\mathcal{A}_{\hbar}^{\theta\wedge_x}=\mathbb{A}_\hbar^{\wedge_0}\widehat{\otimes}_{\C[[\hbar]]}{\mathcal{A}_\hbar'^{\theta}}$ comes equipped with a $\C^\times$-action by algebra automorphisms satisfying $t.\hbar=t\hbar$. On the other hand, the $\C^\times$-action on $\mathcal{A}_\hbar^{\theta}$ produces a derivation of $\mathcal{A}_\hbar^{\theta\wedge_x}$. The difference between this derivation and the one produced by the $\C^\times$-action on $\mathcal{A}_\hbar^{\theta\wedge_x}$ has the form $\frac{1}{\hbar}[a,\cdot]$ for some $a\in \mathcal{A}_\hbar^{\wedge_x}$, see \cite[Lemma 5.7]{BL}.
Let us now elaborate on the Gieseker case, which was already considered (in a more general quiver variety case) in \cite[5.4]{BL}. Recall that the symplectic leaves in $\bar{\M}(n,r)$ are parameterized by partitions $(n_1,\ldots,n_k)$ with $n_1+\ldots+n_k\leqslant n$. For $x$ in the corresponding leaf, we have $$\underline{\bar{\mathcal{A}}}_{\lambda}(n,r)= \bar{\mathcal{A}}_{\lambda}(n_1,r)\otimes\bar{\mathcal{A}}_{\lambda}(n_2,r)\otimes\ldots\otimes\bar{\mathcal{A}}_{\lambda}(n_k,r).$$ Also we have a similar decomposition for $\underline{\bar{\mathcal{A}}}_{\lambda}^\theta(n,r)$.
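To unpack the displayed decomposition in two extreme cases (a direct specialization, recorded for convenience): for the leaf labeled by $(1,\ldots,1)$ with $n$ ones we get $\underline{\bar{\mathcal{A}}}_{\lambda}(n,r)=\bar{\mathcal{A}}_{\lambda}(1,r)^{\otimes n}$, while for the leaf labeled by $(n)$ we simply get $\underline{\bar{\mathcal{A}}}_{\lambda}(n,r)=\bar{\mathcal{A}}_{\lambda}(n,r)$.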
\subsection{Restriction functors for HC bimodules}\label{SS_restr_fun} In this subsection, we will recall restriction functors $\bullet_{\dagger,x}:\HC(\mathcal{A}'\text{-}\mathcal{A})\rightarrow \HC(\underline{\mathcal{A}}'\text{-}\underline{\mathcal{A}})$, where $\underline{\mathcal{A}}',\underline{\mathcal{A}}$ are slice algebras for $\mathcal{A}',\mathcal{A}$, respectively (it is in order for these functors to behave nicely that we have introduced our additional technical assumption on the contracting $\C^\times$-action). These functors were defined in \cite[Section 5]{BL} in the case when $\mathcal{A}',\mathcal{A}$ are of the form $\mathcal{A}_\lambda(v,w)$, but the general case (under the assumption that the slice to $x$ is conical and the contracting action satisfies the additional assumption) is completely analogous. We will need several facts about the restriction functors established in \cite[Section 5]{BL}.
\begin{Prop}\label{Prop:dagger_prop} The following is true. \begin{enumerate} \item The functor $\bullet_{\dagger,x}$ is exact and $H^2(X)$-linear. \item The associated variety $\operatorname{V}(\mathcal{B}_{\dagger,x})$ is uniquely characterized by the property $D\times \operatorname{V}(\mathcal{B}_{\dagger,x})^{\wedge_0}=\operatorname{V}(\mathcal{B})^{\wedge_x}$. \item The functor $\bullet_{\dagger,x}$ intertwines the Tor's: for $\mathcal{B}^1\in \HC(\mathcal{A}''\text{-}\mathcal{A}')$ and $\mathcal{B}^2\in \HC(\mathcal{A}'\text{-}\mathcal{A})$ we have a natural isomorphism $\operatorname{Tor}_i^{\mathcal{A}'}(\mathcal{B}^1,\mathcal{B}^2)_{\dagger,x}=\operatorname{Tor}_i^{\underline{\mathcal{A}}'}(\mathcal{B}^1_{\dagger,x}, \mathcal{B}^2_{\dagger,x})$. \end{enumerate} \end{Prop} \section{Parabolic induction}\label{S_parab_ind} The first goal of this section is to elaborate on the Cartan functor that appeared in Subsection \ref{SS_Cat_O}. There we were basically dealing with the case when the Hamiltonian action only has finitely many fixed points. Here we consider a more general case and our goal is to better understand the structure of $\mathsf{C}_\alpha(\mathcal{A})$. The second goal is to introduce parabolic induction for categories $\mathcal{O}$.
Not surprisingly, the case of actions on smooth symplectic (even non-affine) varieties is easier to understand. We extend the definition of $\mathsf{C}_\alpha$ to sheaves in Subsection \ref{S_Cart_fun}. There we show that if $\mathcal{A}^\theta$ is a quantization of $X$, then $\mathsf{C}_\alpha(\mathcal{A}^\theta)$ is a quantization of $X^{\alpha(\C^\times)}$. In Subsection \ref{SS_Ca_sheaf_vs_alg} we compare $\mathsf{C}_\alpha(\mathcal{A})$ (an algebra which is hard to understand directly) with $\Gamma(\mathsf{C}_\alpha(\mathcal{A}^\theta))$ in the case of symplectic resolutions. We will see that, for a Zariski generic quantization parameter, the two algebras coincide. Next, in Subsection \ref{SS_Ca_param}, we determine the quantization parameter (=period) of $\mathsf{C}_\alpha(\mathcal{A}^\theta)$ from that of $\mathcal{A}^\theta$. Subsection \ref{SS_Ca_Gies} applies this result to some particular action $\alpha$ in the Gieseker case.
Finally, in Subsection \ref{SS_parab_induc} we introduce parabolic induction.
\subsection{Cartan functor for sheaves}\label{S_Cart_fun} We start with a symplectic variety $X$ equipped with a $\C^\times$-action that rescales the symplectic form and also with a commuting Hamiltonian action $\alpha$. Of course, it still makes sense to speak about quantizations of $X$ that are Hamiltonian for $\alpha$.
We want to construct a quantization $\mathsf{C}_\alpha(\mathcal{A}^{\theta})$ of $X^{\alpha(\C^\times)}$ starting from a Hamiltonian quantization $\mathcal{A}^\theta$ of $X$.
The variety $X$ can be covered by $(\C^{\times})^{2}$-stable open affine subvarieties. Pick such a subvariety $X'$ with $(X')^{\alpha(\C^\times)}\neq \varnothing$. Define $\mathsf{C}_\alpha(\mathcal{A}^\theta)(X'^{\alpha(\C^\times)})$ as $\mathsf{C}_\alpha(\mathcal{A}^\theta(X'))$. We remark that the open subsets of the form $(X')^{\alpha(\C^\times)}$ form a base of the Zariski topology on $X^{\alpha(\C^\times)}$.
The following proposition defines the sheaf $\mathsf{C}_\alpha(\mathcal{A}^\theta)$.
\begin{Prop}\label{Prop:A0} The following holds. \begin{enumerate} \item Suppose that the contracting $\alpha$-locus in $X'$ is a complete intersection defined by homogeneous (for $\alpha(\C^\times)$) equations of positive weight. Then the algebra $\mathsf{C}_\alpha(\mathcal{A}^\theta(X'))$ is a quantization of $\C[X'^{\alpha(\C^\times)}]$. \item There is a unique sheaf $\mathsf{C}_\alpha(\mathcal{A}^\theta)$ on $X^{\alpha(\C^\times)}$ whose sections on $X'^{\alpha(\C^\times)}$ with $X'$ as above coincide with $\mathsf{C}_\alpha(\mathcal{A}^\theta(X'))$. This sheaf is a quantization of $X^{\alpha(\C^\times)}$. \item If $X'$ is a $(\C^\times)^2$-stable affine subvariety, then $\mathsf{C}_\alpha(\mathcal{A}^\theta)(X'^{\alpha(\C^\times)})=\mathsf{C}_\alpha(\mathcal{A}^\theta(X'))$. \end{enumerate} \end{Prop} \begin{proof} Let us prove (1). To simplify the notation, we write $\mathcal{A}$ for $\mathcal{A}^\theta(X')$. The algebra $\mathcal{A}$ is Noetherian because it is complete and separated with respect to a filtration whose associated graded is Noetherian. Let us show that $\operatorname{gr}\mathcal{A}\mathcal{A}_{>0}=\C[X']\C[X']_{>0}$; this will complete the proof of (1).
In the proof it is more convenient to deal with $\hbar$-adically completed homogenized quantizations. Namely, let $\mathcal{A}_\hbar$ stand for the $\hbar$-adic completion of $R_\hbar(\mathcal{A})$. The claim that $\operatorname{gr}\mathcal{A}\mathcal{A}_{>0}=\C[X']\C[X']_{>0}$ is equivalent to the condition that $\mathcal{A}_\hbar\mathcal{A}_{\hbar,>0}$ is $\hbar$-saturated meaning that $\hbar a\in \mathcal{A}_{\hbar}\mathcal{A}_{\hbar,>0}$ implies that $a\in \mathcal{A}_{\hbar}\mathcal{A}_{\hbar,>0}$.
Recall that we assume that there are $\alpha$-homogeneous elements $f_1,\ldots,f_k\in \C[X']_{>0}$ that form a regular sequence generating the ideal $\C[X']\C[X']_{>0}$. We can lift those elements to homogeneous $\tilde{f}_1,\ldots, \tilde{f}_k\in \mathcal{A}_{\hbar,>0}$. We claim that these elements still generate $\mathcal{A}_\hbar\mathcal{A}_{\hbar,>0}$. Indeed, it is enough to check that $\mathcal{A}_{\hbar,>0}\subset \operatorname{Span}_{\mathcal{A}_\hbar}(\tilde{f}_1,\ldots,\tilde{f}_k)$. For a homogeneous element $f\in \mathcal{A}_{\hbar,>0}\setminus \hbar\mathcal{A}_{\hbar}$ we can find homogeneous elements $g_1,\ldots,g_k$ such that $f-\sum_{i=1}^k g_i\tilde{f}_i$ still has the same $\alpha(\C^\times)$-weight and is divisible by $\hbar$. Divide by $\hbar$ and repeat the argument. Since the $\hbar$-adic topology is complete and separated, we see that $f\in \operatorname{Span}_{\mathcal{A}_\hbar}(\tilde{f}_1,\ldots,\tilde{f}_k)$. So it is enough to check that $\operatorname{Span}_{\mathcal{A}_\hbar}(\tilde{f}_1,\ldots,\tilde{f}_k)$ is $\hbar$-saturated.
Pick elements $\tilde{h}_1,\ldots,\tilde{h}_k$ such that $\sum_{j=1}^k \tilde{h}_j\tilde{f}_j$ is divisible by $\hbar$. Let $h_j\in \C[X']$ be congruent to $\tilde{h}_j$ modulo $\hbar$ so that $\sum_{j=1}^k h_j f_j=0$. Since $f_1,\ldots,f_k$ form a regular sequence, we see that there are elements $h_{ij}\in \C[X']$ such that $h_{j'j}=-h_{jj'}$ and $h_j=\sum_{\ell=1}^k h_{j\ell}f_\ell$. Lift the elements $h_{jj'}$ to $\tilde{h}_{jj'}\in \mathcal{A}_\hbar$ with $\tilde{h}_{jj'}=-\tilde{h}_{j'j}$. So we have $\tilde{h}_j=\sum_{\ell=1}^k \tilde{h}_{j\ell}\tilde{f}_\ell+\hbar\tilde{h}'_j$ for some $\tilde{h}'_j\in \mathcal{A}_\hbar$. It follows that $\sum_{j=1}^k \tilde{h}_j \tilde{f}_j= \hbar\sum_{j=1}^k \tilde{h}'_j \tilde{f}_j+ \sum_{j,\ell=1}^k \tilde{h}_{j\ell}\tilde{f}_\ell\tilde{f}_j$. But $\sum_{j,\ell=1}^k \tilde{h}_{j\ell}\tilde{f}_\ell\tilde{f}_j= \sum_{j<\ell} \tilde{h}_{j\ell}[\tilde{f}_\ell,\tilde{f}_j]$. The bracket is divisible by $\hbar$. But $\frac{1}{\hbar}[\tilde{f}_\ell, \tilde{f}_j]$ is still in $\mathcal{A}_{\hbar,>0}$ and so in $\operatorname{Span}_{\mathcal{A}_\hbar}(\tilde{f}_1,\ldots,\tilde{f}_k)$. This finishes the proof of (1).
Let us proceed to the proof of (2). Let us show that we can choose a covering of $X^{\alpha(\C^\times)}$ by subsets of the form $X'^{\alpha(\C^\times)}$, where $X'$ is as in (1). This is easily reduced to the affine case. Here the existence of such a covering is deduced from the Luna slice theorem applied to a fixed point for $\alpha$. In more detail, for a fixed point $x$, we can choose an open affine neighborhood $U$ of the image of $x$ in $X\quo \alpha(\C^\times)$ with an \'{e}tale morphism $U\rightarrow T_xX\quo \alpha(\C^\times)$ such that $\pi^{-1}(U)\cong U\times_{T_xX\quo \alpha(\C^\times)}T_xX$, where $\pi$ stands for the quotient morphism for the action $\alpha$. The subset $\pi^{-1}(U)$ then obviously satisfies the requirements in (1).
It is easy to see that the algebras $\mathsf{C}_\alpha(\mathcal{A}^\theta(X'))$ form a presheaf with respect to the covering $X'^{\alpha(\C^\times)}$ (obviously, if $X',X''$ satisfy our assumptions, then their intersection does). Since the subsets $X'^{\alpha(\C^\times)}$ form a base of topology on $X^{\alpha(\C^\times)}$, it is enough to show that they form a sheaf with respect to the covering. This is easily deduced from the two straightforward claims: \begin{itemize} \item $\mathsf{C}_\alpha(\mathcal{A}^\theta(X'))$ is complete and separated with respect to the filtration (here we use an easy claim that, being finitely generated, the ideal $\mathcal{A}^\theta(X')\mathcal{A}^{\theta}(X')_{>0}$ is closed). \item The algebras $\operatorname{gr}\mathsf{C}_\alpha(\mathcal{A}^\theta(X'))=\C[X'^{\alpha(\C^\times)}]$ do form a sheaf -- the structure sheaf $\mathcal{O}_{X^{\alpha(\C^\times)}}$. \end{itemize} The proof of (2) is now complete.
To prove (3), we may assume that $X$ is affine. Let $\pi$ denote the categorical quotient map $X\rightarrow X\quo \alpha(\C^\times)$. It is easy to see that, for every open $(\C^\times)^2$-stable affine subvariety $X'$ that intersects $X^{\alpha(\C^\times)}$ non-trivially, and any point $x\in X'^{\alpha(\C^\times)}$, there is some $\C^\times$-stable open affine subvariety $Z\subset X\quo \alpha(\C^\times)$ with $x\in \pi^{-1}(Z)\subset X'$. So we can assume, in addition, that all covering affine subsets $X^i$ are of the form $\pi^{-1}(Z)$ with $Z$ as above. Moreover, we can assume that they are all principal (and so are given by non-vanishing of $\alpha(\C^\times)$-invariant and $\C^\times$-semiinvariant elements of $\mathcal{A}^\theta(X)$). Then all algebras $\mathsf{C}_\alpha(\mathcal{A}^\theta(X'))$ are obtained from $\mathsf{C}_\alpha(\mathcal{A}^\theta(X))$ by microlocalization. Our claim follows from standard properties of microlocalization. \end{proof}
\subsection{Comparison between algebra and sheaf levels}\label{SS_Ca_sheaf_vs_alg} Now let us suppose that $X$ is a conical symplectic resolution of $X_0$. We write $\mathcal{A}^\theta_\lambda$ for the quantization of $X$ corresponding to $\lambda$ and $\mathcal{A}_\lambda$ for its algebra of global sections. By the construction, for any $\lambda\in H^2(X)$, there is a natural homomorphism $\mathsf{C}_\alpha(\mathcal{A}_{\lambda})\rightarrow \Gamma(\mathsf{C}_\alpha(\mathcal{A}_\lambda^\theta))$. Our goal in this section is to prove the following result.
\begin{Prop}\label{Prop:A0_descr} Suppose that $H^i(X^{\alpha(\C^\times)},\mathcal{O})=0$ for $i>0$. There is a Zariski open subset $Z\subset H^2(X)$ such that the homomorphism $\mathsf{C}_\alpha(\mathcal{A}_{\lambda})\rightarrow \Gamma(\mathsf{C}_\alpha(\mathcal{A}^\theta_\lambda))$ is an isomorphism provided $\lambda\in Z$. \end{Prop} \begin{proof} Let $\tilde{X}$ be the universal deformation of $X$ over $H^2(X)$ and $\tilde{X}_0$ be its affinization. Consider the natural homomorphism $\mathsf{C}_\alpha(\C[\tilde{X}_0])\rightarrow \C[\tilde{X}^{\alpha(\C^\times)}]$. It is an isomorphism outside of $\mathcal{H}_\C$ (the union of singular hyperplanes) since $\tilde{X}\rightarrow \tilde{X}_0$ is an isomorphism precisely outside that locus. Now consider the canonical quantization $\tilde{\mathcal{A}}^\theta$ of $\tilde{X}$. Similarly to the previous subsection, $\mathsf{C}_\alpha(\tilde{\mathcal{A}}^{\theta})$ is a quantization of $\tilde{X}^{\alpha(\C^\times)}$. The cohomology vanishing for $X^{\alpha(\C^\times)}$ implies that for $\tilde{X}^{\alpha(\C^\times)}$. It follows that $\operatorname{gr} \Gamma(\mathsf{C}_\alpha(\tilde{\mathcal{A}}^\theta))=\C[\tilde{X}^{\alpha(\C^\times)}]$. Also there is a natural epimorphism $\mathsf{C}_\alpha(\C[\tilde{X}_0])\rightarrow \operatorname{gr}\mathsf{C}_\alpha(\tilde{\mathcal{A}})$ and a natural homomorphism $\operatorname{gr}\mathsf{C}_\alpha(\tilde{\mathcal{A}})\rightarrow \operatorname{gr} \Gamma(\mathsf{C}_\alpha(\tilde{\mathcal{A}}^{\theta}))$. The resulting homomorphism $\operatorname{gr}\mathsf{C}_\alpha(\tilde{\mathcal{A}})\rightarrow \operatorname{gr}\Gamma(\mathsf{C}_\alpha(\tilde{\mathcal{A}}^{\theta}))$ is, on one hand, the associated graded of the homomorphism $\mathsf{C}_\alpha(\tilde{\mathcal{A}})\rightarrow\Gamma(\mathsf{C}_\alpha(\tilde{\mathcal{A}}^{\theta}))$ and, on the other hand, an isomorphism over the complement of $\mathcal{H}_\C$. We deduce that the associated graded modules of the kernel and the cokernel of $\mathsf{C}_\alpha(\tilde{\mathcal{A}})\rightarrow\Gamma(\mathsf{C}_\alpha(\tilde{\mathcal{A}}^{\theta}))$ are supported on $\mathcal{H}_{\C}$ as $\C[H^2(X)]$-modules. It follows that the supports of the kernel and of the cokernel of $\mathsf{C}_\alpha(\tilde{\mathcal{A}})\rightarrow\Gamma(\mathsf{C}_\alpha(\tilde{\mathcal{A}}^{\theta}))$ are proper Zariski closed subvarieties of $H^2(X)$. We note that $\Gamma(\mathsf{C}_\alpha(\tilde{\mathcal{A}}^{\theta}))$ is flat over $H^2(X)$ and the specialization at $\lambda$ coincides with $\Gamma(\mathsf{C}_\alpha(\mathcal{A}_{\lambda}^{\theta}))$; this is because of the vanishing assumption on the structure sheaf. So $\mathsf{C}_\alpha(\tilde{\mathcal{A}})$ is generically flat over $H^2(X)$, while the specialization at $\lambda$ always coincides with $\mathsf{C}_\alpha(\mathcal{A}_\lambda)$. This implies the claim of the proposition. \end{proof}
\subsection{Correspondence between parameters}\label{SS_Ca_param} Our next goal is to understand how to recover the periods of the direct summands of $\mathsf{C}_\alpha(\mathcal{A}^{\theta})$ from that of $\mathcal{A}^\theta$. We will assume that $X^{\alpha(\C^\times)}$ satisfies the cohomology vanishing conditions on the structure sheaf, but we will not require that of $X$; the period map still makes sense, see \cite{BK}. Consider the decomposition $X^{\alpha(\C^\times)}=\bigsqcup_i X^0_i$ into connected components. Let $Y_i$ denote the contracting locus of $X^0_i$ and let $\mathcal{A}_{i}^{\theta 0}$ be the restriction of $\mathsf{C}_\alpha(\mathcal{A}^\theta)$ to $X^0_i$. To determine the period of $\mathcal{A}_i^{\theta 0}$, we will quantize $Y_i$ and then use results from \cite{BGKP} on quantizations of line bundles on lagrangian subvarieties.
First of all, let us consider the case when $X$ is affine and so is quantized by a single algebra, $\mathcal{A}$. We will quantize the contracting locus $Y$ by a single $\mathcal{A}$-$\mathcal{A}^0$-bimodule (where $\mathcal{A}^0$ stands for $\mathsf{C}_\alpha(\mathcal{A})$); this bimodule is $\mathcal{A}/\mathcal{A}\mathcal{A}_{>0}$.
\begin{Lem}\label{Lem:repel_quant_affine} Under the above assumptions, the associated graded of $\mathcal{A}/\mathcal{A}\mathcal{A}_{>0}$ is $\C[Y]$. \end{Lem} \begin{proof} This was established in the proof of Proposition \ref{Prop:A0}. More precisely, the case when $Y$ is a complete intersection given by $\alpha(\C^\times)$-semiinvariant elements of positive weight follows from the proof of assertion (1), while the general case follows similarly to the proof of (3). \end{proof}
Now let us consider the non-affine case. Let us cover $X\setminus \bigcup_{k\neq i}X^0_k$ with $(\C^\times)^2$-stable open affine subsets $X^j$. We may assume that $X^j$ either does not intersect $Y_i$ or its intersection with $Y_i$ is of the form $\pi_i^{-1}(X^j\cap X_i^0)$, where $\pi_i:Y_i\rightarrow X_i^0$ is the projection. For this we first choose some covering by $(\C^\times)^2$-stable open affine subsets. Then we delete $Y_i\setminus \pi_i^{-1}(X^j\cap X_i^0)$ from each $X^j$; we still have a covering. We cover the remainder of each $X^j$ by subsets that are preimages of open affine subsets on $X^j\quo \alpha(\C^\times)$; it is easy to see that this covering has the required properties. Let us replace $X$ with the union of the $X^j$ that intersect $Y_i$.
After this replacement, we can quantize $Y_i$ by an $\mathcal{A}^\theta$-$\mathsf{C}_\alpha(\mathcal{A}^\theta)$-bimodule. We have natural $\mathcal{A}^\theta(X^j)$-$\mathsf{C}_\alpha(\mathcal{A}^\theta)(X^j\cap X_i^0)$-bimodule structures on $\mathcal{A}^\theta(X^j)/\mathcal{A}^\theta(X^j)\mathcal{A}^\theta(X^j)_{>0}$, and we glue the bimodules corresponding to different $j$ together along the intersections $X^{j}\cap X^{j'}$ (we have homomorphisms $\mathcal{A}^\theta(X^j)\rightarrow \mathcal{A}^\theta(X^j\cap X^{j'})$ that give rise to $\mathcal{A}^\theta(X^j)/\mathcal{A}^\theta(X^j)\mathcal{A}^\theta(X^j)_{>0}\rightarrow \mathcal{A}^\theta(X^j\cap X^{j'})/\mathcal{A}^\theta(X^j\cap X^{j'})\mathcal{A}^\theta(X^j\cap X^{j'})_{>0}$ and to $\mathsf{C}_\alpha(\mathcal{A}^\theta(X^j))\rightarrow \mathsf{C}_\alpha(\mathcal{A}^\theta(X^j\cap X^{j'}))$). Similarly to the proof of (2) in Proposition \ref{Prop:A0}, we get a sheaf of $\mathcal{A}^\theta$-$\mathsf{C}_\alpha(\mathcal{A}^{\theta})$-bimodules on $Y_i$ that we denote by $\mathcal{A}^\theta/\mathcal{A}^\theta \mathcal{A}^\theta_{>0}$.
\begin{Lem}\label{Lem:repel_gener} The associated graded of $\mathcal{A}^\theta/\mathcal{A}^{\theta}\mathcal{A}_{>0}^\theta$ coincides with the $\mathcal{O}_X$-$\mathcal{O}_{X^0_i}$-bimodule $\mathcal{O}_{Y_i}$. \end{Lem}
Now we want to realize $Y_i$ a bit differently (we still use $X$ as in the paragraph preceding Lemma \ref{Lem:repel_gener}, and so can write $Y$ instead of $Y_i$ and $X^0$ instead of $X^0_i$). Namely, let $\iota$ denote the inclusion $Y\hookrightarrow X$ and $\pi$ be the projection $Y\rightarrow X^0$. We embed $Y$ into $X\times X^0$ via $(\iota,\pi)$. We equip $X\times X^0$ with the symplectic form $(\omega,-\omega^0)$, where $\omega^0$ is the restriction of $\omega$ to $X^0$. With respect to this symplectic form $Y$ is a lagrangian subvariety. Further, $\mathcal{A}^\theta\widehat{\otimes}\mathsf{C}_\alpha(\mathcal{A}^\theta)^{opp}$ is a quantization of $X\times X^0$ with period $(\lambda,-\lambda^0)$, where $\lambda,\lambda^0$ are periods of $\mathcal{A}^\theta, \mathsf{C}_\alpha(\mathcal{A}^\theta)$.
\begin{Prop}\label{Prop:shift} The period $\lambda^0$ coincides with $\iota^{0*}(\lambda+c_1(K_Y)/2)\in H^2(X^0)=H^2(Y)$, where $K_Y$ denotes the canonical class of $Y$ and $\iota^0$ is the inclusion $X^0\hookrightarrow X$. \end{Prop} \begin{proof} The period of $\mathcal{A}^\theta\widehat{\otimes}\mathsf{C}_\alpha(\mathcal{A}^\theta)^{opp}$ coincides with $p_1^*(\lambda)-p_2^*(\lambda^0)$, where $p_1:X\times X^0\rightarrow X, p_2:X\times X^0\rightarrow X^0$ are the projections. So the pull-back of the period to $Y$ is $\iota^*(\lambda)-\pi^*(\lambda^0)$. The structure sheaf of $Y$ admits a quantization to an $\mathcal{A}^\theta\widehat{\otimes}\mathsf{C}_{\alpha}(\mathcal{A}^\theta)^{opp}$-bimodule. By \cite[(1.1.3), Theorem 1.1.4]{BGKP}, we have $\iota^*(\lambda)-\pi^*(\lambda^0)=-\frac{1}{2}c_1(K_Y)$. Restricting this equality to $X^0$, we get the equality required in the proposition.
\end{proof}
\subsection{Gieseker case}\label{SS_Ca_Gies} Now we want to apply Proposition \ref{Prop:shift} to the case when $X=\M^\theta(n,r)$ and $\alpha$ comes from a generic one-dimensional torus in $\GL(r)$ given by $t\mapsto (t^{d_1},\ldots,t^{d_r})$ with $d_1\gg d_2\gg\ldots\gg d_r$. Recall that the fixed point components are parameterized by the decompositions $\mu=(n_1,\ldots,n_r)$ of $n$ into $r$ ordered parts $n_i\geqslant 0$. Let $X^0_\mu$ denote the component corresponding to such a decomposition $\mu$ and let $Y_\mu$ be its contracting locus. So $Y_\mu\rightarrow X^0_\mu$ is a vector bundle. We will need to describe this vector bundle. The description is a slight variation of \cite[Proposition 3.13]{Nakajima_tensor}.
First, consider the following situation. Set $V:=\C^n, W:=\C^r$. Choose a decomposition $W=W^1\oplus W^2$ with $\dim W^i=r_i$ and consider the one-dimensional torus in $\GL(W)$ acting trivially on $W^2$ and by $t\mapsto t$ on $W^1$. The components of the fixed points in $\M^\theta(n,r)$ are in one-to-one correspondence with decompositions of $n$ into the sum of two parts. Pick such a decomposition $n=n_1+n_2$ and consider the splitting $V=V^1\oplus V^2$ into the sum of two spaces of the corresponding dimensions and let $X^0_1=\M^\theta(n_1,r_1)\times \M^\theta(n_2,r_2)\subset \M^\theta(n,r)^{\alpha(\C^\times)}$ be the corresponding component. We assume that $\theta>0$.
Nakajima has described the contracting bundle $Y_1\rightarrow X^0_1$. This is the bundle on $X^0_1=\M^\theta(n_1,r_1)\times \M^\theta(n_2,r_2)$ that is induced from the $\GL(n_1)\times \GL(n_2)$-module $\ker \beta^{12}/\operatorname{im}\alpha^{12}$, where $\alpha^{12},\beta^{12}$ are certain $\GL(n_1)\times \GL(n_2)$-equivariant linear maps $$\Hom(V^2,V^1)\xrightarrow{\alpha^{12}}\Hom(V^2,V^1)^{\oplus 2}\oplus \operatorname{Hom}(W^2,V^1)\oplus \Hom(V^2,W^1)\xrightarrow{\beta^{12}}\Hom(V^2,V^1).$$ We do not need to know the precise form of the maps $\alpha^{12},\beta^{12}$; what we need is that $\alpha^{12}$ is injective while $\beta^{12}$ is surjective. So $\ker \beta^{12}/\operatorname{im}\alpha^{12}\cong \operatorname{Hom}(W^2,V^1)\oplus \Hom(V^2,W^1)$, an isomorphism of $\GL(n_1)\times \GL(n_2)$-modules.
It is easy to see that if $\alpha':\C^\times \rightarrow \GL(r_2)$ is a homomorphism of the form $t\mapsto \operatorname{diag}(t^{d_1},\ldots,t^{d_{r_2}})$ with $d_1,\ldots,d_{r_2}\gg 0$, then the contracting bundle for the one-parameter subgroup $(\alpha',1):\C^\times \rightarrow \GL(W^1)\times \GL(W^2)$ coincides with the sum of the contracting bundles for $\alpha'$ and for $(t,1)$. So we get the following result.
\begin{Lem}\label{Lem:contr_Gies} Consider $\alpha:\C^\times\rightarrow \GL(r)$ of the form $t\mapsto \operatorname{diag}(t^{d_1},\ldots,t^{d_r})$ with $d_1\gg d_2\gg \ldots\gg d_r$. Consider the irreducible component of $\M^\theta(n,r)^{\alpha(\C^\times)}$ corresponding to the decomposition $n=n_1+\ldots+n_r$. Then its contracting bundle is induced from the following $\prod_{i=1}^r \GL(n_i)$-module: $\bigoplus_{i=1}^r \left((\C^{n_i})^{\oplus (r-i)}\oplus (\C^{n_i*})^{\oplus (i-1)}\right)$. \end{Lem}
Let $\mathcal{A}_\lambda^\theta(n_1,\ldots,n_r)$ denote the summand of $\mathsf{C}_\alpha(\mathcal{A}^\theta_\lambda(n,r))$ corresponding to the decomposition $n=n_1+\ldots+n_r$. Recall that the period of $\mathcal{A}_\lambda^\theta(n,r)$ is $\lambda+\frac{r}{2}$. Using Lemma \ref{Lem:contr_Gies} and Proposition \ref{Prop:shift}, we deduce the following claim.
\begin{Cor}\label{Cor:shift_Gies} We have $\mathcal{A}_\lambda^\theta(n_1,\ldots,n_r)=\bigotimes_{i=1}^r \mathcal{A}^\theta_{\lambda+(i-1)}(n_i,1)$. \end{Cor}
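For instance, for $r=2$ and a decomposition $n=n_1+n_2$ the corollary specializes to $$\mathcal{A}_\lambda^\theta(n_1,n_2)=\mathcal{A}^\theta_{\lambda}(n_1,1)\otimes \mathcal{A}^\theta_{\lambda+1}(n_2,1),$$ so the two tensor factors are quantizations with parameters differing by $1$; this is just Corollary \ref{Cor:shift_Gies} written out in the smallest nontrivial case.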
\subsection{Parabolic induction}\label{SS_parab_induc} Let $X$ be a conical symplectic resolution of $X_0$. We assume that $X$ comes with a Hamiltonian action of a torus $T$ such that $X^T$ is finite. Let $\mathfrak{C}$ stand for $\operatorname{Hom}(\C^\times,T)$. We introduce a pre-order $\prec^\lambda$ on $\mathfrak{C}$ as follows: $\alpha\prec^\lambda \alpha'$ if $\mathcal{A}_\lambda \mathcal{A}_{\lambda,>0,\alpha}\subset \mathcal{A}_\lambda\mathcal{A}_{\lambda,>0,\alpha'}$.
This pre-order gives rise to an equivalence relation $\sim^\lambda$ on $\mathfrak{C}$: we set $\alpha\sim^\lambda \alpha'$ if $\alpha\prec^\lambda\alpha'$ and $\alpha'\prec^\lambda\alpha$. Both extend naturally to $\mathfrak{C}_{\mathbb{Q}}:=\mathbb{Q}\otimes_{\mathbb{Z}}\mathfrak{C}$.
The following lemma explains why this ordering is important.
\begin{Lem}\label{Lem:parab_ind} Suppose $\alpha\prec^\lambda\alpha'$. Then $\mathsf{C}_{\alpha'}(\mathsf{C}_\alpha(\mathcal{A}_\lambda))=\mathsf{C}_{\alpha'}(\mathcal{A}_\lambda)$. Further, let $\Delta_{\alpha'}: \mathsf{C}_{\alpha'}(\mathcal{A}_\lambda)\operatorname{-mod}\rightarrow \mathcal{A}_\lambda\operatorname{-mod}, \Delta_{\alpha}: \mathsf{C}_\alpha(\mathcal{A}_\lambda)\operatorname{-mod}\rightarrow \mathcal{A}_\lambda\operatorname{-mod}, \underline{\Delta}:\mathsf{C}_{\alpha'}(\mathcal{A}_\lambda)\operatorname{-mod} \rightarrow \mathsf{C}_\alpha(\mathcal{A}_\lambda)\operatorname{-mod}$ be the Verma module functors. We have $\Delta_{\alpha'}=\Delta_\alpha\circ\underline{\Delta}$. \end{Lem} The proof is straightforward.
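To connect this with the classical picture (an illustration only; $U(\mathfrak{g})$ is not literally one of the algebras $\mathcal{A}_\lambda$ considered here, but all of the definitions above make sense for it): take $\mathcal{A}=U(\mathfrak{g})$, let $\alpha'$ be a regular dominant one-parameter subgroup and $\alpha$ a dominant non-regular one, so that $\mathcal{A}\mathcal{A}_{>0,\alpha}=U(\mathfrak{g})\mathfrak{n}_\alpha\subset U(\mathfrak{g})\mathfrak{n}=\mathcal{A}\mathcal{A}_{>0,\alpha'}$, where $\mathfrak{n}_\alpha$ is the nilradical of the parabolic subalgebra $\mathfrak{p}=\mathfrak{l}\oplus\mathfrak{n}_\alpha$ determined by $\alpha$. Then $\mathsf{C}_{\alpha'}(U(\mathfrak{g}))\cong U(\mathfrak{h})$ and $\mathsf{C}_{\alpha}(U(\mathfrak{g}))\cong U(\mathfrak{l})$, and the equality $\Delta_{\alpha'}=\Delta_\alpha\circ\underline{\Delta}$ becomes the classical fact that $$U(\mathfrak{g})\otimes_{U(\mathfrak{b})}\C_\mu\cong U(\mathfrak{g})\otimes_{U(\mathfrak{p})}\left(U(\mathfrak{l})\otimes_{U(\mathfrak{b}\cap\mathfrak{l})}\C_\mu\right),$$ i.e., that a Verma module for $\mathfrak{g}$ is parabolically induced from a Verma module for the Levi.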
The lemma shows that the Verma module functor can be studied in stages. This is what we mean by parabolic induction.
Our goal now is to describe the pre-order $\prec^\lambda$ for $\lambda$ Zariski generic. We say that $\alpha\prec\alpha'$ if, for each $x\in X^T$, we have $T_xX_{>0,\alpha}\subset T_xX_{>0,\alpha'}$. This automatically implies $T_xX_{\geqslant 0,\alpha}\supset T_xX_{\geqslant 0,\alpha'}$ (via taking the skew-orthogonal complement) and $T_xX_{<0,\alpha}\subset T_xX_{<0,\alpha'}$.
\begin{Prop}\label{Prop:orders} Fix $\alpha,\alpha'$. For $\lambda$ in a Zariski open subset, $\alpha\prec^\lambda\alpha'$ is equivalent to $\alpha\prec\alpha'$. \end{Prop} \begin{proof} The proof is in several steps. Suppose $\alpha\prec\alpha'$ and let us check that $\alpha\prec^\lambda\alpha'$.
{\it Step 1}. We need to check that, for a Zariski generic $\lambda$, we have $\mathcal{A}_{\lambda,>0,\alpha}\subset \mathcal{A}_\lambda\mathcal{A}_{\lambda,>0,\alpha'}$ or, equivalently, $\alpha$ has no positive weights on $\mathcal{A}_\lambda/\mathcal{A}_{\lambda}\mathcal{A}_{\lambda,>0,\alpha'}$. This will follow if we check that the $\tilde{\mathcal{A}}$-submodule in $\tilde{\mathcal{A}}/\tilde{\mathcal{A}}\tilde{\mathcal{A}}_{>0,\alpha'}$ generated by the elements of positive weight for $\alpha$ is torsion over $\C[H^2(X)]$ (here, as usual, $\tilde{\mathcal{A}}$ stands for the algebra of global sections of the canonical quantization $\tilde{\mathcal{A}}^\theta$ of $\tilde{X}$). This, in turn, will follow if we prove an analogous statement for $\operatorname{gr}\tilde{\mathcal{A}}/\tilde{\mathcal{A}}\tilde{\mathcal{A}}_{>0,\alpha'}$.
{\it Step 2}. We have an epimorphism $\C[\tilde{X}]/\C[\tilde{X}]\C[\tilde{X}]_{>0,\alpha'}\twoheadrightarrow \operatorname{gr}\tilde{\mathcal{A}}/\tilde{\mathcal{A}}\tilde{\mathcal{A}}_{>0,\alpha'}$. We claim that its kernel is again torsion over $\C[H^2(X)]$; in fact, it is supported on $\mathcal{H}_{\C}$. Consider the $\hbar$-adic completion $\tilde{\mathcal{A}}_\hbar$ of $R_\hbar(\tilde{\mathcal{A}})$. Let $\tilde{\mathcal{A}}_\hbar^{reg}$ denote the (completed) localization of $\tilde{\mathcal{A}}_\hbar$ to $H^2(X)\setminus \mathcal{H}_{\C}$. Then $\tilde{\mathcal{A}}_{\hbar}^{reg}/\tilde{\mathcal{A}}_\hbar^{reg}\tilde{\mathcal{A}}_{\hbar,>0,\alpha'}^{reg}$ coincides with the localization of $\tilde{\mathcal{A}}_{\hbar}/\tilde{\mathcal{A}}_\hbar\tilde{\mathcal{A}}_{\hbar,>0,\alpha'}$. On the other hand, over $H^2(X)\setminus \mathcal{H}_{\C}$, the ideal $\C[\tilde{X}]\C[\tilde{X}]_{>0,\alpha'}$ is a locally complete intersection (given by elements of positive $\alpha'$-weight), compare to the proof of (2) in Proposition \ref{Prop:A0}. As in the proof of (1) of Proposition \ref{Prop:A0}, this implies that $\tilde{\mathcal{A}}_{\hbar}^{reg}/\tilde{\mathcal{A}}_\hbar^{reg}\tilde{\mathcal{A}}_{\hbar,>0,\alpha'}^{reg}$ is flat over $\C[\hbar]$. So the $\hbar$-torsion in $\tilde{\mathcal{A}}_{\hbar}/\tilde{\mathcal{A}}_\hbar\tilde{\mathcal{A}}_{\hbar,>0,\alpha'}$ is supported on $\mathcal{H}_{\C}$. This implies the claim at the beginning of this step.
{\it Step 3}. So we need to check that, under the assumption $\alpha\prec\alpha'$, the submodule in $\C[\tilde{X}]/\C[\tilde{X}]\C[\tilde{X}]_{>0,\alpha'}$ generated by the elements of positive $\alpha$-weight is supported on $\mathcal{H}_{\C}$. This is equivalent to the claim that $\C[X_z]_{>0,\alpha}\subset \C[X_z]\C[X_z]_{>0,\alpha'}$ for $z\not\in \mathcal{H}_{\C}$. Here we write $X_z$ for the fiber of $\tilde{X}\rightarrow H^2(X)$ over $z$. Note that we still have $(T_xX_z)_{>0,\alpha}\subset (T_xX_z)_{>0,\alpha'}$ for all $x\in X_z^{\alpha'(\C^\times)}$. The inclusion $\C[X_z]_{>0,\alpha}\subset \C[X_z]\C[X_z]_{>0,\alpha'}$ now follows from the Luna slice theorem (for the torus $\alpha(\C^\times)\alpha'(\C^\times)$ applied at $\alpha'(\C^\times)$-fixed points; we would like to point out that such points are automatically $\alpha(\C^\times)$-fixed).
The proof of $\alpha\prec\alpha'\Rightarrow \alpha\prec^\lambda\alpha'$ is now complete. We can reverse the argument to see that if $\alpha\prec^\lambda \alpha'$ for Zariski generic $\lambda$, then $\alpha\prec\alpha'$. \end{proof}
The equivalence classes for $\prec$ are cones in $\mathfrak{C}_{\mathbb{Q}}$, and the induced order on the classes is given by inclusion of their closures. In particular, there are finitely many equivalence classes. So there is a Zariski open subset of parameters where $\prec^\lambda$ refines $\prec$.
Sometimes we will need to determine when $\alpha\prec^\lambda \alpha'$ for a fixed (not necessarily Zariski generic) $\lambda$. Pick one-parameter subgroups $\alpha,\beta:\C^\times\rightarrow T$.
\begin{Lem}\label{Lem:prec_spec} For $m\gg 0$, we have $\alpha\prec^\lambda m\alpha+\beta$ for all $\lambda$. \end{Lem} \begin{proof} Clearly, $\alpha\sim^\lambda m\alpha$ for all $m\in \mathbb{Z}_{>0}$. The algebra $\operatorname{gr}\mathcal{A}_{\geqslant 0,\alpha}=\C[X_0]_{\geqslant 0,\alpha}$ is finitely generated, as in the proof of \cite[Lemma 3.1.2]{GL}. So we can choose finitely many $T$-semiinvariant generators of the ideal $\C[X_0]_{>0,\alpha}$ in $\C[X_0]_{\geqslant 0,\alpha}$, say $f_1,\ldots,f_k$. Let $\tilde{f}_1,\ldots,\tilde{f}_k$ denote their lifts to $T$-semiinvariant elements in $\mathcal{A}:=\mathcal{A}_\lambda$; these lifts are generators of the ideal $\mathcal{A}_{>0,\alpha}$ in $\mathcal{A}_{\geqslant 0,\alpha}$. Let $a_1,\ldots,a_k>0$ be their weights for $\alpha$ and $b_1,\ldots,b_k$ be their weights for $\beta$. Take $m\in \mathbb{Z}_{>0}$ such that $ma_i+b_i>0$ for all $i$. The elements $\tilde{f}_1,\ldots,\tilde{f}_k$ then lie in $\mathcal{A}_{>0,m\alpha+\beta}$ and so $\mathcal{A}\mathcal{A}_{>0,\alpha}\subset \mathcal{A}\mathcal{A}_{>0,m\alpha+\beta}$. \end{proof}
\section{Finite dimensional representations in the Gieseker case}\label{S_fin_dim} In this section we will prove (1) of Theorem \ref{Thm:fin dim} and Theorem \ref{Thm:cat_O_easy}. First, we prove that the homological duality realizes the Ringel duality of highest weight categories (Subsection \ref{SS_Hom_vs_Ring}).
Then, in Subsection \ref{SS_fin_dim_proof}, we prove part (1) of Theorem \ref{Thm:fin dim}. The ideas of the proof are as follows: we use the Cartan construction to show that we cannot have finite dimensional representations when the denominator is different from $n$ and also that, in the denominator $n$ case, the category $\mathcal{O}$ is not semisimple. Thanks to Subsection \ref{SS_Hom_vs_Ring}, this means that there is a module with support of dimension $<\frac{1}{2}\dim X$ in $\mathcal{O}$ (for the algebra $\bar{\mathcal{A}}_\lambda(n,r)$ with Zariski generic $\lambda$). Using the restriction functors, we see that this module is finite dimensional. Proposition \ref{Prop:CC_inject} then implies that there is a unique finite dimensional module.
Finally, in Subsection \ref{SS_cat_O_thm} we prove Theorem \ref{Thm:cat_O_easy}. The main idea is to recover the category from the homological shifts produced by the Ringel duality.
\subsection{Homological duality vs Ringel duality}\label{SS_Hom_vs_Ring} We start by proving that the homological duality functor $D$ realizes the contravariant Ringel duality on categories $\mathcal{O}$.
Here we deal with the case when $X\rightarrow X_0$ is a conical symplectic resolution (satisfying the additional assumption from Subsection \ref{SS_HC_bimod}). We assume that $X$ comes equipped with a Hamiltonian $\C^\times$-action $\alpha$ that has finitely many fixed points. We choose a period $\hat{\lambda}$ such that \begin{itemize} \item[(i)] $\mathsf{C}_\alpha(\mathcal{A}_{\pm \hat{\lambda}})\cong \C[X^{\alpha(\C^\times)}]$ and the categories $\mathcal{O}(\mathcal{A}_{\hat{\lambda}}), \mathcal{O}(\mathcal{A}_{-\hat{\lambda}})$ are highest weight with standard objects being Verma modules. \item[(ii)] $D^b(\mathcal{O}(\mathcal{A}_{\hat{\lambda}}))\xrightarrow {\sim} D^b_{\mathcal{O}}(\mathcal{A}_{\hat{\lambda}}), D^b(\mathcal{O}(\mathcal{A}_{-\hat{\lambda}}))\xrightarrow {\sim} D^b_{\mathcal{O}}(\mathcal{A}_{-\hat{\lambda}})$. \end{itemize} We recall that these two conditions hold for a Zariski generic $\hat{\lambda}$.
Let us recall the definition of the (contravariant) Ringel duality. Let $\mathcal{C}_1,\mathcal{C}_2$ be two highest weight categories. Suppose we have a contravariant equivalence $R: \mathcal{C}_1^\Delta\xrightarrow{\sim}\mathcal{C}_2^\Delta$ (the superscript $\Delta$ means the full subcategories of standardly filtered objects). Then it restricts to a contravariant duality between $\mathcal{C}_1\operatorname{-proj}$ and $\mathcal{C}_2\operatorname{-tilt}$. The former denotes the category of the projective objects in $\mathcal{C}_1$, while the latter is the category of tilting objects in $\mathcal{C}_2$, i.e., objects that are both standardly and costandardly filtered. The equivalence $R$ extends to an equivalence $D^b(\mathcal{C}_1)\xrightarrow{\sim} D^b(\mathcal{C}_2)^{opp}$. Moreover, the category $\mathcal{C}_2$ gets identified with $\operatorname{End}(T)\operatorname{-mod}$ and, under this identification, the derived equivalence above is $\operatorname{RHom}_{\mathcal{C}_1}(\bullet,T)$. Here $T$ is the tilting generator of $\mathcal{C}_1$, i.e., the direct sum of all indecomposable tiltings. For the proofs of the claims above in this paragraph see \cite[Proposition 4.2]{GGOR}.
We say that $\mathcal{C}_2$ is a Ringel dual of $\mathcal{C}_1$ and write $\mathcal{C}_1^\vee$ for $\mathcal{C}_2$.
\begin{Prop}\label{Prop:contr_Ringel} Take $\hat{\lambda}$ in a Zariski open set and such that the abelian localization holds for $(\hat{\lambda},\theta), (-\hat{\lambda},-\theta)$. Then there is an equivalence $\mathcal{O}(\mathcal{A}_{-\hat{\lambda}})\xrightarrow{\sim} \mathcal{O}(\mathcal{A}_{\hat{\lambda}})^\vee$ that intertwines the homological duality functor $D:D^b(\mathcal{O}(\mathcal{A}_{\hat{\lambda}}))\rightarrow D^b(\mathcal{O}(\mathcal{A}_{-\hat{\lambda}}))^{opp}$ and the contravariant Ringel duality functor $\operatorname{RHom}_{\mathcal{O}(\mathcal{A}_{\hat{\lambda}})}(\bullet,T): D^b(\mathcal{O}(\mathcal{A}_{\hat{\lambda}}))\rightarrow D^b(\mathcal{O}(\mathcal{A}_{\hat{\lambda}})^\vee)^{opp}$. \end{Prop}
Let $\Delta_{\hat{\lambda}}$ denote the direct sum of all standard objects in $\mathcal{O}(\mathcal{A}_{\hat{\lambda}})$. Of course, $\Delta_{\hat{\lambda}}=\mathcal{A}_{\hat{\lambda}}/\mathcal{A}_{\hat{\lambda}}\mathcal{A}_{\hat{\lambda},>0}$.
We write $\theta$ for an element of the ample cone of $X$.
\begin{Lem}\label{Lem:dual_hom_vanish} For a parameter $\hat{\lambda}$ in a Zariski open subset, the object $D(\Delta_{\hat{\lambda}}(p))$ is concentrated in homological degree $0$ and, moreover, its characteristic cycle (an element of the vector space with basis formed by the irreducible components of the contracting variety $Y$) coincides with the class of (the degeneration of) the contracting component of $p$ at a generic fiber of $\tilde{X}\rightarrow H^2(X)$.
\end{Lem} \begin{proof} Let us prove the first claim. What we need to prove is that $\operatorname{Ext}^i(\Delta_{\hat{\lambda}},\mathcal{A}_{\hat{\lambda}})=0$ provided $i\neq \frac{1}{2}\dim X$ for $\hat{\lambda}$ in a Zariski open subset. Our claim will follow if we show that the support of $\operatorname{Ext}^i(\widetilde{\Delta}, \widetilde{\mathcal{A}})$ in $H^2(X)$ is not dense in $H^2(X)$ for $i\neq \frac{1}{2}\dim X$ and that the $\C[H^2(X)]$-module $\operatorname{Ext}^i(\widetilde{\Delta}, \widetilde{\mathcal{A}})$ is generically flat. Here we write $\widetilde{\Delta}=\widetilde{\mathcal{A}}/\widetilde{\mathcal{A}}\widetilde{\mathcal{A}}_{>0}$.
We can take a graded free resolution of $\operatorname{gr}\widetilde{\Delta}$ and lift it to a free resolution of $\widetilde{\Delta}$. It follows that the right $\widetilde{\mathcal{A}}$-modules $\operatorname{Ext}^i(\widetilde{\Delta}, \widetilde{\mathcal{A}})$ are naturally filtered and that the associated graded modules are subquotients of $\operatorname{Ext}^i(\operatorname{gr}\widetilde{\Delta}, \C[\widetilde{X}])$. The claim about generic flatness follows (compare with \cite[Lemma 5.5, Corollary 5.6]{BL}). Also, to prove the claim in the previous paragraph that the support is not dense, it is enough to prove a similar claim for $\operatorname{Ext}^i(\operatorname{gr}\widetilde{\Delta}, \C[\widetilde{X}])$.
Set $\widetilde{\Delta}_{cl}:=\C[\widetilde{X}]/\C[\widetilde{X}]\C[\widetilde{X}]_{>0}$. We have $\widetilde{\Delta}_{cl}\twoheadrightarrow \operatorname{gr}\widetilde{\Delta}$. Moreover, the support of the kernel in $H^2(X)$ is contained in $\mathcal{H}_{\C}$, see Step 2 of the proof of Proposition \ref{Prop:orders}. So it is enough to show that the support of $\operatorname{Ext}^i(\widetilde{\Delta}_{cl}, \C[\widetilde{X}])$ is not dense when $i\neq \frac{1}{2}\dim X$. This follows from the observation that, generically over $H^2(X)$, the ideal $\C[\widetilde{X}]\C[\widetilde{X}]_{>0}$ is a locally complete intersection in a smooth variety.
The argument above also implies that the associated graded of $D(\Delta_{\hat{\lambda}}(p))$ coincides with that of $\operatorname{Ext}^{\frac{1}{2}\dim X}(\Delta_{cl,\underline{\lambda}}(p),\C[X_{\underline{\lambda}}])$ for a Zariski generic element $\underline{\lambda}\in H^2(X)$. The latter is just the class of the contracting component $Y_{\underline{\lambda},p}$ (defined as the sum of components of $X\cap \overline{\C^\times Y_{\underline{\lambda},p}}$ with obvious multiplicities).
\end{proof}
\begin{proof}[Proof of Proposition \ref{Prop:contr_Ringel}] We write $\Delta_{\hat{\lambda}}(p)^\vee$ for $D(\Delta_{\hat{\lambda}}(p))$; thanks to Lemma \ref{Lem:dual_hom_vanish}, this is an object in $\mathcal{O}(\mathcal{A}_{-\hat{\lambda}})$ (and not just a complex in its derived category). We have $\operatorname{End}(\Delta_{\hat{\lambda}}(p)^\vee)=\C$ and $\Ext^i(\Delta_{\hat{\lambda}}(p)^\vee, \Delta_{\hat{\lambda}}(p')^\vee)=0$ if $i>0$ and $p\leqslant^\theta p'$. We remark that the orders $\leqslant^\theta$ and $\leqslant^{-\theta}$ can be refined to opposite partial orders (first we refine them to the orders coming from the values of the real moment maps for the actions of $\mathbb{S}^1\subset \alpha(\C^\times)$, and then refine those), compare with \cite[5.4]{Gordon}. So it only remains to prove that the characteristic cycle of $\Delta_{\hat{\lambda}}(p)^\vee$ consists of the contracting components $Y_{p'}$ with $p'\leqslant^{-\theta}p$. The characteristic cycle of $\Delta_{\hat{\lambda}}(p)^\vee$ coincides with $\overline{\C^\times Y_{\underline{\lambda},p}}\cap X$, by Lemma \ref{Lem:dual_hom_vanish}. On the other hand, the characteristic cycle of $\Delta_{-\hat{\lambda}}(p)$ is the same. Our claim follows. \end{proof}
\begin{Rem} We also have the covariant Ringel duality given by $\operatorname{RHom}(T,\bullet)$; it maps costandard objects to standard ones. Under the assumption that the conical symplectic resolutions of $X_0$ are strictly semismall, Propositions \ref{Prop:contr_Ringel} and \ref{Prop:WC_vs_D} imply that the long wall-crossing functor is an inverse of the covariant Ringel duality. This proves a part of \cite[Conjecture 8.27]{BLPW}. \end{Rem}
\subsection{Proof of Theorem \ref{Thm:fin dim}}\label{SS_fin_dim_proof} Here we prove (1) of Theorem \ref{Thm:fin dim}. The proof is in several steps.
{\it Step 1}. Let us establish a criterion for the semisimplicity of a highest weight category via the Ringel duality.
\begin{Lem}\label{Lem:hw_techn} Let $\mathcal{C}$ be a highest weight category and $R:D^b(\mathcal{C})\rightarrow D^b(\mathcal{C}^{\vee})^{opp}$ denote the contravariant Ringel duality. The following conditions are equivalent: \begin{enumerate} \item $\mathcal{C}$ is semisimple. \item We have $H^0(R(L))\neq 0$ for every simple object $L$. \item Every simple object lies in the socle of a standard object. \end{enumerate} \end{Lem} \begin{proof} The implication (1)$\Rightarrow$(2) is clear. The implication (2)$\Rightarrow$(3) follows from the fact that every standard object in a highest weight category embeds into an indecomposable tilting.
Let us prove (3)$\Rightarrow$(1). Let $\lambda$ be a maximal (with respect to the coarsest highest weight ordering) label. Then the simple $L(\lambda)$ lies in the socle of some standard, say $\Delta(\mu)$. But all simple constituents of $\Delta(\mu)$ are $L(\nu)$ with $\nu\leqslant \mu$. It follows that $\mu=\lambda$. Since $L(\lambda)$ lies in the socle of $\Delta(\lambda)$ and also coincides with its head, we see that $\Delta(\lambda)=L(\lambda)$. So $L(\lambda)$ is projective and therefore spans a block in the category. Since this holds for any maximal $\lambda$, we deduce that the category $\mathcal{C}$ is semisimple. \end{proof}
Let us remark that for the category $\mathcal{O}(\bar{\mathcal{A}}_\lambda(n,r))$ condition (2) is equivalent to every simple having support of dimension $rn-1$. This follows from Subsection \ref{SS_dual_WC} and Proposition \ref{Prop:contr_Ringel}.
Below in this proof we assume that $\lambda$ is chosen as in Proposition \ref{Prop:contr_Ringel}, in particular, the categories $\mathcal{O}(\bar{\mathcal{A}}_\lambda(n,r))$ and $\mathcal{O}(\bar{\mathcal{A}}^\theta_\lambda(n,r))$ are equivalent. In the definition of categories $\mathcal{O}$ we choose the torus of the form $(\alpha,1)$, where $\alpha:\C^\times\rightarrow \GL(r)$ is given by $t\mapsto (t^{d_1},\ldots,t^{d_r})$ with $d_1\gg d_2\gg\ldots\gg d_r$.
{\it Step 2}. Let us prove that the category $\mathcal{O}(\bar{\mathcal{A}}^\theta_{\lambda}(n,r))$ is semisimple, when $\lambda\not\in \mathbb{Q}$ or the denominator of $\lambda$ is bigger than $n$. The proof is by induction on $n$ (for $n=0$ the claim is vacuous).
By Corollary \ref{Cor:full_supp}, we see that a simple $\bar{\mathcal{A}}_\lambda(n,r)$-module whose support has dimension $<rn-1$ is annihilated by a proper ideal of $\bar{\mathcal{A}}_\lambda(n,r)$. We claim that any such ideal has finite codimension under our assumption on $\lambda$. Indeed, otherwise some proper slice algebra has an ideal of finite codimension, see Proposition \ref{Prop:dagger_prop}, which contradicts our inductive assumption. So the support of a simple has dimension either $rn-1$ or $0$.
If we know that all simple modules have support of dimension $rn-1$, we are done. But thanks to Corollary \ref{Cor:shift_Gies}, Proposition \ref{Prop:A0_descr} and known results on finite dimensional $\bar{\mathcal{A}}_\lambda(n,1)$-modules, \cite{BEG}, we see that $\mathsf{C}_\alpha(\bar{\mathcal{A}}_\lambda(n,r))$ has no finite dimensional modules (we obviously have $\mathsf{C}_\alpha(\mathcal{A}_\lambda(n,r))=D(\C)\otimes \mathsf{C}_\alpha(\bar{\mathcal{A}}_\lambda(n,r))$ and none of the summands of $\mathcal{A}_\lambda(n,r)^0$ has a simple of GK dimension $1$ in category $\mathcal{O}$).
{\it Step 3}. The description of $\mathsf{C}_\alpha(\bar{\mathcal{A}}_\lambda(n,r))$ shows that there are no finite dimensional $\bar{\mathcal{A}}_\lambda(n,r)$-modules in the case when the denominator of $\lambda$ is less than $n$.
{\it Step 4}. Now consider the case of denominator $n$. Similarly to Step 2, all simples are either finite dimensional or have support of dimension $rn-1$. By Lemma \ref{Lem:top_cohom}, the dimension of the middle homology of $\bar{\M}^\theta(n,r)$ is $1$. Thanks to Proposition \ref{Prop:CC_inject}, the number of finite dimensional irreducibles is 0 or $1$. If there is one such module, then the category of finite dimensional modules is semisimple because $\mathcal{O}(\bar{\mathcal{A}}_\lambda(n,r))$ is a highest weight category. Thanks to Step 1, we only need to show that $\mathcal{O}(\bar{\mathcal{A}}_\lambda(n,r))$ is not semisimple.
One-parameter subgroups $\alpha:t\mapsto \operatorname{diag}(t^{d_1},\ldots, t^{d_r})$ with $d_1\gg\ldots \gg d_r$ form one equivalence class for the pre-order $\prec$. This cone is a face of the equivalence class containing $(\alpha,1)$. Proposition \ref{Prop:orders} implies that $\alpha\prec^\lambda (\alpha,1)$. Now we can use Lemma \ref{Lem:parab_ind}.
Let us write $\Delta^0,\underline{\Delta}$ for the Verma module functors $\Delta^0:\C[\mathsf{P}_r(n)]\rightarrow \mathcal{O}(\mathsf{C}_\alpha(\mathcal{A}_\lambda(n,r)))$ and $\underline{\Delta}:\mathcal{O}(\mathsf{C}_\alpha(\mathcal{A}_\lambda(n,r)))\rightarrow \mathcal{O}(\mathcal{A}_\lambda(n,r))$; here we write $\mathsf{P}_r(n)$ for the set of the $r$-multipartitions of $n$. By Lemma \ref{Lem:parab_ind}, we have $\Delta=\underline{\Delta}\circ \Delta^0$. The category $\mathcal{O}(\mathsf{C}_\alpha(\mathcal{A}_\lambda(n,r)))$ is not semisimple: there is a nonzero homomorphism $\varphi:\Delta^0(p_2)\rightarrow \Delta^0(p_1)$, where $p_1=(\varnothing^{r-1}, (n)), p_2=(\varnothing^{r-1}, (n-1,1))$. So we get a homomorphism $\underline{\Delta}(\varphi):\Delta(p_2)=\underline{\Delta}(\Delta^0(p_2)) \rightarrow \underline{\Delta}(\Delta^0(p_1))=\Delta(p_1)$. The highest $\alpha$-weight components of $\Delta(p_2),\Delta(p_1)$ coincide with $\Delta^0(p_2), \Delta^0(p_1)$, respectively, by construction. The homomorphism $\Delta^0(p_2)\rightarrow \Delta^0(p_1)$ induced by $\underline{\Delta}(\varphi)$ coincides with $\varphi$. It follows that $\underline{\Delta}(\varphi)\neq 0$. We conclude that $\mathcal{O}(\mathcal{A}_\lambda(n,r))$ is not semisimple.
This completes the proof of all claims of the theorem but the claim that the category of modules supported on $\rho^{-1}(0)$ is semisimple. The latter is an easy consequence of the observation that, in a highest weight category, we have $\operatorname{Ext}^1(L,L)=0$. We would like to point out that the argument of the previous paragraph generalizes to denominators less than $n$. So in those cases there are also simple $\bar{\mathcal{A}}_\lambda(n,r)$-modules with support of dimension $<rn-1$.
\subsection{Proof of Theorem \ref{Thm:cat_O_easy}}\label{SS_cat_O_thm} In this subsection we will prove Theorem \ref{Thm:cat_O_easy}. We have already seen in the previous subsection that if the denominator is bigger than $n$, then the category $\mathcal{O}$ is semisimple. The case of denominator $n$ will follow from a more precise statement, Theorem \ref{Thm:catO_str}.
Let us introduce a certain model category. Let $\mathcal{C}_n$ denote the nontrivial block for the category $\mathcal{O}$ for the Rational Cherednik algebra $\mathcal{H}_{1/n}(n)$ for the symmetric group $\mathfrak{S}_n$. Let us summarize some properties of this category. \begin{itemize} \item[(i)] Its coarsest highest weight poset is linearly ordered: $p_n<p_{n-1}<\ldots<p_1$. \item[(ii)] The objects $I(p_i)$ for $i>1$ are universal extensions $0\rightarrow \nabla(p_{i})\rightarrow I(p_i)\rightarrow \nabla(p_{i-1})\rightarrow 0$. Here we write $\nabla(p_i),I(p_i)$ for the costandard and the indecomposable injective objects of $\mathcal{C}_n$ labeled by $p_i$. \item[(iii)] The indecomposable tilting objects $T(p_{i-1})$ for $i>1$ coincide with $I(p_i)$. \item[(iv)] The simple objects $L(p_i)$ with $i>1$ appear in the socles of tiltings, while $\operatorname{RHom}_{\mathcal{C}_n}(L(p_1),T)$ is concentrated in homological degree $n$. \item[(v)] There is a unique simple in $\mathcal{C}_n^\vee$ that appears in the higher cohomology of $\operatorname{RHom}_{\mathcal{C}_n}(\bullet,T)$. \end{itemize}
\begin{Thm}\label{Thm:catO_str} Consider a parameter of the form $\lambda=\frac{q}{n}$ with coprime $q,n$. Then the following is true. \begin{enumerate} \item The category $\mathcal{O}(\bar{\mathcal{A}}_\lambda^\theta(n,r))$ has only one nontrivial block that is equivalent to $\mathcal{C}_{rn}$. This block contains an irreducible representation supported on $\bar{\rho}^{-1}(0)$. \item Suppose the one parameter torus used to define the category $\mathcal{O}$ is of the form $t\mapsto (\alpha(t),t)$, where $\alpha(t)=\operatorname{diag}(t^{d_1},\ldots, t^{d_r})$ with $d_i-d_{i+1}>n$ for all $i$. Then the labels in the non-trivial block of $\mathcal{O}(\bar{\mathcal{A}}_\lambda^\theta(n,r))$ are hooks $h_{i,d}=(\varnothing,\ldots, (n+1-d, 1^{d-1}),\ldots, \varnothing)$ (where $i$ is the number of the diagram where the hook appears) ordered by $h_{1,n}>h_{1,n-1}>\ldots>h_{1,1}>h_{2,n}>\ldots>h_{2,1}>\ldots>h_{r,1}$. \end{enumerate} \end{Thm} \begin{proof} The proof is in several steps. We again deal with the realization of our category as $\mathcal{O}(\bar{\mathcal{A}}_\lambda(n,r))$, where $\lambda$ is Zariski generic and such that $(\lambda,\theta)$ satisfies the abelian localization.
{\it Step 1}. As we have seen in Step 4 of the proof of Theorem \ref{Thm:fin dim}, all simples but one have support of maximal dimension; the remaining simple, which we denote by $L$, is finite dimensional. So all blocks but one consist of modules with support of maximal dimension. Now arguing as in the first two steps of the proof of Theorem \ref{Thm:fin dim}, we see that the blocks that do not contain $L$ are simple. Let $\mathcal{C}$ denote the nontrivial block. The label of $L$, which we denote by $p_{max}$, is the largest in any highest weight ordering. For all other labels $p$ the simple $L(p)$ lies in the socle of the tilting generator $T$. In other words, an analog of (iv) above holds for $\mathcal{C}$ with $rn$ instead of $n$. In the subsequent steps we will show that $\mathcal{C}\cong \mathcal{C}_{rn}$.
{\it Step 2}. Let us show that an analog of (v) holds for $\mathcal{C}$. By Corollary \ref{Cor:D_higher_codim}, the higher cohomology of $D(L)$ cannot have support of maximal dimension. It follows that the higher cohomology modules are finite dimensional and hence are direct sums of copies of a single simple in $\mathcal{O}(\bar{\mathcal{A}}_{-r-\lambda}(n,r))$. Since the Ringel duality is the same as the homological duality (up to an equivalence of abelian categories, see Proposition \ref{Prop:contr_Ringel}), we are done.
{\it Step 3}. Let us show that there is a unique minimal label for $\mathcal{C}$, say $p_{min}$. This is equivalent to $\mathcal{C}^\vee$ having a unique maximal label because the orders on $\mathcal{C}$ and $\mathcal{C}^\vee$ are opposite. But $\mathcal{C}^\vee$ is equivalent to the nontrivial block in $\mathcal{O}(\bar{\mathcal{A}}_{-r-\lambda}(n,r))$. So we are done by Step 1 (applied to $-r-\lambda$ instead of $\lambda$) of this proof.
{\it Step 4}. Let us show that (v) implies that every indecomposable tilting in $\mathcal{C}$ but one is injective. Let $R^\vee$ denote the Ringel duality equivalence $D^b(\mathcal{C}^\vee)\rightarrow D^b(\mathcal{C})^{opp}$. Let us label the tiltings by the label of the top costandard in a filtration with costandard subsequent quotients. We have $\Ext^i(L(p'),T(p))=\Hom(L(p')[i],T(p))=\Hom((R^\vee)^{-1}T(p)[i], (R^\vee)^{-1}L(p'))$. The objects $(R^{\vee})^{-1}T(p)$ are projective, so $\Ext^i(L(p'),T(p))=\Hom((R^\vee)^{-1} T(p), H^i((R^\vee)^{-1} L(p')))$. Similarly to the previous step (applied to $\mathcal{C}^\vee$ instead of $\mathcal{C}$ and $(R^{\vee})^{-1}$ instead of $R$), there is a unique indecomposable projective $P^\vee(p^\vee)$ in $\mathcal{C}^\vee$ that can map nontrivially to a higher homology of $(R^{\vee})^{-1} L(p)$. So if $(R^\vee)^{-1} T(p)\neq P^\vee(p^\vee)$, then $T(p)$ is injective.
{\it Step 5}. We remark that $\nabla(p_{max})$ is injective but not tilting, while $\nabla(p_{min})$ is tilting but not injective. So the injectives in $\mathcal{C}$ are $\nabla(p_{max})$ and $T(p)$ for $p\neq p_{min}$. Similarly, the tiltings are $I(p), p\neq p_{max}$, and $\nabla(p_{min})$.
{\it Step 6}. Let $\Lambda$ denote the highest weight poset for $\mathcal{C}$. Let us define a map $\nu: \Lambda\setminus \{p_{min}\}\rightarrow \Lambda\setminus \{p_{max}\}$. It follows from Step 5 that the socle of any tilting in $\mathcal{C}$ is simple. By definition, $\nu(p)$ is such that $L(\nu(p))$ is the socle of $T(p)$. We remark that $\nu(p)\leqslant p$ for any highest weight order.
{\it Step 7}. Let us show that any element $p\in \Lambda$ has the form $\nu^i(p_{max})$. Assume the contrary and let us pick the maximal element not of this form, say $p'$. Since $p'\neq p_{max}$, we see that $L(p')$ lies in the socle of some tilting. But the socle of any indecomposable tilting is simple. So $\nabla(p')$ is a bottom term of a filtration with costandard subsequent quotients. By the definition of $\nu$ and the choice of $p'$, $\nabla(p')$ is tilting itself. Any indecomposable tilting but $\nabla(p_{min})$ is injective and we cannot have a costandard that is injective and tilting simultaneously. So $p'=p_{min}$. But let us pick a minimal element $p''$ in $\Lambda\setminus \{p_{min}\}$. By the above in this step, $\nu(p'')<p''$. So $\nu(p'')=p_{min}$. The claim in the beginning of the step is established. This proves (i) for $\mathcal{C}$.
{\it Step 8}. (ii) for $\mathcal{C}$ follows from Step 7 and (iii) follows from (ii) and Step 5.
{\it Step 9}. Let us show that $\#\Lambda=rn$. The minimal injective resolution for $\nabla(p_{min})$ has length $\#\Lambda$, all injectives there are different, and the last term is $\nabla(p_{max})$. It follows that $\operatorname{RHom}(L(p_{max}),\nabla(p_{min}))$ is concentrated in homological degree $\#\Lambda-1$. The other tiltings are injectives and $\operatorname{RHom}$'s with them amount to $\Hom$'s. Since $\operatorname{RHom}(L(p_{max}),T)$ is concentrated in homological degree $rn-1$ (because of the coincidence of the Ringel and the homological dualities), we are done.
{\it Step 10}. Let us complete the proof of (1). Let us order the labels in $\Lambda$ decreasingly, $p_1>\ldots>p_{rn}$. Using (ii) we get the following claims. \begin{itemize} \item $\operatorname{End}(I(p_i))=\C[x]/(x^2)$ for $i>1$ and $\operatorname{End}(I(p_1))=\C$. \item $\operatorname{Hom}(I(p_i),I(p_j))$ is 1-dimensional if
$|i-j|=1$ and is $0$ if $|i-j|>1$. \end{itemize} Choose some basis elements $a_{i,i+1}, i=1,\ldots,rn-1$ in $\operatorname{Hom}(I(p_i), I(p_{i+1}))$ and also basis elements $a_{i+1,i}\in \operatorname{Hom}(I(p_{i+1}),I(p_i))$. We remark that the image of the composition map $\Hom(I(p_i),I(p_{i+1}))\times \Hom(I(p_{i+1}),I(p_i)) \rightarrow \operatorname{End}(I(p_i))$ spans the maximal ideal. Choose generators $a_{ii}$ in the maximal ideals of $\End(I(p_i)), i=2,\ldots,rn$. Normalize $a_{21}$ by requiring that $a_{21}a_{12}=a_{22}$; automatically, $a_{12}a_{21}=0$. Normalize $a_{32}$ by $a_{23}a_{32}=a_{22}$ and then normalize $a_{33}$ by $a_{33}=a_{32}a_{23}$. We continue normalizing $a_{i+1,i}$ and $a_{i+1,i+1}$ in this way. We then recover the multiplication table in $\operatorname{End}(\bigoplus_i I(p_i))$ in a unique way. This completes the proof of (1).
{\it Step 11}. Now let us prove (2). Let us check that the labeling set $\Lambda$ for the nontrivial block of $\mathcal{O}(\bar{\mathcal{A}}^\theta_\lambda(n,r))$ consists of hooks. For this, it is enough to check that $\Delta(h_{i,d})$ does not form a block. This, in turn, will follow if we check that there is a nontrivial homomorphism between $\Delta(h_{i,d})$ and some other $\Delta(h_{i,d'})$. This is done similarly to the second paragraph of Step 4 in the proof of Theorem \ref{Thm:fin dim}. Now, according to \cite{Korb}, the hooks are ordered as specified in (2) with respect to the geometric order on the torus fixed points in $\M^\theta(n,r)$ (note that the sign conventions here and in \cite{Korb} are different).
\end{proof}
\begin{Rem}\label{Rem:simpl_label} We can determine the label of the simple supported on $\bar{\rho}^{-1}(0)$ in the category $\mathcal{O}$ corresponding to an arbitrary generic torus. Namely, note that $\bar{\rho}^{-1}(0)$ coincides with the closure of a single contracting component and that contracting component corresponds to the maximal point. Now we can use results of \cite{Korb} to find a label of the point: it always has only one nontrivial partition and this partition is either $(n)$ or $(1^n)$. \end{Rem}
\section{Localization theorems in the Gieseker case}\label{S_loc} In this section we prove Theorem \ref{Thm:loc}. The proof is in the following steps. \begin{itemize} \item We apply results of McGerty and Nevins, \cite{MN_ab}, to show that, first, if the abelian localization fails for $(\lambda,\theta)$, then $\lambda$ is a rational number with denominator not exceeding $n$, and, second, the parameters $\lambda=\frac{q}{m}$ with $m\leqslant n$ and $-r<\lambda<0$ are indeed singular and the functor $\Gamma_\lambda^\theta$ is exact when $\lambda>-r, \theta>0$ or $\lambda<0,\theta<0$. Thanks to an isomorphism $\mathcal{A}_{\lambda}^\theta(n,r)\cong \mathcal{A}_{-\lambda-r}^{-\theta}(n,r)$, this reduces the conjecture to checking that the abelian localization holds for $\lambda=\frac{q}{m}$ with $q\geqslant 0, m\leqslant n$. \item Then we reduce the proof to the case when the denominator is precisely $n$ and $\lambda,\theta>0$. \item Then we will study a connection between the algebras $\mathsf{C}_\alpha(\bar{\mathcal{A}}_{\lambda}(n,r)), \Gamma(\mathsf{C}_\alpha(\bar{\mathcal{A}}^\theta_{\lambda}(n,r)))$. We will show that the numbers of simples in the categories $\mathcal{O}$ for these algebras coincide. We deduce the localization theorem from there. \end{itemize}
The last step is a crucial one and it does not generalize to other quiver varieties.
\subsection{Results of McGerty and Nevins and consequences}\label{SS_MN_appl}
In \cite{MN_ab}, McGerty and Nevins found a sufficient condition for the functor $\Gamma_\lambda^\theta:\mathcal{A}_\lambda^\theta(n,r)\operatorname{-mod}\rightarrow \mathcal{A}_\lambda(n,r)\operatorname{-mod}$ to be exact (they were dealing with more general Hamiltonian reductions but we will only need the Gieseker case). Let us explain what their results give in the case of interest to us. Consider the quotient functors $\pi_\lambda: D_R\operatorname{-mod}^{G,\lambda}\twoheadrightarrow \mathcal{A}_\lambda(n,r)\operatorname{-mod}$ and $\pi_\lambda^\theta:D_R\operatorname{-mod}^{G,\lambda}\twoheadrightarrow \mathcal{A}_\lambda^\theta(n,r)\operatorname{-mod}$.
\begin{Prop}\label{Prop:MN} The inclusion $\ker \pi_{\lambda}^{\det}\subset \ker \pi_\lambda$ holds provided $\lambda>-r$. Similarly, $\ker\pi_{\lambda}^{\det^{-1}} \subset \ker\pi_{\lambda}$ provided $\lambda< 0$. \end{Prop}
I would like to thank Dmitry Korb for explaining to me the required modifications to \cite[Section 8]{MN_ab}.
\begin{proof} We will consider the case $\theta=\det$; the opposite case follows from $\mathcal{A}_\lambda^{-\theta}(n,r)\cong \mathcal{A}_{-r-\lambda}^\theta(n,r)$. The proof closely follows \cite[Section 8]{MN_ab}, where the case of $r=1$ is considered. Instead of $R=\operatorname{End}(V)\oplus \operatorname{Hom}(V,W)$ they use $R'=\operatorname{End}(V)\oplus \operatorname{Hom}(W,V)$; then, thanks to the partial Fourier transform, we have $D(R)\operatorname{-mod}^{G,\lambda}\cong D(R')\operatorname{-mod}^{G,\lambda+r}$. The set of weights in $R'$ for a maximal torus $T\subset \GL(V)$ is independent of $r$, so we have the same Kempf-Ness subgroups as in the case $r=1$: it is enough to consider the subgroups $\beta$ with tangent vectors (in the notation of \cite[Section 8]{MN_ab}) $e_1+\ldots+e_k$. The shift in {\it loc.cit.} becomes $\frac{rk}{2}$ (in the computation of {\it loc.cit.} we need to take the second summand $r$ times, that is all that changes). So we get that $\ker \pi_{\lambda}^{\det}\subset \ker \pi_\lambda$ provided $k(-\frac{r}{2}-\lambda)\not\in \frac{rk}{2}+\mathbb{Z}_{\geqslant 0}$ for all possible $k$, that is, for $1\leqslant k\leqslant n$ (the number $-\frac{r}{2}-\lambda$ is $c'$ in {\it loc.cit.}). The condition simplifies to $\lambda\not\in -r-\frac{1}{k}\mathbb{Z}_{\geqslant 0}$. This implies the claim of the proposition. \end{proof}
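For the reader's convenience, let us spell out the elementary simplification used at the end of the proof above (its content is exactly what is stated there; we only make the arithmetic explicit):
$$k\Big(-\frac{r}{2}-\lambda\Big)\in \frac{rk}{2}+\mathbb{Z}_{\geqslant 0}\;\Longleftrightarrow\; -k\lambda\in rk+\mathbb{Z}_{\geqslant 0}\;\Longleftrightarrow\; \lambda\in -r-\frac{1}{k}\mathbb{Z}_{\geqslant 0}.$$
In particular, if $\lambda>-r$, then $\lambda\not\in -r-\frac{1}{k}\mathbb{Z}_{\geqslant 0}$ for every $k\geqslant 1$, which gives the first inclusion of the proposition.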
\subsection{Reduction to denominator $n$ and singular parameters}\label{SS_loc_red_to_n} Proposition \ref{Prop:MN} allows us to show that certain parameters are singular. \begin{Cor}\label{Cor:sing} The parameters $\lambda$ with denominator $\leqslant n$ and $-r<\lambda<0$ are singular. \end{Cor} \begin{proof} Assume the contrary. Since $R\Gamma_\lambda^{\pm \theta}$ are equivalences and $\Gamma_\lambda^{\pm \theta}$ are exact, we see that $\Gamma_{\lambda}^{\pm \theta}$ are equivalences of abelian categories. From the inclusions $\ker \pi_{\lambda}^{\pm \theta}\subset \ker\pi_\lambda$, we deduce that the functors $\pi_{\lambda}^{\pm \theta}$ are isomorphic. So the wall-crossing functor $\mathfrak{WC}_{\lambda\rightarrow \lambda^-}=\pi_{\lambda^-}^{-\theta}\circ (\C_{\lambda^--\lambda}\otimes\bullet)\circ L\pi_{\lambda}^{\theta*}$ (see \cite[(2.8)]{BL} for the equality) is an equivalence of abelian categories (where we modify $\lambda$ by adding a sufficiently large integer). However, we have already seen that it does shift some modules, since not all modules in $\mathcal{O}(\bar{\mathcal{A}}_\lambda(n,r))$ have support of maximal dimension (see the end of the proof of Theorem \ref{Thm:fin dim}). \end{proof}
Now let us observe that it is enough to check that the abelian localization holds for $\lambda\geqslant 0$ and $\theta>0$. This follows from an isomorphism $\mathcal{A}_{\lambda}^\theta(n,r)\cong \mathcal{A}_{-\lambda-r}^{-\theta}(n,r)$. This is an isomorphism of sheaves on $\M^\theta(n,r)\cong \M^{-\theta}(n,r)$ (see the proof of Lemma \ref{Lem:iso}).
Now let us reduce the proof of Theorem \ref{Thm:loc} to the case when $\lambda$ has denominator $n$. Let the denominator $n'$ of $\lambda$ be less than $n$. As we have seen in \cite[Section 5]{BL}, the abelian localization holds for $(\lambda,\theta>0)$ if and only if the bimodules $\mathcal{A}^0_{\lambda,\chi}(n,r):=[D(R)/D(R)\{x_R-\langle\lambda,x\rangle\}]^{G,\chi}, \mathcal{A}^0_{\lambda+\chi,-\chi}(n,r)$ with
$\chi\gg 0$ define mutually dual Morita equivalences, equivalently, the natural homomorphisms \begin{equation}\label{eq:nat_homs} \begin{split}&\mathcal{A}^0_{\lambda,\chi}(n,r)\otimes_{\mathcal{A}_{\lambda}(n,r)} \mathcal{A}^0_{\lambda+\chi,-\chi}(n,r)\rightarrow \mathcal{A}_{\lambda+\chi}(n,r),\\ &\mathcal{A}^0_{\lambda+\chi,-\chi}(n,r)\otimes_{\mathcal{A}_{\lambda+\chi}(n,r)}\mathcal{A}^0_{\lambda,\chi}(n,r)\rightarrow \mathcal{A}_{\lambda}(n,r) \end{split} \end{equation} are isomorphisms.
Assume the contrary. Let $K^1,C^1,K^2,C^2$ denote the kernels and the cokernels of the first and of the second homomorphism, respectively. If one of these bimodules is nontrivial, then we can find $x\in \M(n,r)$ such that $K^i_{\dagger,x},C^i_{\dagger,x}$ are finite dimensional and at least one of these bimodules is nonzero. From the classification of finite dimensional irreducibles, we see that the slice algebras must be of the form $\bar{\mathcal{A}}_{?}(n',r)^{\otimes k}$. But then $\mathcal{A}^0_{\lambda+\chi,-\chi}(n,r)_{\dagger,x}= \bar{\mathcal{A}}^0_{\lambda+\chi,-\chi}(n',r)^{\otimes k}, \mathcal{A}^0_{\lambda,\chi}(n,r)_{\dagger,x}= \bar{\mathcal{A}}^0_{\lambda,\chi}(n',r)^{\otimes k}$. Further, applying $\bullet_{\dagger,x}$ to (\ref{eq:nat_homs}) we again get natural homomorphisms. But the localization theorem holds for the algebra $\bar{\mathcal{A}}_{\lambda}(n',r)$ thanks to our inductive assumption, so the homomorphisms of the $\bar{\mathcal{A}}_\lambda(n',r)^{\otimes k}$-bimodules are isomorphisms. This contradiction justifies the reduction to denominator $n$.
\subsection{Number of simples in $\mathcal{O}(\mathcal{A}_\lambda(n,r))$}\label{SS_loc_simpl_numb} So we need to prove that the localization theorem holds for positive parameters $\lambda$ with denominator $n$ (the case $\lambda=0$ occurs only if $n=1$ and in that case this is a classical localization theorem for differential operators on projective spaces). We will derive the proof from the claim that the number of simple objects in the categories $\mathcal{O}(\bar{\mathcal{A}}_\lambda(n,r))$ and $\mathcal{O}(\bar{\mathcal{A}}_\lambda^\theta(n,r))$ is the same. For this we will need to study the natural homomorphism $\varphi:\mathsf{C}_\alpha(\bar{\mathcal{A}}_\lambda(n,r))\rightarrow \Gamma(\mathsf{C}_\alpha(\bar{\mathcal{A}}^\theta_\lambda(n,r)))$. Here, as before, $\alpha:\C^\times \rightarrow \GL(r)$ is of the form $t\mapsto (t^{d_1},\ldots,t^{d_r})$, where $d_1\gg d_2\gg\ldots\gg d_r$.
Recall that $\Gamma(\mathsf{C}_\alpha(\bar{\mathcal{A}}^\theta_\lambda(n,r)))=\bigoplus \bar{\mathcal{A}}_\lambda(n_1,\ldots,n_r;r)$, where the summation is taken over all compositions $n=n_1+\ldots+n_r$ and $\bar{\mathcal{A}}_\lambda(n_1,\ldots,n_r;r)\otimes D(\C)=\bigotimes_{i=1}^r \mathcal{A}_{\lambda+i-1}(n_i,1)$ (the factor $D(\C)$ is embedded into the right hand side ``diagonally''). Let $\mathcal{B}$ denote the maximal finite dimensional quotient of $\Gamma(\mathsf{C}_\alpha(\bar{\mathcal{A}}_\lambda^\theta(n,r)))$.
\begin{Prop}\label{Prop:surject} The composition of $\varphi$ with the projection $\Gamma(\mathsf{C}_\alpha(\bar{\mathcal{A}}^\theta_\lambda(n,r)))\twoheadrightarrow \mathcal{B}$ is surjective. \end{Prop} \begin{proof} The proof is in several steps.
{\it Step 1}. We claim that it is sufficient to prove that the composition $\varphi_i$ of $\varphi$ with the projection $\Gamma(\mathsf{C}_\alpha(\bar{\mathcal{A}}_\lambda^\theta(n,r)))\rightarrow \bar{\mathcal{A}}_{\lambda+i}(n,1)$ is surjective. Indeed, each $\bar{\mathcal{A}}_{\lambda+i}(n,1)$, $i=0,\ldots,r-1$, has a unique finite dimensional irreducible representation. The dimensions of these representations are pairwise different, see \cite{BEG1}. Namely, if $\lambda=\frac{q}{n}$, then the dimension is $\frac{(q+n-1)!}{q!n!}$. So $\mathcal{B}$ is the sum of $r$ pairwise non-isomorphic matrix algebras. Therefore the surjectivity of the homomorphism $\mathsf{C}_\alpha(\bar{\mathcal{A}}_\lambda(n,r))\rightarrow \mathcal{B}$ follows from the surjectivity of all its $r$ components. We remark that the other summands of $\mathsf{C}_\alpha(\bar{\mathcal{A}}_\lambda(n,r))$ have no finite dimensional representations.
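To illustrate why these dimensions are pairwise different (this is only a sketch, under the natural reading that the quoted formula applies to the parameter $\lambda+i=\frac{q+in}{n}$, whose numerator is again coprime to $n$): it gives
$$\dim=\frac{(q+in+n-1)!}{(q+in)!\,n!}=\frac{1}{n}\binom{q+in+n-1}{n-1},$$
which is strictly increasing in $i$ for $n\geqslant 2$; for instance, for $n=2$ and $q=1$ one gets the dimensions $1,2,3,\ldots$ for $i=0,1,2,\ldots$. The precise statement we rely on is the one from \cite{BEG1} quoted above.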
{\it Step 2}. Generators of $\bar{\mathcal{A}}_{\lambda+i}(n,1)$ are known. Namely, recall that $\bar{\mathcal{A}}_{\lambda+i}(n,1)$ is the spherical subalgebra in the Cherednik algebra $H_c(n)$ for the reflection representation $\mathfrak{h}$ of $\mathfrak{S}_n$ with $c=\lambda+i$. The latter is generated by $\mathfrak{h},\mathfrak{h}^*$. Then the algebra $eH_c(n)e$ is generated by $S(\mathfrak{h})^W,S(\mathfrak{h}^*)^W$, see \cite{EG}. On the level of quantum Hamiltonian reduction, $S(\mathfrak{h})^W$ coincides with the image of $S(\mathfrak{g})^G$, while $S(\mathfrak{h}^*)^W$ coincides with the image of $S(\mathfrak{g}^*)^G$. Here we write $\mathfrak{g}$ for $\mathfrak{sl}_n$. We will show that these images lie in the image of $\varphi_i:\mathsf{C}_\alpha(\bar{\mathcal{A}}_{\lambda}(n,r))\rightarrow \bar{\mathcal{A}}_{\lambda+i}(n,1)$; this will establish the surjectivity required in Step 1.
{\it Step 3}. Let us produce a natural homomorphism $S(\mathfrak{g}^*)^G\rightarrow \mathsf{C}_\alpha(\bar{\mathcal{A}}_{\lambda}(n,r))$. First of all, recall that $\bar{\mathcal{A}}_{\lambda}(n,r)$ is a quotient of $D(\mathfrak{g}\oplus (\C^{*n})^{\oplus r})^{G}$. The algebra $S(\mathfrak{g}^*)^G$ is included into $D(\mathfrak{g}\oplus (\C^{*n})^{\oplus r})^G$ as the algebra of invariant functions on $\mathfrak{g}$. So we get a homomorphism $S(\mathfrak{g}^*)^G\rightarrow \bar{\mathcal{A}}_{\lambda}(n,r)$. Since the $\C^\times$-action $\alpha$ used to form $\mathsf{C}_\alpha(\bar{\mathcal{A}}_\lambda(n,r))$ is nontrivial only on $(\C^{*n})^{\oplus r}$, we see that the image of $S(\mathfrak{g}^*)^G$ lies in $\bar{\mathcal{A}}_{\lambda}(n,r)^{\alpha(\C^\times)}$. So we get a homomorphism $\iota: S(\mathfrak{g}^*)^G\rightarrow \mathsf{C}_\alpha(\bar{\mathcal{A}}_\lambda(n,r))$.
{\it Step 4}. We claim that $\varphi_i\circ \iota$ coincides with the inclusion $S(\mathfrak{g}^*)^G\rightarrow \bar{\mathcal{A}}_{\lambda+i}(n,1)$. We can filter the algebra $D(\mathfrak{g}\oplus (\C^{*n})^{\oplus r})$ by the order of a differential operator. This induces filtrations on $\bar{\mathcal{A}}_{\lambda}(n,r),\bar{\mathcal{A}}^\theta_\lambda(n,r)$. We have similar filtrations on the algebras $\bar{\mathcal{A}}_{\lambda+i}(n,1)$. The filtrations on $\bar{\mathcal{A}}_\lambda(n,r),\bar{\mathcal{A}}_\lambda^\theta(n,r)$ are preserved by $\alpha$ and hence we have filtrations on $\mathsf{C}_\alpha(\bar{\mathcal{A}}_\lambda(n,r)),\Gamma(\mathsf{C}_\alpha(\bar{\mathcal{A}}_\lambda^\theta(n,r)))$. It is clear from the construction of the projection $\Gamma(\mathsf{C}_\alpha(\bar{\mathcal{A}}_\lambda^\theta(n,r)))\rightarrow \bar{\mathcal{A}}_{\lambda+i}(n,1)$ that it is compatible with the filtrations. On the other hand, the images of $S(\mathfrak{g}^*)^G$ in both $\mathsf{C}_\alpha(\bar{\mathcal{A}}_{\lambda}(n,r)),\bar{\mathcal{A}}_{\lambda+i}(n,1)$ lie in filtration degree $0$. So it is enough to prove the coincidence of the homomorphisms in the beginning of the step after passing to associated graded algebras.
{\it Step 5}. The associated graded homomorphisms coincide with analogous homomorphisms defined on the classical level. Recall that the components of $\M^\theta(n,r)^{\alpha(\C^\times)}$ that are Hilbert schemes are realized as follows. Pick an eigenbasis $w_1,\ldots,w_r$ for the fixed $r$-dimensional torus in $\operatorname{GL}_r$. Then the $i$th component that is a Hilbert scheme consists of $G$-orbits of $(A,B,0,j)$, where $j:\C^n\rightarrow \C^r$ is a map with image in $\C w_i$. In particular, the homomorphism $S(\mathfrak{g}^*)^G\rightarrow \operatorname{gr} \mathcal{A}_{\lambda+i}(n,1)$ is dual to the morphism given by $(A,B,0,j)\mapsto A$.
On the other hand, the component of $\M^\theta(n,r)^{\alpha(\C^\times)}$ under consideration maps onto $\M(n,r)\quo \alpha(\C^\times)$ (via sending the orbit of $(A,B,0,j)$ to the orbit of the same element). The corresponding homomorphism of algebras is the associated graded of $\bar{\mathcal{A}}_\lambda(n,r)^{\alpha(\C^\times)}\rightarrow \bar{\mathcal{A}}_{\lambda+i}(n,1)$. Then we have the morphism $\M(n,r)\quo \alpha(\C^\times)\rightarrow \mathfrak{g}\quo G$ given by $(A,B,0,j)\mapsto A$. The corresponding homomorphism of algebras is the associated graded of $S(\mathfrak{g}^*)^G\rightarrow \bar{\mathcal{A}}_\lambda(n,r)^{\alpha(\C^\times)}$. We have checked that the associated graded homomorphism of $\varphi_i\circ\iota:S(\mathfrak{g}^*)^G\rightarrow \bar{\mathcal{A}}_{\lambda+i}(n,1)$ coincides with that of the embedding $S(\mathfrak{g}^*)^G\rightarrow \bar{\mathcal{A}}_{\lambda+i}(n,1)$. This proves the claim of Step 4.
{\it Step 6}. The coincidence of similar homomorphisms $S(\mathfrak{g})^G\rightarrow \bar{\mathcal{A}}_{\lambda+i}(n,1)$ is established analogously. The proof of the surjectivity of $\mathsf{C}_\alpha(\bar{\mathcal{A}}_{\lambda}(n,r))\rightarrow \bar{\mathcal{A}}_{\lambda+i}(n,1)$ is now complete. \end{proof}
We still have a Hamiltonian action of $\C^\times$ on $\mathsf{C}_\alpha(\bar{\mathcal{A}}_\lambda(n,r))$ that makes the homomorphism $\mathsf{C}_\alpha(\bar{\mathcal{A}}_\lambda(n,r))\rightarrow \Gamma(\mathsf{C}_\alpha(\bar{\mathcal{A}}_\lambda^\theta(n,r)))$ equivariant. So we can form the category $\mathcal{O}(\mathsf{C}_\alpha(\bar{\mathcal{A}}_\lambda(n,r)))$ for this action. By Lemma \ref{Lem:prec_spec}, we have $\alpha\prec^\lambda (m\alpha,1)$ for $m\gg 0$. We rescale $\alpha$ and assume that $m=1$. Recall, Lemma \ref{Lem:parab_ind}, that we have an isomorphism $\mathsf{C}_{1}(\mathsf{C}_\alpha(\bar{\mathcal{A}}_\lambda^\theta(n,r)))\cong \mathsf{C}_{(\alpha,1)}(\bar{\mathcal{A}}_\lambda^\theta(n,r))$. So there is a natural bijection between the sets of simples in $\mathcal{O}(\mathsf{C}_\alpha(\bar{\mathcal{A}}_\lambda(n,r)))$ and in $\mathcal{O}(\bar{\mathcal{A}}_\lambda(n,r))$.
\begin{Prop}\label{Prop:simple_numbers} The number of simples in $\mathcal{O}(\mathsf{C}_\alpha(\bar{\mathcal{A}}_\lambda(n,r)))$ is bigger than or equal to that in $\mathcal{O}(\Gamma(\mathsf{C}_\alpha(\bar{\mathcal{A}}_\lambda^\theta(n,r))))$. \end{Prop} \begin{proof} The proof is again in several steps.
{\it Step 1}. We have a natural homomorphism $\C[\mathfrak{g}]^G\rightarrow \bigoplus \bar{\mathcal{A}}_\lambda(n_1,\ldots,n_r;r)$. It can be described as follows. We have an identification $\C[\mathfrak{g}]^G\cong \C[\mathfrak{h}]^{\mathfrak{S}_n}$. This algebra embeds into $\bar{\mathcal{A}}_\lambda(n_1,\ldots,n_r;r)$ (that is a spherical Cherednik algebra for the group $\prod_{i=1}^r \mathfrak{S}_{n_i}$ acting on $\mathfrak{h}$) via the inclusion $\C[\mathfrak{h}]^{\mathfrak{S}_n}\subset \C[\mathfrak{h}]^{\mathfrak{S}_{n_1}\times\ldots\times \mathfrak{S}_{n_r}}$. For the homomorphism $\C[\mathfrak{g}]^G\rightarrow \bigoplus \bar{\mathcal{A}}_\lambda(n_1,\ldots,n_r;r)$ we take the direct sum of these embeddings. Similarly to Steps 4,5 of the proof of Proposition \ref{Prop:surject}, the maps $\C[\mathfrak{g}]^G\rightarrow \mathsf{C}_\alpha(\bar{\mathcal{A}}_\lambda(n,r)), \Gamma(\mathsf{C}_\alpha(\bar{\mathcal{A}}_\lambda^\theta(n,r)))$ are intertwined by the homomorphism $\mathsf{C}_\alpha(\bar{\mathcal{A}}_\lambda(n,r))\rightarrow \Gamma(\mathsf{C}_\alpha(\bar{\mathcal{A}}_\lambda^\theta(n,r)))$.
{\it Step 2}. Let $\delta\in \C[\mathfrak{g}]^G$ be the discriminant. We claim that $\mathsf{C}_\alpha(\bar{\mathcal{A}}_\lambda(n,r))[\delta^{-1}]\xrightarrow{\sim} \Gamma(\mathsf{C}_\alpha(\bar{\mathcal{A}}_\lambda^\theta(n,r)))[\delta^{-1}]$. Since $\delta$ is $\alpha(\C^\times)$-stable, we have $\mathsf{C}_\alpha(\bar{\mathcal{A}}_\lambda(n,r))[\delta^{-1}]=\mathsf{C}_\alpha(\bar{\mathcal{A}}_\lambda(n,r)[\delta^{-1}])$. We will describe the algebra $\mathsf{C}_\alpha(\bar{\mathcal{A}}_\lambda(n,r)[\delta^{-1}])$ explicitly and see that $\mathsf{C}_\alpha(\bar{\mathcal{A}}_\lambda(n,r)[\delta^{-1}])\xrightarrow{\sim} \Gamma(\mathsf{C}_\alpha(\bar{\mathcal{A}}_\lambda^\theta(n,r)))[\delta^{-1}]$.
{\it Step 3.} We start with the description of $\bar{\mathcal{A}}_\lambda(n,r)[\delta^{-1}]$. Let $\mathfrak{g}^{reg}$ denote the locus of the regular semisimple elements in $\mathfrak{g}$. Then $\bar{\mathcal{A}}_\lambda(n,r)[\delta^{-1}]=D(\mathfrak{g}^{reg}\times \operatorname{Hom}(\C^n,\C^r))\red_\lambda G$. Here $\red_\lambda$ denotes the quantum Hamiltonian reduction with parameter $\lambda$.
Recall that $\mathfrak{g}^{reg}=G\times_{N_G(\mathfrak{h})}\mathfrak{h}^{reg}$ and so $\mathfrak{g}^{reg}\times \Hom(\C^n,\C^r)=G\times_{N_G(\mathfrak{h})}(\mathfrak{h}^{reg}\times \operatorname{Hom}(\C^n,\C^r))$. It follows that \begin{align*}&D(\mathfrak{g}^{reg}\times \operatorname{Hom}(\C^n,\C^r))\red_\lambda G=D(\mathfrak{h}^{reg}\times \Hom(\C^n,\C^r))\red_\lambda N_G(\mathfrak{h})=\\ &(D(\mathfrak{h}^{reg})\otimes D(\operatorname{Hom}(\C^n,\C^r))\red_\lambda H)^{\mathfrak{S}_n}= \left(D(\mathfrak{h}^{reg})\otimes D^\lambda(\mathbb{P}^{r-1})^{\otimes n}\right)^{\mathfrak{S}_n}.\end{align*} Here, in the second line, we write $H$ for the Cartan subgroup of $G$ and take the diagonal action of $\mathfrak{S}_n$. In the last expression, it permutes the tensor factors. A similar argument shows that $\bar{\M}^\theta(n,r)_{\delta}= (T^*(\mathfrak{h}^{reg})\times T^*(\mathbb{P}^{r-1})^{n})/\mathfrak{S}_n$ and the restriction of $\bar{\mathcal{A}}^\theta_\lambda(n,r)$ to this open subset is $\left(D_{\mathfrak{h}^{reg}}\otimes (D^\lambda_{\mathbb{P}^{r-1}})^{\otimes n}\right)^{\mathfrak{S}_n}$.
{\it Step 4}. Now we are going to describe the algebra $\mathsf{C}_\alpha(\left(D(\mathfrak{h}^{reg})\otimes D^\lambda(\mathbb{P}^{r-1})^{\otimes n}\right)^{\mathfrak{S}_n})$. First of all, we claim that \begin{equation}\label{eq:Ca_eq}\mathsf{C}_\alpha(\left(D(\mathfrak{h}^{reg})\otimes D^\lambda(\mathbb{P}^{r-1})^{\otimes n}\right)^{\mathfrak{S}_n})= (\mathsf{C}_\alpha\left(D(\mathfrak{h}^{reg})\otimes D^\lambda(\mathbb{P}^{r-1})^{\otimes n}\right))^{\mathfrak{S}_n}.\end{equation} There is a natural homomorphism from the left hand side to the right hand side. To prove that it is an isomorphism one can argue as follows. First, note that since the $\mathfrak{S}_n$-action on $\mathfrak{h}^{reg}$ is free, we have $$D(\mathfrak{h}^{reg})\otimes D^\lambda(\mathbb{P}^{r-1})^{\otimes n}=D(\mathfrak{h}^{reg})\otimes_{D(\mathfrak{h}^{reg})^{\mathfrak{S}_n}}\left(D(\mathfrak{h}^{reg})\otimes D^\lambda(\mathbb{P}^{r-1})^{\otimes n}\right)^{\mathfrak{S}_n}.$$ Since $D(\mathfrak{h}^{reg})$ is $\alpha(\C^\times)$-invariant, the previous equality implies (\ref{eq:Ca_eq}).
{\it Step 5}. Now let us describe $\mathsf{C}_\alpha\left(D(\mathfrak{h}^{reg})\otimes D^\lambda(\mathbb{P}^{r-1})^{\otimes n}\right)$. We have $\mathsf{C}_\alpha\left(D(\mathfrak{h}^{reg})\otimes D^\lambda(\mathbb{P}^{r-1})^{\otimes n}\right)=D(\mathfrak{h}^{reg})\otimes \mathsf{C}_\alpha\left((D^\lambda(\mathbb{P}^{r-1}))^{\otimes n}\right)$. The $\C^\times$-action on the tensor product $(D^\lambda(\mathbb{P}^{r-1}))^{\otimes n}$ is diagonal and it is easy to see that $\mathsf{C}_\alpha\left((D^\lambda(\mathbb{P}^{r-1}))^{\otimes n}\right)=\left(\mathsf{C}_\alpha(D^\lambda(\mathbb{P}^{r-1}))\right)^{\otimes n}$.
So we need to compute $\mathsf{C}_\alpha(D^\lambda(\mathbb{P}^{r-1}))$. We claim that this algebra is isomorphic to $\C^{\oplus r}$. Indeed, $D^\lambda(\mathbb{P}^{r-1})$ is a quotient of the central reduction $U_{\tilde{\lambda}}(\mathfrak{sl}_r)$ of $U(\mathfrak{sl}_r)$ at the central character $\tilde{\lambda}:=\lambda\omega_{r}$. We remark that $\lambda\omega_r+\rho$ is regular because $\lambda\geqslant 0$. We have $\mathsf{C}_\alpha(U_{\tilde{\lambda}}(\mathfrak{sl}_r))=\C^{\oplus r!}$ and $\mathsf{C}_\alpha(D^\lambda(\mathbb{P}^{r-1}))$ is a quotient of that. The number of irreducible representations of $\mathsf{C}_\alpha(D^\lambda(\mathbb{P}^{r-1}))$ equals the number of simples in the category $\mathcal{O}$ for $D^\lambda(\mathbb{P}^{r-1})$, which coincides with $r$ since the localization holds. The isomorphism $\mathsf{C}_\alpha(D^\lambda(\mathbb{P}^{r-1}))\cong\C^{\oplus r}$ follows.
{\it Step 6}. So we see that $\mathsf{C}_\alpha(\bar{\mathcal{A}}_\lambda(n,r)[\delta^{-1}])= \left(D(\mathfrak{h}^{reg})\otimes (\C^{\oplus r})^{\otimes n}\right)^{\mathfrak{S}_n}$. For similar reasons, we have $\Gamma([\bar{\M}^\theta(n,r)_\delta]^{\alpha(\C^\times)}, \mathsf{C}_\alpha(\bar{\mathcal{A}}^\theta_\lambda(n,r)))= \left(D(\mathfrak{h}^{reg})\otimes (\C^{\oplus r})^{\otimes n}\right)^{\mathfrak{S}_n}$. The natural homomorphism \begin{equation}\label{eq:local_iso1}\mathsf{C}_\alpha(\bar{\mathcal{A}}_\lambda(n,r)[\delta^{-1}])\rightarrow \Gamma((\bar{\M}^\theta(n,r)_\delta)^{\alpha(\C^\times)}, \mathsf{C}_\alpha(\bar{\mathcal{A}}^\theta_\lambda(n,r)))\end{equation} is an isomorphism by the previous two steps. Also we have a natural homomorphism \begin{equation}\label{eq:local_iso2}\Gamma(\mathsf{C}_\alpha(\bar{\mathcal{A}}^\theta_\lambda(n,r)))[\delta^{-1}]\rightarrow \Gamma([\bar{\M}^\theta(n,r)_\delta]^{\alpha(\C^\times)}, \mathsf{C}_\alpha(\bar{\mathcal{A}}^\theta_\lambda(n,r))).\end{equation} The latter homomorphism is an isomorphism thanks to the explicit description of $\mathsf{C}_\alpha(\bar{\mathcal{A}}^\theta_\lambda(n,r))$. Indeed, $\mathsf{C}_\alpha(\bar{\mathcal{A}}^\theta_\lambda(n,r))$ is the direct sum of quantizations of products of Hilbert schemes. The morphism $\prod \operatorname{Hilb}_{n_i}(\C^2) \rightarrow \prod \C^{2n_i}/\mathfrak{S}_{n_i}$ is an isomorphism over the non-vanishing locus of $\delta$. This implies that (\ref{eq:local_iso2}) is an isomorphism.
By the construction, (\ref{eq:local_iso1}) is the composition of $\mathsf{C}_\alpha(\bar{\mathcal{A}}_\lambda(n,r)[\delta^{-1}])\rightarrow \Gamma(\mathsf{C}_\alpha(\bar{\mathcal{A}}^\theta_\lambda(n,r)))[\delta^{-1}]$ and (\ref{eq:local_iso2}). So we have proved that $\mathsf{C}_\alpha(\bar{\mathcal{A}}_\lambda(n,r))[\delta^{-1}]\rightarrow \Gamma(\mathsf{C}_\alpha(\bar{\mathcal{A}}^\theta_\lambda(n,r)))[\delta^{-1}]$ is an isomorphism.
{\it Step 7}. For $p\in \bar{\M}^\theta(n,r)^{T\times \C^\times}$ let $L^0(p)$ be the corresponding irreducible $\Gamma(\mathsf{C}_\alpha(\bar{\mathcal{A}}^\theta_\lambda(n,r)))$-module from category $\mathcal{O}$. These modules are either finite dimensional (those are parameterized by the multipartitions with one component equal to $(n)$ and the others empty) or have support of maximal dimension. It follows from Proposition \ref{Prop:surject} that all finite dimensional $L^0(p)$ restrict to pairwise non-isomorphic $\mathsf{C}_\alpha(\bar{\mathcal{A}}_\lambda(n,r))$-modules. Now consider $L^0(p)$ with support of maximal dimension. We claim that the localizations $L^0(p)[\delta^{-1}]$ are pairwise non-isomorphic simple $\Gamma(\mathsf{C}_\alpha(\bar{\mathcal{A}}^\theta_\lambda(n,r)))[\delta^{-1}]$-modules. Let us consider $p=(p^1,\ldots,p^r)$ and $p'=(p'^1,\ldots,p'^r)$ with $|p^i|=|p'^i|$ for all $i$ and show that the corresponding localizations are simple and, moreover, are isomorphic only if $p=p'$. This claim holds if we localize to the regular locus for $\prod_{i=1}^r\mathfrak{S}_{|p^i|}$. Indeed, this localization realizes the KZ functor, which is a quotient functor onto its image. So the images of $L^0(p), L^0(p')$ under this localization are simple and non-isomorphic. Then we further restrict the localizations of $L^0(p),L^0(p')$ to the locus where $x_i\neq x_j$ for all $i\neq j$. But there is no monodromy of the D-modules $L^0(p)[\delta^{-1}],L^0(p')[\delta^{-1}]$ along those additional hyperplanes and these D-modules have regular singularities everywhere. It follows that they remain simple and non-isomorphic (if $p\neq p'$).
{\it Step 8}. So we see that the $\mathsf{C}_\alpha(\bar{\mathcal{A}}_\lambda(n,r))[\delta^{-1}]$-modules $L^0(p)[\delta^{-1}]$ are simple and pairwise non-isomorphic. The $\mathsf{C}_\alpha(\bar{\mathcal{A}}_\lambda(n,r))$-module $L^0(p)$ is not a priori finitely generated but always lies in the ind-completion of the category $\mathcal{O}$ (thanks to the weight decomposition). Pick a finitely generated $\mathsf{C}_\alpha(\bar{\mathcal{A}}_\lambda(n,r))$-lattice $L^0_1(p)$ for $L^0(p)[\delta^{-1}]$ inside $L^0(p)$. This is now an object in the category $\mathcal{O}$. There is a simple constituent $\underline{L}^0(p)$ of $L^0_1(p)$ with $\underline{L}^0(p)[\delta^{-1}]=L^0(p)[\delta^{-1}]$ because the right hand side is simple. The finite dimensional modules $L^0(p)$ together with the modules of the form $\underline{L}^0(p)$ give the required number of pairwise non-isomorphic simple $\mathcal{A}_\lambda(n,r)^0$-modules. \end{proof}
\subsection{Completion of proofs}\label{SS_loc_compl} The following proposition completes the proof of Theorem \ref{Thm:loc}.
\begin{Prop}\label{Prop:loc_compl} Let $\lambda$ be a positive parameter with denominator $n$. Then the abelian localization holds for $(\lambda,\det)$. \end{Prop} \begin{proof} Let $\alpha$ be the one-parameter subgroup $t\mapsto (t^{d_1},\ldots, t^{d_r})$ with $d_1\gg\ldots\gg d_r$. Let $\beta: \C^\times\rightarrow T\times \C^\times$ have the form $t\mapsto (1,t)$. Set $\alpha'=m\alpha+\beta$ for $m\gg 0$. So we have $\alpha\prec^\lambda \alpha'$ for all $\lambda$ thanks to Lemma \ref{Lem:prec_spec}.
Since $\Gamma_\lambda^\theta: \mathcal{O}_{\alpha'}(\bar{\mathcal{A}}^\theta_\lambda(n,r)) \rightarrow \mathcal{O}_{\alpha'}(\bar{\mathcal{A}}_\lambda(n,r))$ is a quotient functor, to prove that it is an equivalence it is enough to verify that the number of simples in these two categories is the same. The number of simples in $\mathcal{O}_{\alpha'}(\bar{\mathcal{A}}_\lambda(n,r))$ coincides with that for $\mathcal{O}(\mathsf{C}_\alpha(\bar{\mathcal{A}}_\lambda(n,r)))$ thanks to Lemma \ref{Lem:parab_ind}. The latter is bigger than or equal to the number of simples for $\mathcal{O}(\bigoplus \bar{\mathcal{A}}_\lambda(n_1,\ldots,n_r;r))$, which, in its turn, coincides with the number of $r$-multipartitions of $n$ because the abelian localization holds for all summands $\bar{\mathcal{A}}_\lambda(n_1,\ldots,n_r;r)$. We deduce that the numbers of simples in $\mathcal{O}_{\alpha'}(\bar{\mathcal{A}}^\theta_\lambda(n,r))$ and in $\mathcal{O}_{\alpha'}(\bar{\mathcal{A}}_\lambda(n,r))$ coincide. So we see that $\Gamma^\theta_\lambda:\mathcal{O}_{\alpha'}(\bar{\mathcal{A}}^\theta_\lambda(n,r))\twoheadrightarrow \mathcal{O}_{\alpha'}(\bar{\mathcal{A}}_\lambda(n,r))$ is an equivalence. Now we are going to show that this implies that $\Gamma^\theta_\lambda:\bar{\mathcal{A}}_\lambda^\theta(n,r)\operatorname{-mod} \rightarrow \bar{\mathcal{A}}_\lambda(n,r)\operatorname{-mod}$ is an equivalence. Below we write $\mathcal{O}$ instead of $\mathcal{O}_{\alpha'}$.
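Schematically, the counting argument just given can be summarized as follows; here we also use that the simples of $\mathcal{O}_{\alpha'}(\bar{\mathcal{A}}^\theta_\lambda(n,r))$ are labeled by the torus fixed points, i.e., by $\mathsf{P}_r(n)$:
$$\#\operatorname{Irr}\mathcal{O}_{\alpha'}(\bar{\mathcal{A}}_\lambda(n,r))=\#\operatorname{Irr}\mathcal{O}(\mathsf{C}_\alpha(\bar{\mathcal{A}}_\lambda(n,r)))\geqslant \#\mathsf{P}_r(n)=\#\operatorname{Irr}\mathcal{O}_{\alpha'}(\bar{\mathcal{A}}^\theta_\lambda(n,r))\geqslant \#\operatorname{Irr}\mathcal{O}_{\alpha'}(\bar{\mathcal{A}}_\lambda(n,r)),$$
so all the inequalities are in fact equalities.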
Since $\Gamma_\lambda^\theta $ is an equivalence between the categories $\mathcal{O}$, we see that $\bar{\mathcal{A}}^{(\det)}_{\lambda,\chi}(n,r)\otimes_{\bar{\mathcal{A}}_\lambda(n,r)}\bullet$ and $\bar{\mathcal{A}}^{(\det)}_{\lambda+\chi,-\chi}(n,r)\otimes_{\bar{\mathcal{A}}_{\lambda+\chi}(n,r)}\bullet$ are mutually inverse equivalences between $\mathcal{O}(\bar{\mathcal{A}}_\lambda(n,r))$ and $\mathcal{O}(\bar{\mathcal{A}}_{\lambda+\chi}(n,r))$ for $\chi\gg 0$. Set $\mathcal{B}:=\bar{\mathcal{A}}^{(\det)}_{\lambda+\chi,-\chi}(n,r)\otimes_{\bar{\mathcal{A}}_{\lambda+\chi}(n,r)}\bar{\mathcal{A}}^{(\det)}_{\lambda,\chi}(n,r)$. This is a HC $\bar{\mathcal{A}}_\lambda(n,r)$-bimodule with a natural homomorphism to $\bar{\mathcal{A}}_\lambda(n,r)$ such that the induced homomorphism $\mathcal{B}\otimes_{\bar{\mathcal{A}}_\lambda(n,r)}M\rightarrow M$ is an isomorphism for any $M\in \mathcal{O}(\bar{\mathcal{A}}_\lambda(n,r))$. It follows from \cite[Proposition 5.15]{BL} that the kernel and the cokernel of $\mathcal{B}\rightarrow \bar{\mathcal{A}}_\lambda(n,r)$ have proper associated varieties and hence are finite dimensional. Let $L$ denote an irreducible finite dimensional $\bar{\mathcal{A}}_\lambda(n,r)$-module; it is unique because of the equivalence $\mathcal{O}(\bar{\mathcal{A}}_\lambda(n,r))\cong \mathcal{O}(\bar{\mathcal{A}}_{\lambda+\chi}(n,r))$. Since the homomorphism $\mathcal{B}\otimes_{\bar{\mathcal{A}}_\lambda(n,r)}L\rightarrow L$ is an isomorphism, we see that $\mathcal{B}\twoheadrightarrow \bar{\mathcal{A}}_\lambda(n,r)$. Let $K$ denote the kernel. We have an exact sequence $$\operatorname{Tor}^1_{\bar{\mathcal{A}}_\lambda(n,r)}(\bar{\mathcal{A}}_\lambda(n,r),L)\rightarrow K\otimes_{\bar{\mathcal{A}}_\lambda(n,r)}L \rightarrow \mathcal{B}\otimes_{\bar{\mathcal{A}}_\lambda(n,r)}L\rightarrow L\rightarrow 0.$$ Clearly, the first term is zero, while the last homomorphism is an isomorphism. We deduce that $K\otimes_{\bar{\mathcal{A}}_\lambda(n,r)}L=0$. But $K$ is a finite dimensional $\bar{\mathcal{A}}_\lambda(n,r)$-bimodule and hence a $\bar{\mathcal{A}}_\lambda(n,r)/\operatorname{Ann}L$-bimodule, and so its tensor product with $L$ can only be zero if $K=0$.
So we see that $\bar{\mathcal{A}}^{(\det)}_{\lambda+\chi,-\chi}(n,r)\otimes_{\bar{\mathcal{A}}_{\lambda+\chi}(n,r)}\bar{\mathcal{A}}^{(\det)}_{\lambda,\chi}(n,r)\cong \bar{\mathcal{A}}_{\lambda}(n,r)$. Similarly, $\bar{\mathcal{A}}^{(\det)}_{\lambda,\chi}(n,r)\otimes_{\bar{\mathcal{A}}_{\lambda}(n,r)}\bar{\mathcal{A}}^{(\det)}_{\lambda+\chi,-\chi}(n,r)\cong \bar{\mathcal{A}}_{\lambda+\chi}(n,r)$. It follows that $\Gamma_\lambda^\theta$ is an equivalence $\bar{\mathcal{A}}_\lambda^\theta(n,r)\operatorname{-mod}\cong \bar{\mathcal{A}}_\lambda(n,r)\operatorname{-mod}$. \end{proof}
Now we can complete the proof of (2) of Theorem \ref{Thm:fin dim}. It remains to show that $\bar{\mathcal{A}}_\lambda(n,r)$ with $-r<\lambda<0$ has no finite dimensional irreducible representations. Assume the contrary and let $L$ denote a finite dimensional irreducible representation. Since $L\Loc_\lambda^\theta(\bar{\mathcal{A}}_\lambda(n,r))=\bar{\mathcal{A}}_\lambda^\theta(n,r)$ and $R\Gamma_\lambda^\theta(\bar{\mathcal{A}}^\theta_\lambda(n,r))=\bar{\mathcal{A}}_\lambda(n,r)$, we see that $R\Gamma_\lambda^\theta\circ L\Loc_\lambda^\theta$ is the identity functor of $D^-(\bar{\mathcal{A}}_\lambda(n,r)\operatorname{-mod})$. The homology of $L\Loc_\lambda^\theta(L)$ is supported on $\bar{\rho}^{-1}(0)$. It follows that the denominator of $\lambda$ is $n$.
Recall that $\Gamma_\lambda^\theta$ is an exact functor. Since $R\Gamma_\lambda^\theta\circ L\Loc_\lambda^\theta$ is the identity, the functor $\Gamma_\lambda^\theta$ does not kill the simple $\bar{\mathcal{A}}_\lambda^\theta(n,r)$-module $\tilde{L}$ supported on $\bar{\rho}^{-1}(0)$. On the other hand, $\Gamma_\lambda^\theta$ does not kill modules whose support intersects $\bar{\M}^\theta(n,r)^{reg}$, the open subvariety in $\bar{\M}^\theta(n,r)$ where $\bar{\rho}$ is an isomorphism. In fact, every simple in $\mathcal{O}(\bar{\mathcal{A}}_\lambda^\theta(n,r))$ is either supported on $\bar{\rho}^{-1}(0)$ or its support intersects $\bar{\M}^\theta(n,r)^{reg}$. This is true when $\lambda'$ has denominator $n$ and satisfies the abelian localization theorem. Indeed, every module in $\mathcal{O}(\bar{\mathcal{A}}_{\lambda'}(n,r))$ is strictly holonomic. So if it has support of dimension $rn-1$, then this support intersects the regular locus; if not, the module is finite dimensional. Our claim about $\bar{\mathcal{A}}_\lambda^\theta(n,r)$-modules follows.
So we see that $\Gamma_\lambda^\theta$ does not kill any irreducible module in $\mathcal{O}(\bar{\mathcal{A}}^\theta_\lambda(n,r))$. So it is an equivalence. However, the proof of Proposition \ref{Prop:loc_compl} shows that this is impossible. This completes the proof of (2) of Theorem \ref{Thm:fin dim}.
\section{Affine wall-crossing and counting}\label{S_aff_wc_count} The main goal of this section is to prove Theorem \ref{Thm:counting}. As in \cite[Section 8]{BL}, the proof follows from the claim that the wall-crossing functor through the wall $\delta=0$ (the affine wall-crossing functor) is a perverse equivalence with homological shifts less than $\dim \M^\theta(v,w)$. As was pointed out in \cite[9.2]{BL}, this follows from results that we have already proved and the following claims yet to be proved. \begin{itemize} \item[(i)] Let $\mathcal{L}$ be a symplectic leaf in $\M(v,w)$. Consider the categories $\HC_{fin}(\underline{\bar{\mathcal{A}}}_{\hat{\param}}(\underline{v},\underline{w}))$ of HC bimodules over the corresponding slice algebra $\underline{\bar{\mathcal{A}}}_{\hat{\param}}(\underline{v},\underline{w})$ that are finitely generated (left and right) modules over $\C[\hat{\param}]$ and $\HC_{\overline{\mathcal{L}}}(\mathcal{A}_{\hat{\param}}(v,w))\subset \HC_{\mathcal{L}\cup Y}(\mathcal{A}_{\hat{\param}}(v,w))$ of Harish-Chandra bimodules supported on $\overline{\mathcal{L}}$ and on $\mathcal{L}\cup Y$, where we write $Y$ for the union of all leaves that do not contain $\mathcal{L}$ in their closure. Then, for $x\in \mathcal{L}$, there is a functor $\bullet^{\dagger,x}: \HC_{fin}(\underline{\bar{\mathcal{A}}}_{\hat{\param}}(\underline{v},\underline{w})) \rightarrow \HC_{\overline{\mathcal{L}}}(\mathcal{A}_{\hat{\param}}(v,w))$ that is right adjoint to $\bullet_{\dagger,x}:\HC_{\mathcal{L}\cup Y}(\mathcal{A}_{\hat{\param}}(v,w))\rightarrow \HC_{fin}(\underline{\bar{\mathcal{A}}}_{\hat{\param}}(\underline{v},\underline{w}))$. \item[(ii)] Theorem \ref{Thm:ideals} holds together with a direct analog of \cite[Lemma 5.21]{BL}. \item[(iii)] For the unique proper ideal $\J\subset \bar{\mathcal{A}}_\lambda(n,r)$, where $\lambda$ has denominator $n$ and lies outside $(-r,0)$, we have $\operatorname{Tor}^i_{\bar{\mathcal{A}}_\lambda(n,r)}(\bar{\mathcal{A}}_\lambda(n,r)/\J, \bar{\mathcal{A}}_\lambda(n,r)/\J)=\bar{\mathcal{A}}_\lambda(n,r)/\J$ if $i$ is even, between $0$ and $2nr-2$, and $0$ otherwise. \item[(iv)] The functor $\bullet_{\dagger,x}$ is faithful for $x$ and $\lambda$ specified below. \end{itemize} We take a Weil generic $\lambda$ on the hyperplane of the form $\langle \delta,\cdot\rangle=\kappa$, where $\kappa$ is a fixed rational number with denominator $n'$. The choice of $x$ is as follows. Recall the description of the symplectic leaves of $\M(v,w)$ in Subsection \ref{SS_leaves}. We want $x$ to lie in the leaf corresponding to the decomposition $r^0\oplus (r^1)^{\oplus n'} \oplus (r^2)^{\oplus n'}\oplus\ldots\oplus (r^q)^{\oplus n'}\oplus r^{q+1}\oplus\ldots\oplus r^{n-q(n'-1)}$, where $\dim r^i=\delta$ for $i=1,\ldots,q$, with $q=\lfloor n/n'\rfloor$, and $n$ is given by $$n:=\left\lfloor \frac{w\cdot v-(v,v)/2}{w\cdot \delta}\right\rfloor$$ so that $n$ is maximal with the property that $v^0=v-n\delta$ is a root of $Q^w$. We remark that the slice to $x$ is $\M(n',r)^{q}$, where $r$ is given by $r:=w\cdot \delta$.
In the proof, we may assume that $\nu$ is dominant. Indeed, if this is not the case, we can find an element $\sigma$ of $W(Q)$ such that $\sigma\nu$ is dominant; then we have a quantum LMN isomorphism $\mathcal{A}_\lambda(v,w)\cong \mathcal{A}_{\sigma\cdot \lambda}(\sigma\cdot v, w)$ and $\langle\lambda,\delta\rangle= \langle\sigma\cdot\lambda, \delta\rangle$.
\subsection{Functor $\bullet^{\dagger,x}$}\label{S_fun_up_dag} In this subsection we establish (i) above in greater generality.
Here we assume that $X\rightarrow X_0$ is a conical symplectic resolution all of whose slices are conical and satisfy the additional assumption on contracting $\C^\times$-actions from Subsection \ref{SS_HC_bimod}. Recall that under these assumptions, for a point $x\in X_0$, we can define the exact functor $\bullet_{\dagger,x}: \HC(\mathcal{A}_\param)\rightarrow \HC(\underline{\mathcal{A}}_\param)$, where we write $\param$ for $H^2(X,\C)$. In this subsection we are going to study its adjoint $\bullet^{\dagger,x}:\HC(\underline{\mathcal{A}}_\param)\rightarrow \widetilde{\HC}(\mathcal{A}_\param)$. Here $\widetilde{\HC}(\mathcal{A}_\param)$ denotes the category of $\mathcal{A}_\param$-bimodules that are sums of their HC subbimodules. The construction of the functor is similar to \cite{HC,sraco,W_dim}. Namely, we pick a HC $\underline{\mathcal{A}}_{\param}$-bimodule $\mathcal{N}$, form the Rees bimodule $\mathcal{N}_\hbar$ and its completion $\mathcal{N}_\hbar^{\wedge_0}$ at $0$. Then form the $\mathcal{A}_{\param,\hbar}$-bimodule $\mathcal{M}_\hbar$ that is the sum of all HC subbimodules in $\mathbf{A}_\hbar^{\wedge_0}\widehat{\otimes}_{\C[[\hbar]]}\mathcal{N}_\hbar^{\wedge_0}$ (this includes the condition that the Euler derivation acts locally finitely). It is easy to see that if $\hbar m\in \mathcal{M}_\hbar$, then $m\in \mathcal{M}_\hbar$. Then we set $\mathcal{N}^{\dagger,x}:=\mathcal{M}_\hbar/(\hbar-1)\mathcal{M}_\hbar$. Similarly to \cite[4.1.4]{W_dim}, we see that $\mathcal{N}\mapsto \mathcal{N}^{\dagger,x}$ is a functor and that $\operatorname{Hom}(\mathcal{M},\mathcal{N}^{\dagger,x})=\operatorname{Hom}(\mathcal{M}_{\dagger,x},\mathcal{N})$.
Our main result about the functor $\bullet^{\dagger,x}$ is the following claim.
\begin{Prop}\label{Prop:up_dag_prop} If $\mathcal{N}$ is finitely generated over $\C[\param]$, then $\mathcal{N}^{\dagger,x}\in \HC(\mathcal{A}_\param)$ and $\operatorname{V}(\mathcal{N}^{\dagger,x})=\overline{\mathcal{L}}$. Here $\mathcal{L}$ stands for the symplectic leaf of $X$ containing $x$. \end{Prop} \begin{proof} The proof is similar to analogous proofs in \cite[3.3,3.4]{HC},\cite[3.7]{sraco}. As in those proofs, it is enough to show that if $\mathcal{M}$ is a Poisson $\C[\mathcal{L}]^{\wedge_x}$-module of finite rank equipped with an Euler derivation, then the maximal Poisson $\C[\mathcal{L}]$-submodule of $\mathcal{M}$ that is the sum of its finitely generated Poisson $\C[\overline{\mathcal{L}}]$-submodules (with locally finite action of the Euler derivation) is finitely generated. By an Euler derivation, we mean an endomorphism $\mathsf{eu}$ of $\mathcal{M}$ such that \begin{itemize} \item $\mathsf{eu}(am)=(\mathsf{eu}a)m+a(\mathsf{eu}m)$, \item $\mathsf{eu}\{a,m\}=\{\mathsf{eu}a, m\}+\{a, \mathsf{eu}m\}-d\{a,m\}$. \end{itemize} Here by $\mathsf{eu}$ on $\C[\mathcal{L}]^{\wedge_x}$ we mean the derivation induced by the contracting $\C^\times$-action.
{\it Step 1}. First, according to Namikawa, \cite{Namikawa_fund}, the algebraic fundamental group $\pi^{alg}_1(\mathcal{L})$ is finite. Let $\widetilde{\mathcal{L}}$ be the corresponding Galois covering of $\mathcal{L}$. Being the integral closure of $\C[\overline{\mathcal{L}}]$ in a finite extension of $\C(\mathcal{L})$, the algebra $\C[\widetilde{\mathcal{L}}]$ is finite over $\C[\overline{\mathcal{L}}]$. The group $\pi_1(\widetilde{\mathcal{L}})$ has no homomorphisms to $\operatorname{GL}_m$ by the choice of $\widetilde{\mathcal{L}}$. Also let us note that the $\C^\times$-action on $\mathcal{L}$ lifts to a $\C^\times$-action on $\widetilde{\mathcal{L}}$ possibly after replacing $\C^\times$ with some covering torus. We remark that the action produces a positive grading on $\C[\widetilde{\mathcal{L}}]$.
{\it Step 2}. Let $\mathcal{V}$ be a weakly $\C^\times$-equivariant $D_{\widetilde{\mathcal{L}}}$-module. We claim that $\mathcal{V}$ is the sum of several copies of $\mathcal{O}_{\widetilde{\mathcal{L}}}$. Indeed, this is so in the analytic category: $\mathcal{V}^{an}:=\mathcal{O}^{an}_{\widetilde{\mathcal{L}}}\otimes_{\mathcal{O}_{\widetilde{\mathcal{L}}}}\mathcal{V}\cong \mathcal{O}_{\widetilde{\mathcal{L}}}^{an}\otimes\mathcal{V}^{fl}$ (where the superscript ``fl'' means flat sections) because of the assumption on $\pi_1(\widetilde{\mathcal{L}})$. But then the space $\mathcal{V}^{fl}$ carries a holomorphic $\C^\times$-action that has to be diagonalizable and by characters. So we have an embedding $\Gamma(\mathcal{V})\hookrightarrow \Gamma(\mathcal{V}^{an})^{\C^\times-fin}$. Since $\operatorname{Spec}(\C[\widetilde{\mathcal{L}}])$ is normal, any analytic function on $\widetilde{\mathcal{L}}$ extends to $\operatorname{Spec}(\C[\widetilde{\mathcal{L}}])$. Since the grading on $\C[\widetilde{\mathcal{L}}]$ is positive, any holomorphic $\C^\times$-semiinvariant function must be polynomial. So the embedding above reduces to $\Gamma(\mathcal{V})\hookrightarrow \C[\widetilde{\mathcal{L}}]\otimes \mathcal{V}^{fl}$. The generic rank of $\Gamma(\mathcal{V})$ coincides with $\dim \mathcal{V}^{fl}$. Since the module $\C[\widetilde{\mathcal{L}}]\otimes \mathcal{V}^{fl}$ has no torsion, we see that $\Gamma(\mathcal{V})= \C[\widetilde{\mathcal{L}}]\otimes \mathcal{V}^{fl}$. It follows that $\mathcal{V}=\mathcal{O}_{\widetilde{\mathcal{L}}}\otimes \mathcal{V}^{fl}$ as a D-module and the claim of this step follows.
{\it Step 3}. Let $Y$ be a symplectic variety. We claim that a Poisson $\mathcal{O}_{Y}$-module carries a canonical structure of a $D_{Y}$-module and vice versa. If $\mathcal{N}$ is a $D_Y$-module, then we equip it with the structure of a Poisson module via $\{f,n\}:=v(f)n$. Here $f,n$ are local sections of $\mathcal{O}_Y, \mathcal{N}$, respectively, and $v(f)$ is the skew-gradient of $f$, a vector field on $Y$. Let us now equip a Poisson module with a canonical D-module structure. It is enough to do this locally, so we may assume that there is an etale map $Y\rightarrow \C^k$. Let $f_1,\ldots,f_k$ be the corresponding etale coordinates. Then we set $v(f)n:=\{f,n\}$. This defines a D-module structure on $\mathcal{N}$ that is easily seen to be independent of the choice of an \'{e}tale chart.
Let us remark that a weakly $\C^\times$-equivariant Poisson module gives rise to a weakly $\C^\times$-equivariant D-module and vice versa.
So the conclusion of the previous 3 steps is that every weakly $\C^\times$-equivariant Poisson $\mathcal{O}_{\widetilde{\mathcal{L}}}$-module is the direct sum of several copies of $\mathcal{O}_{\widetilde{\mathcal{L}}}$.
{\it Step 4}. Pick a point $\tilde{x}\in \widetilde{\mathcal{L}}$ lying over $x$ so that $\widetilde{\mathcal{L}}^{\wedge_{\tilde{x}}}$ is naturally identified with $\mathcal{L}^{\wedge_x}$. Of course, any Poisson module over $\widetilde{\mathcal{L}}^{\wedge_{\tilde{x}}}$ is the direct sum of several copies of $\C[\widetilde{\mathcal{L}}]^{\wedge_{\tilde{x}}}$. So the claim at the beginning of the proof will follow if we check that any finitely generated Poisson $\C[\widetilde{\mathcal{L}}]$-submodule in $\C[\widetilde{\mathcal{L}}]^{\wedge_{\tilde{x}}}$ with locally finite $\mathsf{eu}$-action coincides with $\C[\widetilde{\mathcal{L}}]$. For this, let us note that the Poisson center of $\C[\widetilde{\mathcal{L}}]^{\wedge_{\tilde{x}}}$ coincides with $\C$. On the other hand, any finitely generated Poisson submodule with locally finite action of $\mathsf{eu}$ is the sum of weakly $\C^\times$-equivariant Poisson submodules. The latter have to be trivial and so are generated by the Poisson central elements. This implies our claim and completes the proof.
\end{proof}
We will also need to consider a map between the sets of two-sided ideals $\mathfrak{Id}(\underline{\mathcal{A}})\rightarrow \mathfrak{Id}(\mathcal{A})$ induced by the functor $\bullet^{\dagger,x}$, compare to \cite{wquant,HC,sraco}. Namely, for $\mathcal{I}\in \mathfrak{Id}(\underline{\mathcal{A}})$ we write $\mathcal{I}^{\dagger_\mathcal{A},x}$ for the kernel of the natural map $\mathcal{A}\rightarrow (\underline{\mathcal{A}}/\mathcal{I})^{\dagger,x}$. Alternatively, the ideal $\mathcal{I}^{\dagger_\mathcal{A},x}$ can be obtained as follows. Consider the ideal $\W_\hbar^{\wedge_0}\widehat{\otimes}_{\C[[\hbar]]}\mathcal{I}^{\wedge_0}_{\hbar}\subset \mathcal{A}_\hbar^{\wedge_x}$. Set $\J_\hbar:=\W_\hbar^{\wedge_0}\widehat{\otimes}_{\C[[\hbar]]}\mathcal{I}^{\wedge_0}_{\hbar}\cap \mathcal{A}_\hbar$. This is a $\C^\times$-stable $\hbar$-saturated ideal in $\mathcal{A}_\hbar$ and we set $\I^{\dagger_\mathcal{A},x}:=\J_{\hbar}/(\hbar-1)\J_{\hbar}$.
We will need some properties of the map $\mathfrak{Id}(\underline{\mathcal{A}})\rightarrow \mathfrak{Id}(\mathcal{A})$ analogous to those established in \cite[Theorem 1.2.2]{wquant}.
\begin{Prop}\label{Prop:map_ideal_prop} The following is true. \begin{enumerate} \item $\J\subset (\J_{\dagger,x})^{\dagger_\mathcal{A},x}$ for all $\J\in \mathfrak{Id}(\mathcal{A})$ and $(\I^{\dagger_\mathcal{A},x})_{\dagger,x}\subset \I$ for all $\I\in \mathfrak{Id}(\underline{\mathcal{A}})$. \item We have $\mathcal{I}_1^{\dagger_\mathcal{A},x}\cap \mathcal{I}_2^{\dagger_\mathcal{A},x}=(\mathcal{I}_1\cap \mathcal{I}_2)^{\dagger_\mathcal{A},x}$. \item If $\mathcal{I}$ is prime, then so is $\mathcal{I}^{\dagger_\mathcal{A},x}$. \end{enumerate} \end{Prop} \begin{proof} (1) and (2) follow from the alternative definition of $\I^{\dagger_\mathcal{A},x}$ given above. The proof of (3) closely follows that of an analogous statement, \cite[Theorem 1.2.2,(iv)]{wquant}; let us provide it for the reader's convenience. It is easy to see that the ideals $\I_{\hbar},\I_\hbar^{\wedge_0}, \W_\hbar^{\wedge_0}\widehat{\otimes}_{\C[[\hbar]]}\mathcal{I}^{\wedge_0}_{\hbar}$ are prime because of the bijections between the sets of two-sided ideals in $\underline{\mathcal{A}}, \underline{\mathcal{A}}_\hbar, \underline{\mathcal{A}}_\hbar^{\wedge_0}, \W_\hbar^{\wedge_0}\widehat{\otimes}_{\C[[\hbar]]}\underline{\mathcal{A}}^{\wedge_0}_\hbar$ (we only consider the $\C^\times$-stable $\hbar$-saturated ideals in the last three algebras).
So we need to show that the intersection $\J_\hbar$ of a $\C^\times$-stable $\hbar$-saturated prime ideal $\mathcal{I}'_\hbar\subset \mathcal{A}^{\wedge_x}_\hbar$ with $\mathcal{A}_\hbar$ is prime. Assume the contrary: let there exist ideals $\J^1_\hbar,\J^2_\hbar\supsetneq \J_\hbar$ such that $\J^1_\hbar \J^2_\hbar\subset \J_\hbar$. We may assume that both $\J^i_\hbar$ are $\C^\times$-stable and $\hbar$-saturated. Indeed, if they are not $\hbar$-saturated, then we can saturate them. To see that they can be taken $\C^\times$-stable one can argue as follows. The radical of $\J_\hbar$ is $\C^\times$-stable and so we can take appropriate powers of the radical for $\J^1_\hbar,\J^2_{\hbar}$ if $\J_\hbar$ is not semiprime. If $\J_\hbar$ is semiprime, then its associated prime ideals are $\C^\times$-stable and we can take their appropriate intersections for $\J^1_\hbar,\J^2_\hbar$.
So let us assume that $\J^1_\hbar,\J^2_\hbar$ are $\hbar$-saturated and $\C^\times$-stable. Then so are $(\J^1_\hbar)^{\wedge_x},(\J^2_\hbar)^{\wedge_x}$. Also let us remark that $(\J^1_\hbar)^{\wedge_x}(\J^2_\hbar)^{\wedge_x}= (\J^1_\hbar \J^2_\hbar)^{\wedge_x}\subset \J_\hbar^{\wedge_x}\subset \I'_\hbar$. Since $\I'_\hbar$ is prime, we may assume, without loss of generality, that $(\J^1_\hbar)^{\wedge_x}\subset \I'_\hbar$. It follows that $\J^1_\hbar\subset \mathcal{A}_\hbar\cap \I'_\hbar=\J_\hbar$, which contradicts the assumption that $\J^1_\hbar\supsetneq \J_\hbar$, and we are done. \end{proof}
\subsection{Two-sided ideals in $\bar{\mathcal{A}}_\lambda(n,r)$}\label{SS_two_sid_id} The goal of this subsection is to prove Theorem \ref{Thm:ideals} and more technical statements in (ii) above. We use the following notation. We write $\mathcal{A}$ for $\bar{\mathcal{A}}_\lambda(n,r)$ and write $\underline{\mathcal{A}}$ for $\bar{\mathcal{A}}_\lambda(n',r)$, where $n'$ is the denominator of $\lambda$.
Let us start with the description of the two-sided ideals in $\underline{\mathcal{A}}$.
\begin{Lem}\label{Lem:ideals_easy} There is a unique proper ideal in $\underline{\mathcal{A}}$. \end{Lem} \begin{proof} We have seen in the proof of Theorem \ref{Thm:fin dim} that the proper slice algebras for $\underline{\mathcal{A}}$ have no finite dimensional representations. So every ideal $\J\subset \underline{\mathcal{A}}$ is either of finite codimension or $\operatorname{V}(\underline{\mathcal{A}}/\J)=\bar{\M}(n',r)$. The algebra $\underline{\mathcal{A}}$ has no zero divisors so the second option is only possible when $\J=\{0\}$. Now suppose that $\J$ is of finite codimension. Then $\underline{\mathcal{A}}/\J$ (viewed as a left $\underline{\mathcal{A}}$-module) is the sum of several copies of the finite dimensional irreducible $\underline{\mathcal{A}}$-module. So $\J$ coincides with the annihilator of the finite dimensional irreducible module, and we are done. \end{proof}
Let $\underline{\J}$ denote this unique proper two-sided ideal.
Now we are going to describe the two-sided ideals in $\underline{\mathcal{A}}^{\otimes k}$. For this we need some notation. Set $\underline{\I}_i:=\underline{\mathcal{A}}^{\otimes i-1}\otimes \underline{\J}\otimes \underline{\mathcal{A}}^{\otimes k-i}$. For a subset $\Lambda\subset \{1,\ldots,k\}$ define the ideals $\underline{\I}_{\Lambda}:=\sum_{i\in \Lambda} \underline{\I}_i, \underline{\I}^{\Lambda}:=\prod_{i\in \Lambda} \underline{\I}_i$.
Recall that a collection of subsets in $\{1,\ldots,k\}$ is called an {\it anti-chain} if none of these subsets is contained in another. Also recall that an ideal $I$ in an associative algebra $A$ is called {\it semi-prime} if it is the intersection of prime ideals.
\begin{Lem}\label{Lem:ideals_next} The following is true. \begin{enumerate} \item The prime ideals in $\underline{\mathcal{A}}^{\otimes k}$ are precisely the ideals $\underline{\I}_\Lambda$. \item For every ideal $\I\subset \underline{\mathcal{A}}^{\otimes k}$, there is a unique anti-chain $\Lambda_1,\ldots,\Lambda_p$ of subsets in $\{1,\ldots,k\}$ such that $\I=\bigcap_{i=1}^p \I_{\Lambda_i}$. In particular, every ideal is semi-prime. \item For every ideal $\I\subset \underline{\mathcal{A}}^{\otimes k}$, there is a unique anti-chain $\Lambda_1',\ldots,\Lambda_q'$ of subsets of $\{1,\ldots,k\}$ such that $\I=\sum_{i=1}^q \I^{\Lambda'_i}$. \item The anti-chains in (2) and (3) are related as follows: from an anti-chain in (2), we form all possible subsets containing an element from each of $\Lambda_1,\ldots,\Lambda_p$. Minimal such subsets form an anti-chain in (3). \end{enumerate} \end{Lem} The proof essentially appeared in \cite[5.8]{sraco}. \begin{proof} Let us prove (1). Let $\I$ be a prime ideal. Let $x$ be a generic point in an open leaf $\mathcal{L}\subset\operatorname{V}(\underline{\mathcal{A}}^{\otimes k}/\I)$ of maximal dimension. The corresponding slice algebra $\underline{\mathcal{A}}'$ has a finite dimensional irreducible representation and so is again the product of several copies of $\underline{\mathcal{A}}$. The leaf $\mathcal{L}$ is therefore the product of one-point leaves and full leaves in $\bar{\M}(n',r)^k$. An irreducible finite dimensional representation of $\underline{\mathcal{A}}'$ is unique; let $\I'$ be its annihilator. Then $\I\subset \I'^{\dagger_{\underline{\mathcal{A}}^{\otimes k}},x}$. By Proposition \ref{Prop:up_dag_prop}, $\operatorname{V}(\underline{\mathcal{A}}^{\otimes k}/\I'^{\dagger_{\underline{\mathcal{A}}^{\otimes k}},x})=\overline{\mathcal{L}}$. It follows from \cite[Corollar 3.6]{BoKr} that $\I=\I'^{\dagger_{\underline{\mathcal{A}}^{\otimes k}},x}$. So the number of the prime ideals coincides with that of the non-empty subsets of $\{1,\ldots,k\}$. On the other hand, the ideals $\I_\Lambda$ are all different (they have different associated varieties) and all prime (the quotient $\underline{\mathcal{A}}^{\otimes k}/\I_{\Lambda}$ is the tensor product of a matrix algebra and the algebra
$\underline{\mathcal{A}}^{\otimes k-|\Lambda|}$, which has no zero divisors).
Let us prove (2) (and simultaneously (3)). Let us write $\I_{\Lambda_1,\ldots,\Lambda_p}$ for $\bigcap_{j=1}^p \I_{\Lambda_j}$. For ideals in $\underline{\mathcal{A}}^{\otimes k-1}$ we use notation like $\underline{\I}_{\Lambda_1',\ldots,\Lambda_q'}$. Reordering the indices, we may assume that $k\in \Lambda_1,\ldots, \Lambda_s$ and $k\not\in \Lambda_{s+1},\ldots,\Lambda_p$. Set $\Lambda_j':=\Lambda_j\setminus \{k\}$ for $j\leqslant s$. Then \begin{equation}\label{eq:ideal1}\I_{\Lambda_1,\ldots,\Lambda_p}=(\underline{\mathcal{A}}^{\otimes k-1}\otimes \underline{\J}+ \underline{\I}_{\Lambda'_1,\ldots,\Lambda'_s}\otimes \underline{\mathcal{A}})\cap (\underline{\I}_{\Lambda_{s+1},\ldots,\Lambda_p}\otimes \underline{\mathcal{A}}).\end{equation} We claim that the right hand side of (\ref{eq:ideal1}) coincides with \begin{equation}\label{eq:ideal2} \underline{\I}_{\Lambda_{s+1},\ldots,\Lambda_p}\otimes \underline{\J}+\underline{\I}_{\Lambda_1',\ldots,\Lambda_s',\Lambda_{s+1},\ldots,\Lambda_p}\otimes \underline{\mathcal{A}}. \end{equation} First of all, we notice that (\ref{eq:ideal2}) is contained in (\ref{eq:ideal1}). So we only need to prove the opposite inclusion. The projection of (\ref{eq:ideal1}) to $\underline{\mathcal{A}}^{\otimes k-1}\otimes \underline{\mathcal{A}}/\underline{\J}$ is contained in $\underline{\I}_{\Lambda_1',\ldots,\Lambda_s',\Lambda_{s+1},\ldots,\Lambda_p}$ and hence also in the projection of (\ref{eq:ideal2}). Also the intersection of (\ref{eq:ideal1}) with $\underline{\mathcal{A}}^{\otimes k-1}\otimes \underline{\J}$ is contained in $\underline{\I}_{\Lambda_{s+1},\ldots,\Lambda_p}\otimes \underline{\J}$. So (\ref{eq:ideal1}) is contained in (\ref{eq:ideal2}).
Repeating this argument with the two summands in (\ref{eq:ideal2}) and the other factors of $\underline{\mathcal{A}}^{\otimes k}$, we conclude that $\I_{\Lambda_1,\ldots,\Lambda_p}=\sum_j \I^{\Lambda'_j}$, where the subsets $\Lambda'_j\subset \{1,\ldots,k\}$ are formed as described in (4). So we see that the ideals in (2) are the same as the ideals in (3) and that (4) holds. What remains is to prove that every ideal has the form described in (2). To start with, we notice that every semi-prime ideal has the form in (2) because of (1). In particular, the radical of any ideal has such a form.
Clearly, $\I^{\Lambda'_1}\I^{\Lambda'_2}=\I^{\Lambda'_1\cup\Lambda'_2}$. So it follows that any sum of the ideals $\I^{\Lambda_j'}$ coincides with its square. Hence, if $\I$ is an ideal whose radical is $\I_{\Lambda_1,\ldots,\Lambda_p}$, then $\I$ coincides with its radical. This completes the proof. \end{proof}
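To illustrate (2)--(4) of Lemma \ref{Lem:ideals_next} in the simplest nontrivial case (this example is not part of the original argument; it only records a straightforward check), take $k=2$. Then $\underline{\I}_1=\underline{\J}\otimes\underline{\mathcal{A}}$ and $\underline{\I}_2=\underline{\mathcal{A}}\otimes\underline{\J}$, and the prime ideals of $\underline{\mathcal{A}}^{\otimes 2}$ are $\underline{\I}_1$, $\underline{\I}_2$ and $\underline{\I}_1+\underline{\I}_2$. The anti-chain $\{1\},\{2\}$ in (2) produces the ideal
$$\underline{\I}_1\cap\underline{\I}_2=(\underline{\J}\otimes\underline{\mathcal{A}})\cap(\underline{\mathcal{A}}\otimes\underline{\J})=\underline{\J}\otimes\underline{\J}=\underline{\I}_1\underline{\I}_2=\underline{\I}^{\{1,2\}},$$
so the corresponding anti-chain in (3) consists of the single subset $\{1,2\}$, exactly as the recipe in (4) predicts.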
Now we are ready to establish a result that will imply Theorem \ref{Thm:ideals} together with technical results required in (ii). Let $x_i\in \bar{\M}(n,r)$ be a point corresponding to the leaf with slice $\bar{\M}(n',r)^{i}$ (i.e. to the semisimple representations of the form $r^0\oplus (r^1)^{n'}\oplus\ldots\oplus (r^i)^{n'}$). We set $\J_i:=\I^{\dagger_{\mathcal{A}}, x_i}$, where $\I$ is the maximal ideal in $\underline{\mathcal{A}}^{\otimes i}$, equivalently, the annihilator of the finite dimensional irreducible representation.
\begin{Prop}\label{Prop:ideals_techn} The ideals $\J_i, i=1,\ldots,q$, have the following properties. \begin{enumerate} \item The ideal $\J_i$ is prime for any $i$. \item $\operatorname{V}(\mathcal{A}/\J_i)=\overline{\mathcal{L}}_i$, where $\mathcal{L}_i$ is the symplectic leaf containing $x_i$. \item $\J_1\subsetneq \J_2\subsetneq\ldots\subsetneq \J_q$. \item Any proper two-sided ideal in $\mathcal{A}$ is one of $\J_i$.
\item We have $(\J_i)_{\dagger,x_j}=\underline{\mathcal{A}}^{\otimes j}$ if $j<i$ and $(\J_i)_{\dagger,x_j}=\sum_{|\Lambda|=j-i+1} \I^{\Lambda}$ else. \end{enumerate} \end{Prop} \begin{proof} (1) is a special case of (3) of Proposition \ref{Prop:map_ideal_prop}. (2) follows from Proposition \ref{Prop:up_dag_prop}, compare with the proof of (1) in Lemma \ref{Lem:ideals_next}.
Let us prove (3). Since $(\J_{i})_{\dagger,x_i}$ has finite codimension, we see that it coincides with the maximal ideal in $\underline{\mathcal{A}}^{\otimes i}$. So $(\J_j)_{\dagger,x_i}\subset (\J_i)_{\dagger,x_i}$ for $j<i$. It follows that $\J_j\subset [(\J_j)_{\dagger,x_i}]^{\dagger_\mathcal{A},x_i}\subset [(\J_i)_{\dagger,x_i}]^{\dagger_\mathcal{A},x_i}=\J_i$. The inclusions are strict because, by (2), the ideals $\J_j$ and $\J_i$ have different associated varieties.
Let us prove (4). The functor $\bullet_{\dagger,x_q}$ is faithful. Indeed, otherwise we have a nonzero HC bimodule $\M$ with $\operatorname{V}(\M)\cap \mathcal{L}_q=\varnothing$. But $\M_{\dagger,x}$ has to be nonzero finite dimensional for some $x$, and this is only possible when $x\in \mathcal{L}_i$ for some $i$. Since $\mathcal{L}_q\subset \overline{\mathcal{L}}_i\subset\operatorname{V}(\M)$ for every such $i$, we get a contradiction; this shows faithfulness. Since $\bullet_{\dagger,x_q}$ is faithful and exact, it follows that it embeds the lattice of the ideals in $\mathcal{A}$ into that of $\underline{\mathcal{A}}^{\otimes q}$. We claim that this implies that every ideal in $\mathcal{A}$ is semiprime. Indeed, the functor $\bullet_{\dagger,x_q}$ is, in addition, tensor and so preserves products of ideals. Our claim follows from (2) of Lemma \ref{Lem:ideals_next}. Moreover, every prime ideal in $\mathcal{A}$ is some $\J_i$; this is proved analogously to (1) of Lemma \ref{Lem:ideals_next}. Since the ideals $\J_i$ form a chain, any semiprime ideal is prime and so coincides with some $\J_i$.
Let us prove (5). We will deduce this from the behavior of $\bullet_{\dagger,x}$ on the associated varieties. We have an action of $\mathfrak{S}_j$ on $\M(n',r)^j$ by permuting factors. The action is induced from $N_{G}(G_{\tilde{x}})$, where $\tilde{x}$ is a point from the closed $G$-orbit lying over $x$. It follows that the intersection of any leaf with the slice is $\mathfrak{S}_j$-stable. The associated variety $\operatorname{V}(\underline{\mathcal{A}}^{\otimes j}/(\J_i)_{\dagger,x_j})$ is the union of some products with factors $\{\operatorname{pt}\}$ and $\bar{\M}(n',r)$, where, for dimension reasons, $\bar{\M}(n',r)$ occurs $j-i$ times. Because of the $\mathfrak{S}_j$-symmetry, all products occur. Now we deduce the required formula for $(\J_i)_{\dagger,x_j}$ from the description of the two-sided ideals in $\underline{\mathcal{A}}^{\otimes j}$. This description shows that for each associated variety there is at most one two-sided ideal. \end{proof}
\subsection{Computation of Tor's}\label{SS_tor_comput} Here we consider the case when the denominator of $\lambda$ is $n$ and $\lambda$ is regular. Set $\mathcal{A}:=\bar{\mathcal{A}}_\lambda(n,r)$ and let $\J$ denote the unique proper ideal in this algebra. We want to establish (iii).
\begin{Prop}\label{Prop:Tor} We have $\operatorname{Tor}_i^{\mathcal{A}}(\mathcal{A}/\J,\mathcal{A}/\J)=\mathcal{A}/\J$ if $i$ is even and $0\leqslant i\leqslant 2rn-2$, and $\operatorname{Tor}_i^{\mathcal{A}}(\mathcal{A}/\J,\mathcal{A}/\J)=0$ else. \end{Prop} The proof closely follows that of \cite[Lemma 7.4]{BL} but we need to modify some parts of that argument. \begin{proof} Thanks to the translation equivalences it is enough to prove the claim when $\lambda$ is Zariski generic.
Let $L$ denote the unique finite dimensional irreducible $\mathcal{A}$-module. What we need to show is that $\operatorname{Tor}_i^{\mathcal{A}}(L^*,L)=\C$ if $i$ is even and $0\leqslant i\leqslant 2nr-2$ and that the Tor vanishes otherwise. We claim that $\operatorname{Tor}_i^{\mathcal{A}}(L^*,L)=\operatorname{Ext}^i_{\mathcal{A}}(L,L)^*$. Knowing this, one can argue as follows. By Lemma \ref{Lem:Ext_coinc}, $\operatorname{Ext}^i_{\mathcal{A}}(L,L)= \operatorname{Ext}^i_{\mathcal{O}(\mathcal{A})}(L,L)$. The block in $\mathcal{O}(\mathcal{A})$ containing $L$ was described in Theorem \ref{Thm:catO_str}. In this block, we have $\operatorname{Ext}^i(L,L)=\C$ when $i$ is even, $0\leqslant i\leqslant 2nr-2$, and $\operatorname{Ext}^i(L,L)=0$ otherwise. To see this, one considers the so-called BGG resolution, see \cite{BEG}, for the first copy of $L$ and its analog with costandard objects for the second copy.
So we need to show that $\operatorname{Tor}_i^{\mathcal{A}}(L^*,L)^*=\operatorname{Ext}^i_{\mathcal{A}}(L,L)$. The proof is similar to that of \cite[Theorem A.1]{BLPW}. Namely, we consider the objects $\Delta:=\mathcal{A}/\mathcal{A}\mathcal{A}_{>0}, \nabla^\vee:=\mathcal{A}/\mathcal{A}_{<0}\mathcal{A}$ and let $\nabla$ be the restricted dual of $\nabla^\vee$. Then, since $\lambda$ is Zariski generic, we see that $\Delta$ is the sum of the standard objects in $\mathcal{O}(\mathcal{A})$, while $\nabla$ is the sum of all costandard objects in $\mathcal{O}(\mathcal{A})$. Then, as we have checked in the appendix to \cite{BLPW}, we have $\operatorname{Tor}^{\mathcal{A}}_i(\nabla^\vee,\Delta)=\Ext_{\mathcal{A}}^i(\Delta,\nabla)=0$ for $i>0$ and moreover $(\nabla^\vee\otimes_{\mathcal{A}}\Delta)^*=\Hom_{\mathcal{A}}(\Delta,\nabla)$ (an equality of $\C[\bar{\M}^\theta(n,r)^{T\times\C^\times}]$-bimodules). Taking a resolution $P$ of the first copy of $L$ in $\operatorname{Ext}^i_{\mathcal{A}}(L,L)$ by direct summands of $\Delta$ (which exists because of the structure of $\mathcal{O}(\mathcal{A})$) and a resolution $Q$ of the second copy by direct summands of $\nabla$, we get $\operatorname{Ext}^i_{\mathcal{A}}(L,L)=H_i(\operatorname{Hom}_{\mathcal{A}}(P,Q))=H_i(Q^\vee\otimes_{\mathcal{A}}P)^*=\operatorname{Tor}_i^{\mathcal{A}}(Q^\vee,P)^*= \operatorname{Tor}_i^{\mathcal{A}}(L^*,L)^*$. \end{proof}
\subsection{Faithfulness}\label{SS_faith} Now we are going to establish (iv) for $x$ and $\lambda$ specified above. Let $\mathcal{L}$ be the leaf containing $x$ and $\mathcal{L}'$ be the leaf corresponding to the decomposition $r^0\oplus (r^1)^{\oplus n}$, where $r^1$ is an irreducible representation of dimension $\delta$. Clearly, $\mathcal{L}'\subset \overline{\mathcal{L}}$.
We are going to prove that $\mathcal{L}$ is contained in the associated variety of any nonzero HC $\mathcal{A}_\lambda(v,w)$-bimodule $\M$ (or $\mathcal{A}_{\lambda'}(v,w)$-$\mathcal{A}_\lambda(v,w)$-bimodule or $\mathcal{A}_\lambda(v,w)$-$\mathcal{A}_{\lambda'}(v,w)$-bimodule; thanks to Proposition \ref{Prop:up_dag_prop}, a direct analog of \cite[Corollary 5.19]{BL} holds so that the associated variety of any HC $\mathcal{A}_{\lambda'}(v,w)$-$\mathcal{A}_\lambda(v,w)$-bimodule coincides with those of $\mathcal{A}_{\lambda'}(v,w)/\J_\ell, \mathcal{A}_\lambda(v,w)/\J_{r}$, where $\J_{\ell},\J_r$ are the left and right annihilators). This is equivalent to saying that $\bullet_{\dagger,x}$ is faithful. The scheme of the proof is as follows: \begin{itemize} \item[(a)] We first show that $\mathcal{L}'\subset \operatorname{V}(\M)$. We do this by showing that $\M_{\dagger,y}$ cannot be nonzero finite dimensional for $y$ from a leaf $\mathcal{L}_0$ such that $\mathcal{L}'\not\subset\overline{\mathcal{L}}_0$. \item[(b)] From $\mathcal{L}'\subset \operatorname{V}(\M)$ we deduce that $\mathcal{L}\subset \operatorname{V}(\M)$. \end{itemize}
Let us deal with (a). As in the proof of \cite[Lemma 7.10]{BL}, it is enough to show that the slice algebra
$\underline{\mathcal{A}}$ corresponding to $y$ has no finite dimensional representations. So let us analyze the structure of the leaves that contain $\mathcal{L}'$ in their closure. For a partition $\mu=(\mu_1,\ldots,\mu_k)$ with $|\mu|\leqslant n$, we write $\mathcal{L}(\mu)$ for the leaf corresponding to the decomposition of the form $r^0\oplus (r^1)^{\oplus \mu_1}\oplus\ldots\oplus (r^k)^{\oplus\mu_k}$. From \cite[Theorem 1.2]{CB_geom} it follows that $\mathcal{L}'=\mathcal{L}(n)\subset \overline{\mathcal{L}(\mu)}$. There is a natural surjection from the set of leaves in $\bar{\M}(n,r)$ (this is precisely the slice to $\mathcal{L}'$) to the set of leaves in $\M(v,w)$ whose closures contain $\mathcal{L}'$.
As in the proof of \cite[Lemma 7.10,(1)]{BL}, we need to prove that, for a generic $p\in \ker\delta$, the variety $\underline{\bar{\M}}_p(\underline{v},\underline{w})$ has no single point symplectic leaves as long as the corresponding symplectic leaf $\mathcal{L}$ is different from $\mathcal{L}(\mu)$. The latter is equivalent to the condition that the decomposition of $v$ defining $\mathcal{L}$ contains real roots. So let this decomposition be $v=v^0+\nu_1\delta+\ldots+\nu_\ell\delta+\sum_{i\in Q_0}m_i\alpha_i$. The slice variety $\underline{\bar{\M}}_p(\underline{v},\underline{w})$ is the product $\prod_\ell \bar{\M}(\nu_i, r)\times \M(v',w')$, where $v'=(m_i)_{i\in Q_0}$ and $w'$ is given by $w'_i=w_i-(v^0,\alpha_i)$. We remark that $\dim \M(v',w')>0$. Indeed, we have \begin{align*} \dim \M(v',w')&=2\sum_{i\in Q_0}(w_i-(v^0,\alpha_i))m_i-(\sum_i m_i\alpha_i, \sum_i m_i\alpha_i)=\\&=2w\cdot (v-v^0)-2(v^0,v-v^0)-(v-v^0,v-v^0)=\\ &=2w\cdot (v-v^0)-2(v,v-v^0)+(v-v^0,v-v^0). \end{align*} But $w\cdot (v-v^0)\geqslant (v,v-v^0)$ because $\nu$ is dominant. We also have $(v-v^0,v-v^0)\geqslant 0$ with the equality only if $v-v^0=k\delta$. But if the last equality holds, then we have $(v,v-v^0)=0$, while $w\cdot \delta$ is always positive. The quiver defining $\M_p(v',w')$ has no loops so that variety cannot have single point leaves. So we have proved that $\underline{\mathcal{A}}$ cannot have a finite dimensional representation. This implies that $\mathcal{L}'\subset \operatorname{V}(\M)$.
Now let us show that $\mathcal{L}\subset \operatorname{V}(\M)$. The slice algebra corresponding to $\mathcal{L}'$ is $\underline{\mathcal{A}}'= \bar{\mathcal{A}}_{\langle \lambda,\delta\rangle}(n,r)$. It follows that $\operatorname{V}(\M_{\dagger,x'})$ contains the leaf corresponding to the partition $\mu=(n'^{q})$, and hence $\operatorname{V}(\M)$ contains $\mathcal{L}(\mu)$.
\subsection{Affine wall-crossing functor and counting}\label{SS_aff_WC} Using (i)-(iv) proved above, one gets a direct analog of \cite[Theorem 7.2]{BL}. Let us introduce some notation. Let $\theta,\theta'$ be two stability conditions separated by $\ker\delta$. Let $\lambda,\lambda'$ be two parameters such that $\lambda'-\lambda\in \mathbb{Z}^{Q_0}$, $\langle \lambda,\delta\rangle$ has denominator $n'\leqslant n$, and $(\lambda,\theta)$, $(\lambda',\theta')$ satisfy the abelian localization. Set $q=\lfloor n/n'\rfloor$.
\begin{Thm}\label{Thm:perv} There are chains of two-sided ideals $\{0\}\subsetneq \J_{q}\subsetneq \J_{q-1}\subsetneq \ldots \subsetneq \J_1\subsetneq \mathcal{A}_\lambda(v,w)$ and $\{0\}\subsetneq \J'_q\subsetneq \J'_{q-1}\subsetneq\ldots \subsetneq \J'_1\subsetneq \mathcal{A}_{\lambda'}(v,w)$ with the following properties. Let $\mathcal{C}_i$ be the subcategory of all modules in $\mathcal{A}_{\lambda}(v,w)\operatorname{-mod}$ annihilated by $\J_{q+1-\lfloor i/(rm-1)\rfloor}$ (this is a Serre subcategory by a direct analog of (1) of \cite[Theorem 7.1]{BL}) and let $\mathcal{C}'_i\subset \mathcal{A}_{\lambda'}(v,w)\operatorname{-mod}$ be defined analogously. Then the following is true: \begin{enumerate} \item $\mathfrak{WC}_{\theta\rightarrow \theta'},\mathfrak{WC}_{\theta'\rightarrow \theta}$ are perverse equivalences with respect to these filtrations inducing mutually inverse bijections between simples. \item For a simple $S\in \mathcal{C}_{j(rm-1)}\setminus\mathcal{C}_{j(rm-1)+1}$, the simple $S'$ is a quotient of $H_{j(rm-1)}(\mathfrak{WC}_{\theta\rightarrow \theta'}S)$. \item The bijection $S\mapsto S'$ preserves the associated varieties of the annihilators. \end{enumerate} \end{Thm}
Similarly to \cite[Section 8]{BL}, this theorem implies Theorem \ref{Thm:counting}.
\end{document} | arXiv |
The demand for space heaters is Q=250-P+2COOL, where COOL is the absolute value of the difference between the average overnight low temperature and 40F. Assume that the average overnight low this month is 40F. When the price of space heaters is P=$50, the price elasticity of demand is what?
The elasticity of demand:
The elasticity of demand is the ratio of the percentage change in quantity demanded to the percentage change in the price of a product. As the price rises, quantity demanded falls, and as the price falls, quantity demanded rises. This means the demand curve is downward sloping and has a negative slope.
{eq}
E_d = \frac{P}{Q} \times \frac{\partial Q}{\partial P}
{/eq}

As we know, {eq}Q = 250 - P + 2(COOL){/eq}, so {eq}Q = 250 - 50 + \ldots{/eq}
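The answer above is cut off; the remaining steps follow directly from the data given in the question (the average overnight low is 40F, so COOL = |40 - 40| = 0, and P = $50):

{eq}
Q = 250 - 50 + 2(0) = 200, \qquad \frac{\partial Q}{\partial P} = -1, \qquad E_d = \frac{P}{Q} \times \frac{\partial Q}{\partial P} = \frac{50}{200} \times (-1) = -0.25
{/eq}

Since the absolute value of the elasticity is 0.25, which is less than 1, the demand for space heaters is inelastic at P = $50.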
| CommonCrawl |
2-sided
In mathematics, specifically in topology of manifolds, a compact codimension-one submanifold $F$ of a manifold $M$ is said to be 2-sided in $M$ when there is an embedding
$h\colon F\times [-1,1]\to M$
with $h(x,0)=x$ for each $x\in F$ and
$h(F\times [-1,1])\cap \partial M=h(\partial F\times [-1,1])$.
In other words, $F$ is 2-sided in $M$ if its normal bundle is trivial.[1]
This means, for example, that a curve in a surface is 2-sided if it has a tubular neighborhood which is a Cartesian product of the curve with an interval.
A submanifold which is not 2-sided is called 1-sided.
Examples
Surfaces
For curves on surfaces, a curve is 2-sided if and only if it preserves orientation, and 1-sided if and only if it reverses orientation: a tubular neighborhood is then a Möbius strip. This can be determined from the class of the curve in the fundamental group of the surface and the orientation character on the fundamental group, which identifies which curves reverse orientation.
• An embedded circle in the plane is 2-sided.
• An embedded circle generating the fundamental group of the real projective plane (such as an "equator" of the projective plane – the image of an equator for the sphere) is 1-sided, as it is orientation-reversing.
Properties
Cutting along a 2-sided manifold can separate a manifold into two pieces – such as cutting along the equator of a sphere or around the sphere on which a connected sum has been done – but need not, such as cutting along a curve on the torus.
Cutting along a (connected) 1-sided manifold does not separate a manifold, as a point that is locally on one side of the manifold can be connected to a point that is locally on the other side (i.e., just across the submanifold) by passing along an orientation-reversing path.
Cutting along a 1-sided manifold may make a non-orientable manifold orientable – such as cutting along an equator of the real projective plane – but may not, such as cutting along a 1-sided curve in a higher genus non-orientable surface; perhaps the simplest example of this is seen when one cuts a Möbius band along its core curve.
References
1. Hatcher, Allen (2000). Notes on basic 3-manifold topology (PDF). p. 10.
| Wikipedia |
\begin{document}
\title{Wild ramification, the nearby cycle complexes, \\and the characteristic cycles of $\ell$-adic sheaves} \begin{abstract} We prove a purely local form of a result of Saito and Yatagawa. They proved that the characteristic cycle of a constructible \'etale sheaf is determined by wild ramification of the sheaf along the boundary of a compactification. But they had to consider ramification {\it at all the points} of the compactification. We give a pointwise result, that is, we prove that the characteristic cycle of a constructible \'etale sheaf around a point is determined by wild ramification at the point. The key ingredient is to prove that wild ramification of the stalk of the nearby cycle complex of a constructible \'etale sheaf at a point is determined by wild ramification at the point.
\end{abstract} \section{Introduction}\label{intro} Saito developed a theory of the characteristic cycles of constructible \'etale sheaves on a variety over a perfect field in \cite{S}. Let $k$ be a perfect field of characteristic $p$ and $X$ be a smooth scheme purely of dimension $n$ over $k$. For a constructible $\Lambda$-sheaf $\mathcal F$, where $\Lambda$ is a finite field of characteristic distinct from $p$, he defined an $n$-cycle $\mathop{CC}\nolimits(\mathcal F)$ on the cotangent bundle $T^*X$.
To define the characteristic cycle, he used the singular support $\SS(\mathcal F)$ defined by Beilinson \cite{B}, which is a closed conical subset of $T^*X$ purely of dimension $n$. The characteristic cycle $\mathop{CC}\nolimits(\mathcal F)$ is defined as the unique cycle supported on $\SS(\mathcal F)$ which computes, as follows, the total dimension of the vanishing cycle complex
$R\phi_u(\mathcal F|_W,f)$ for a function $f:W\to \mathbb A^1_k$ on an open subscheme $W\subset X$ with an isolated characteristic point $u\in W$ with respect to $\SS(\mathcal F)$; \begin{eqnarray}\label{Milnor}
-\mathop{\dim\mathrm{tot}}\nolimits R\phi_u(\mathcal F|_W,f) =(\mathop{CC}\nolimits(\mathcal F),df)_{T^*W,u}, \end{eqnarray} where the right hand side is the intersection multiplicity at the point over $u$ \cite[Theorem 5.9, Definition 5.10]{S} (in this explanation we assumed for simplicity that $k$ is an infinite field). This equality is called the Milnor formula.
In \cite[Theorem 0.1]{SY}, Saito and Yatagawa proved that the characteristic cycle of a constructible \'etale sheaf is determined by wild ramification of the sheaf along a compactification. More precisely, suppose that we have a smooth variety $U$ over $k$ with a smooth compactification $X$ and an $\Lambda$-sheaf $\mathcal G$ and an $\Lambda'$-sheaf $\mathcal G'$ on $U_{\text{\'et}}$, for finite fields $\Lambda$ and $\Lambda'$ of characteristics distinct from that of $k$, which are locally constant constructible. Then we have $\mathop{CC}\nolimits(j_!\mathcal G)=\mathop{CC}\nolimits(j_!\mathcal G')$ if $\mathcal G$ and $\mathcal G'$ ``have the same wild ramification'', where $j$ is the open immersion $U\to X$.
We note that they have to consider ramification {\it at all points} of the compactification to deduce the equality of the characteristic cycles. This is because their proof relies on global results, such as (a variant of) a result of Deligne and Illusie \cite[Th\'eor\`eme 2.1]{I} which states that if $\mathcal G$ and $\mathcal G'$ have the same wild ramification, then they have the same Euler characteristics: $\chi_c(U_{\bar k},\mathcal G)=\chi_c(U_{\bar k},\mathcal G')$. However, since both ramification and characteristic cycles are of a local nature, it is preferable to work at each point. Here is a pointwise definition of ``having the same wild ramification'', which does not seem to be given in the literature: Let $x\in X$ be a point and let $X_{(\bar x)}$ denote the strict henselization at a geometric point $\bar x$ over $x$. We say $\mathcal G$ and $\mathcal G'$ {\it have the same wild ramification at} $x$ if
$\mathcal G|_{U\times_XX_{(\bar x)}}$ and $\mathcal G'|_{U\times_XX_{(\bar x)}}$ have the same wild ramification in the following sense (it is equivalent to having the same wild ramification over $X_{(\bar x)}$ in the sense of Definition \ref{swr} in the text).
\begin{defn}[{c.f.\ \cite{I}, \ \cite[Definition 2.2.1]{V}, and \cite[Definition 5.1]{SY}}] \label{pointwise} Let $X$ be the spectrum of a strictly henselian normal local ring and $U$ be a dense open subscheme of $X$. Let $\Lambda$ and $\Lambda'$ be finite fields of characteristics invertible on $X$ and let $\mathcal G$ and $\mathcal G'$ be an $\Lambda$-sheaf and an $\Lambda'$-sheaf respectively on $U_{\text{\'et}}$ which are locally constant and constructible.
We say $\mathcal G$ and $\mathcal G'$ {\it have the same wild ramification} if there exists a proper $X$-scheme $X'$ which is normal and contains $U$ as a dense open subscheme such that we have \begin{eqnarray*} \dim_{\Lambda}(\mathcal G_y)^\sigma = \dim_{\Lambda'}(\mathcal G'_y)^\sigma \end{eqnarray*} for every $\sigma\in\pi_1(U,y)$ (for some geometric point $y$) that is of $p$-power order and ``ramified'' at some point of $X'$ (i.e., has a fixed geometric point on the normalization of $X'$ in $V$ for every Galois cover $V$ of $U$). \end{defn} Here we emphasize that we take a modification $X'$ of $X$ to get a reasonable definition. A naive definition of ``having the same wild ramification'' without taking modifications $X'$ of $X$ is unreasonably strong (see Remark \ref{unreasonable} and Section \ref{ex}). This is essentially because the inertia group in higher dimension can be too big.
Our main result is the following: \begin{thm}[Corollary \ref{swr implies same cc}] \label{swr same cc for lisse} Let $X$ be a smooth variety, $j:U\to X$ be an open immersion, $\Lambda$ and $\Lambda'$ be finite fields of characteristics distinct from that of $k$, and let $\mathcal G$ and $\mathcal G'$ be an $\Lambda$-sheaf and an $\Lambda'$-sheaf respectively on $U_{\text{\'et}}$ which are locally constant and constructible. Let $x\in X$ be a point. We assume that $\mathcal G$ and $\mathcal G'$ have the same wild ramification at $x$. Then there exists an open neighborhood $W\subset X$ of the point $x$ such that
$\mathop{CC}\nolimits(j_!\mathcal G|_W)=\mathop{CC}\nolimits(j_!\mathcal G'|_W)$. \end{thm}
To prove Theorem \ref{swr same cc for lisse}, we show a general property of the nearby cycle functor, which itself is interesting; \begin{thm}[Theorem \ref{rpsi have swr}] \label{rpsi lisse} Let $S$ be a henselian trait with generic point $\eta=\mathop{\mathrm{Spec}}\nolimits K$ and closed point $s$. Let $X$ an $S$-scheme of finite type and $j:U\to X_\eta$ an open immersion. Let $\Lambda$ and $\Lambda'$ be finite fields of characteristics invertible on $S$ and let $\mathcal G$ and $\mathcal G'$ be an $\Lambda$-sheaf and an $\Lambda'$-sheaf respectively on $U_{\text{\'et}}$ which are locally constant and constructible. Let $x\in X_s$ be a point of the special fiber and $\bar x$ be a geometric point over $x$. We assume that $\mathcal G$ and $\mathcal G'$ have the same wild ramification at $x$. Then the stalks $R\psi(j_!\mathcal G)_{\bar x}$ and $R\psi(j_!\mathcal G')_{\bar x}$ viewed as virtual representations of the inertia group of $K$, have the same wild ramification, i.e, for every $\sigma$ in the wild inertia group, we have \begin{eqnarray*} \sum_i(-1)^i\dim_\Lambda(R^i\psi(j_!\mathcal G)_{\bar x})^\sigma =\sum_i(-1)^i\dim_{\Lambda'}(R^i\psi(j_!\mathcal G')_{\bar x})^\sigma. \end{eqnarray*} \end{thm} Note that we do not assume that $S$ is of equal-characteristic.
Since the characteristic cycle is defined using the Milnor formula (\ref{Milnor}), it is straightforward to deduce Theorem \ref{swr same cc for lisse} from Theorem \ref{rpsi lisse}.
We also give a variant, due to Takeshi Saito, on a relation with restrictions to curves: \begin{cor}[Saito, Corollary \ref{curve}] \label{curve lisse} Let $X$ be a smooth variety, $j:U\to X$ be an open immersion, $\Lambda$ and $\Lambda'$ be finite fields of characteristics distinct from that of $k$, and let $\mathcal G$ and $\mathcal G'$ be an $\Lambda$-sheaf and an $\Lambda'$-sheaf respectively on $U_{\text{\'et}}$ which are locally constant and constructible. Let $x$ be a closed point of $X$. Assume that $\mathcal G$ and $\mathcal G'$ have the same rank, and that for every morphism $g:C\to X$ from a smooth curve $C$ with a closed point $v\in C$ lying over $x$, we have an equality $\mathop{\mathrm{Sw}}\nolimits_v(g^*j_!\mathcal G)=\mathop{\mathrm{Sw}}\nolimits_v(g^*j_!\mathcal G')$. Then there exists an open neighborhood $W\subset X$ of the point $x$ such that
$\mathop{CC}\nolimits(j_!\mathcal G|_W)=\mathop{CC}\nolimits(j_!\mathcal G'|_W)$.
\end{cor}
We note that it is elementary to see that the assumption in Corollary \ref{curve lisse} is weaker than that in Theorem \ref{swr same cc for lisse}, i.e, if $\mathcal G$ and $\mathcal G'$ have the same wild ramification at $x$, then they have the same rank and the same Swan conductor at $x$ after restricting to any curve. We show that these conditions are in fact equivalent (Theorem \ref{swr sc var}). The equivalence of these conditions is the main theme of \cite{K}, in which the case of sheaves on a surface is proved using (a known case of) resolution of singularities (\cite[Theorem 3.2]{K}). We instead use purely inseparable local uniformization due to Temkin, which is a suggestion by Haoyu Hu. Let us note that Corollary \ref{curve lisse} is a refinement of \cite[Corollary 4.7]{K}, where we had to consider ramification at all points on a compactification.
Before this article was written, Haoyu Hu taught the author a proof of this result in the case of rank 1 sheaves on a surface. His proof relies on ramification theory and is quite different from the proof in this article. In an earlier version of this article, the corollary above was stated only in the case of sheaves on a surface, because the author did not know how to prove that the assumption in Theorem \ref{swr same cc for lisse} and that in Corollary \ref{curve lisse} are equivalent in general. Takeshi Saito pointed out that we can remove the assumption on dimension in the statement in the earlier version, using the theory of nearby cycle complexes over general bases. His proof relies on Theorem \ref{rpsi lisse} and the equivalence of the two conditions in the case of surfaces, but not on the equivalence in the general case.
We explain the idea of the proof of Theorem \ref{rpsi lisse}, which is the main part of this paper. For that purpose, we briefly recall an argument in \cite{I}. Let $U$ be a normal variety over an algebraically closed field and let $\mathcal G$ and $\mathcal G'$ be an $\Lambda$-sheaf and an $\Lambda'$-sheaf respectively on $U_{\text{\'et}}$ which are locally constant and constructible. In \cite{I}, it is proved that if $\mathcal G$ and $\mathcal G'$ have the same wild ramification, then we have $\chi_c(U,\mathcal G)=\chi_c(U,\mathcal G')$ \cite[Theorem 2.1]{I}, \footnote{ Though their formulation of ``having the same wild ramification'' is slightly stronger than ours, we can apply the same proof. For the detail, see \cite[Section 5]{SY}. } in the following way. First, taking a Galois \'etale cover $V\to U$
such that $\mathcal G|_V$ is a constant sheaf, which we denote by $M$ and view as a representation of the Galois group $G$, they deduced the intertwining formula \begin{eqnarray}\label{DI int}
\chi_c(U,\mathcal G)=\frac{1}{|G|}\sum_{g\in G}\mathop{\mathrm{Tr}}\nolimits(g,R\Gamma_c(V,\qq_\ell))\cdot\mathop{\mathrm{Tr}^{\mathrm{Br}}}\nolimits(g,M), \end{eqnarray} from a canonical isomorphism $ R\Gamma_c(U,\mathcal G) \cong R\Gamma^G( R\Gamma_c(V,\Lambda)\otimes^LM )$ and ``projectivity'', i.e., the fact that $R\Gamma_c(V,\Lambda)$ is a perfect complex of $\Lambda[G]$-modules. They proved that, in the intertwining formula (\ref{DI int}), only terms with $g$ being ``wildly ramified'' contribute, more precisely, if $\mathop{\mathrm{Tr}^{\mathrm{Br}}}\nolimits(g,R\Gamma_c(V,\Lambda))$ is nonzero, then the following two conditions hold: \begin{enumerate} \item\label{g has fixed pt} $g$ has a fixed point in every $G$-equivariant compactification of $V$, \item\label{g is wild} $g$ is of $p$-power order. \end{enumerate} The assertion 1 follows from
the Lefschetz fixed point formula. The key ingredients for 2 are $\ell$-independence of the trace and the ``projectivity'': the ``projectivity'' of $R\Gamma_c(V,\zz_\ell)$ as a complex of $\zz_\ell[G]$-modules implies, by Brauer theory, that the trace of an $\ell$-singular $g$ is zero, while the trace is independent of $\ell$, hence nonzero only for $g$ of $p$-power order.
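As a toy illustration of this vanishing (not taken from \cite{I}; it is only meant to make the mechanism concrete), let $G=\mathbb{Z}/\ell$ and let $P$ be a projective, hence free, module over $\Lambda[G]$ for a field $\Lambda$ of characteristic $\ell$. For $g\neq e$, the action of $g$ on the basis of $\Lambda[G]$ formed by the group elements is a permutation without fixed points, so
$$\mathop{\mathrm{Tr}}\nolimits(g,\Lambda[G])=\#\{h\in G\mid gh=h\}=0,$$
and hence $\mathop{\mathrm{Tr}}\nolimits(g,P)=0$. The analogous vanishing for perfect complexes of $\zz_\ell[G]$-modules is what rules out contributions of $\ell$-singular elements.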
By a similar argument, Vidal proved that having the same wild ramification is preserved by the direct image \cite[Th\'eor\`eme 0.1]{Vnodal} (see also \cite{V}, \cite{Yat}).
We prove Theorem \ref{rpsi lisse} by a similar argument with the Euler characteristic replaced by the nearby cycle complex.
We deduce, in Section \ref{general properties}, an intertwining formula for $R\psi(j_!\mathcal G)_{\bar x}$; for an element $\sigma$ of the wild inertia group, \begin{eqnarray}\label{int} \mathop{\mathrm{Tr}^{\mathrm{Br}}}\nolimits(\sigma,(R\psi j_!\mathcal G)_{\bar x})
=\frac{1}{|G|}\sum_{g\in G} \mathop{\mathrm{Tr}}\nolimits((g,\sigma),(R\psi(j_!h_{*}\qq_\ell))_{\bar x}) \cdot \mathop{\mathrm{Tr}^{\mathrm{Br}}}\nolimits(g,M) \end{eqnarray} from a similar canonical isomorphism $R\psi j_!\mathcal G \cong R\Gamma^G(R\psi(j_!h_{*}\Lambda)\otimes_\Lambda^LM), $ and the projectivity of $(R\psi(j_!h_{*}\qq_\ell))_{\bar x}$, where $h$ is the finite \'etale morphism $V\to U$.
To do this, we need some Brauer theory for profinite groups established in \cite{V} and recalled in Section \ref{Brauer}.
We prove that only terms with $g$ being ``wildly ramified'' contribute in our intertwining formula (\ref{int}). To this end, we establish, in Section \ref{fixed}, $\ell$-independence of the trace $\mathop{\mathrm{Tr}}\nolimits((g,\sigma),(R\psi(j_!h_{*}\qq_\ell))_{\bar x})$ and existence of a fixed geometric point when the trace is nonzero. These are proved at the same time by an argument similar to that in \cite{O} or \cite{V} (c.f., \cite{SaTweightspec}, \cite{M}, \cite{Katoell}).
We briefly recall an argument in \cite{O}. In \cite[Theorem 2.4]{O}, Ochiai proved an $\ell$-independence result for a variety over a local field, more precisely, for a variety $X$ over a strictly henselian discrete valuation field $K$ and for an element $\sigma$ of the absolute Galois group of $K$, the alternating sum $\sum_q(-1)^q\mathop{\mathrm{Tr}}\nolimits(\sigma,H_c^q(X_{\overline K},\qq_\ell))$ is an integer independent of $\ell$ distinct from the residual characteristic of $K$.
When $X$ is smooth proper and has potentially semi-stable reduction, the weight spectral sequence of Rapoport and Zink \cite{RZ} for a strictly semi-stable model of a scalar extension of $X$ gives a geometric interpretation of the alternating sum,
and then the $\ell$-independence follows from the Lefschetz trace formula.
Since we do not know if every smooth proper variety has potentially semi-stable reduction, Ochiai used, instead of potential semi-stability, the existence of a semi-stable alteration due to de\ Jong and argued by induction on $\dim X$.
To make his strategy work in our setting,
we establish an analogue of the weight spectral sequence for cohomology with coefficients in the nearby cycle complex in Section \ref{weight}. This is similar to an analogue of the weight spectral sequence given by Mieda in \cite{M}.
We recall the notions of having the same wild ramification and having universally the same conductors for two \'etale sheaves in Section \ref{compare}. The latter means, roughly speaking, having the same ranks and the same conductors after restricting to any curve. We also recall the main result of \cite{K} stating that the two notions are equivalent for sheaves on a ``surface''. Further we remove the assumption on dimension in the case where the schemes in consideration are algebraic varieties. We deduce Theorem \ref{rpsi lisse} in Section \ref{swr section} and Theorem \ref{swr same cc for lisse} with its consequences in Section \ref{cor}. We include Takeshi Saito's proof of Corollary \ref{curve lisse} in Section \ref{another pf}. Section \ref{ell adic} is a complement on a result on $\ell$-adic systems of complexes which is used in \cite{V} and \cite{Vnodal}, but does not seem to be written explicitly. Finally, in Section \ref{ex}, we include an example, due to Takeshi Saito, explaining that we need to consider modifications in the definition of ``having the same wild ramification''.
\paragraph{Acknowledgment} The author would like to thank his advisor Takeshi Saito for reading the manuscript carefully, for giving the numerous helpful comments, and for showing how to remove the assumption on dimension in an earlier version of Corollary \ref{curve lisse}. He also thanks Haoyu Hu for suggesting to use Temkin's result to prove Theorem \ref{swr sc var} and for sharing a proof of Theorem \ref{curve lisse} in the case of rank 1 sheaves on a surface. The research was supported by the Program for Leading Graduate Schools, MEXT, Japan and also by JSPS KAKENHI Grant Number 18J12981.
\paragraph{Conventions} \begin{itemize} \item Let $S$ be a noetherian scheme and $X$ an $S$-scheme separated of finite type. A {\it compactification} of $X$ over $S$ means a proper $S$-scheme containing $X$ as a dense open subscheme. By Nagata's compactification theorem, a compactification of $X$ over $S$ exists. \item Let $X$ be a noetherian scheme and $\mathcal F$ a complex (of \'etale sheaves) of $\Lambda$-modules on $X$. We say $\mathcal F$ is {\it constructible} if the cohomology sheaves $\mathcal H^q(\mathcal F)$ are constructible for all $q$ and zero except for finitely many $q$. \end{itemize}
\section{Preliminary on Brauer theory} \label{Brauer} We first recall some notions from \cite{V}. \begin{defn}[{\cite[1.1]{V}}]\label{proj} Let $\Lambda$ be a commutative ring and $G$ a pro-finite group. A {\it continuous $\Lambda[G]$-module} is a $\Lambda$-module $M$ equipped with an action of $G$ such that the stabilizer of each element of $M$ is an open subgroup of $G$. A morphism of continuous $\Lambda[G]$-modules is a homomorphism of $\Lambda$-modules which is compatible with the actions of $G$. We denote the category of continuous $\Lambda[G]$-modules by $\mathrm{Mod}_c(\Lambda[G])$. \end{defn} \begin{conv} \begin{enumerate} \item\label{proj cont} For a continuous $\Lambda[G]$-module $M$, we say $M$ is {\it projective} if it is projective in the category $\mathrm{Mod}_c(\Lambda[G])$. \item For a bounded above complex $K$ of continuous $\Lambda[G]$-modules, we say $K$ is {\it perfect} if it is quasi-isomorphic to a bounded complex $P$ of continuous $\Lambda[G]$-modules with each $P^i$ projective in the sense of \ref{proj cont} and finitely generated over $\Lambda$. \end{enumerate} \end{conv}
Here is a characterization of projectivity for continuous $\Lambda[G]$-modules: \begin{lem}[{\cite[1.2]{V}}] \label{char of proj} Let $M$ be a continuous $\Lambda[G]$-module finitely generated over $\Lambda$. Then $M$ is projective if and only if there exists an open normal subgroup $H$ of $G$ acting trivially on $M$, whose supernatural order is invertible in $\Lambda$, such that $M$ is projective as a $\Lambda[G/H]$-module. \end{lem}
As this characterization suggests, this notion of projectivity is useful only for pro-finite groups of the following type. \begin{defn}[{\cite[1.2]{V}}] \label{alm prime-to-ell} Let $\Lambda$ be a commutative ring and $G$ a pro-finite group. We say $G$ is {\it admissible} relative to $\Lambda$ if there exists an open subgroup of $G$ whose supernatural order is invertible in $\Lambda$. \end{defn}
The classical Brauer theory for representations of finite groups as in \cite{Se} is extended to the above framework. We recall a part of it from \cite{V}.
We denote by $K_\cdot(\Lambda[G])$ (resp.\ $K^\cdot(\Lambda[G])$) the Grothendieck group of the category of continuous $\Lambda[G]$-modules (resp.\ projective continuous $\Lambda[G]$-modules) finitely generated over $\Lambda$.
Let $\Lambda$ be a field of characteristic $\ell>0$ and $G$ a pro-finite group admissible relative to $\Lambda$. For a continuous $\Lambda[G]$-module $M$ and an element $g\in G$ which is $\ell$-regular, i.e., whose supernatural order is prime-to-$\ell$, the Brauer trace $\mathop{\mathrm{Tr}^{\mathrm{Br}}}\nolimits(g,M)$ is defined to be the sum $\sum_\lambda[\lambda]$, where $\lambda$ runs over the eigenvalues of $g$ acting on $M$, counted with multiplicity, and $[\cdot]$ denotes the Teichm\"uller character $\Lambda^\times\to E^\times$ into a sufficiently large $\ell$-adic field $E$. When $g\in G$ is not $\ell$-regular, we define $\mathop{\mathrm{Tr}^{\mathrm{Br}}}\nolimits(g,M)$ to be zero.
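As a simple illustration of the difference between the Brauer trace and the ordinary trace (this example plays no role in the sequel), take $\Lambda=\mathbb F_4$, so $\ell=2$, let $G=\zz/3\zz$ with generator $g$, and let $g$ act on $M=\Lambda^2$ with eigenvalues the two primitive cube roots of unity $\omega,\omega^2\in\mathbb F_4$. The ordinary trace is $\omega+\omega^2=1$ in $\mathbb F_4$, whereas, writing $\zeta_3=[\omega]$ for the Teichm\"uller lift in $E=\qq_2(\zeta_3)$, the Brauer trace is \begin{eqnarray*} \mathop{\mathrm{Tr}^{\mathrm{Br}}}\nolimits(g,M)=[\omega]+[\omega^2]=\zeta_3+\zeta_3^2=-1. \end{eqnarray*}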
Let $\mathcal O$ be a complete discrete valuation ring of mixed characteristic with residue field $\Lambda$ and maximal ideal $\mathfrak m$. Let $E$ denote the field of fractions of $\mathcal O$. Then we have natural maps \be\begin{xy}\xymatrix{ K^\cdot(\Lambda[G])&\ar[l]_{\mod\mathfrak m}K^\cdot(\mathcal O[G])\ar[r]^{e}& K_\cdot(E[G]) }\end{xy}\ee defined by $\otimes_\mathcal O\Lambda$ and $\otimes_\mathcal O E$, respectively.
These maps have the same properties as in the classical case (c.f. Corollary 3 in Chapter 14 and Theorem 36 in Chapter 16 of \cite{Se}):
\begin{lem}[{\cite[1.3]{V}}] \label{ell sing} \begin{enumerate} \item The map $\text{mod } \mathfrak m$ is an isomorphism. For every element $a\in K^\cdot(\mathcal O[G])$ and every $g\in G$, we have $\mathop{\mathrm{Tr}}\nolimits(g,a)=\mathop{\mathrm{Tr}^{\mathrm{Br}}}\nolimits(g,a\text{ \rm{mod} }\mathfrak m)$. \item Let $a$ be an element of $K_\cdot(E[G])$. Then $a$ lies in the image of the map $e:K^\cdot(\mathcal O[G])\to K_\cdot(E[G])$ if and only if we have $\mathop{\mathrm{Tr}}\nolimits(g,a)=0$ for every $g\in G$ of supernatural order divisible by $\ell$. \end{enumerate} \end{lem}
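To illustrate the assertion 2 (this example is not needed later), take $G=\zz/\ell\zz$, $\Lambda=\F_\ell$, $\mathcal O=\zz_\ell$, and $E=\qq_\ell$. The ring $\F_\ell[G]\cong\F_\ell[x]/(x^\ell-1)=\F_\ell[x]/(x-1)^\ell$ is local, and so is $\zz_\ell[G]$; hence finitely generated projective $\zz_\ell[G]$-modules are free and the image of $e$ consists of the multiples of the class of the regular representation $\qq_\ell[G]$, whose character vanishes at every $g\neq1$. In particular the class of the trivial representation $\qq_\ell$ is not in the image of $e$, in accordance with the lemma, since a generator $g$ has order $\ell$ and $\mathop{\mathrm{Tr}}\nolimits(g,\qq_\ell)=1\neq0$.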
We will use the following lemma to establish an ``intertwining formula'' for the nearby cycle functor. \begin{lem}[{\cite[1.3]{V}, c.f. \cite[1.4.7]{I}}] \label{proj times mod} Let $\Lambda$ be a finite field of characteristic $\ell$ and $G$ a pro-finite group which is admissible relative to $\Lambda$. Let $K$ be a finite normal subgroup of $G$. Then, for $(a,b)\in K^\cdot(\Lambda[G])\times K_\cdot(\Lambda[G])$ and $g\in G/K$, we have \begin{eqnarray*}
\mathop{\mathrm{Tr}^{\mathrm{Br}}}\nolimits(g,(ab)^K)=\frac{1}{|K|}\sum_{k\in K}\mathop{\mathrm{Tr}^{\mathrm{Br}}}\nolimits(\tilde gk,a)\cdot\mathop{\mathrm{Tr}^{\mathrm{Br}}}\nolimits(\tilde gk,b), \end{eqnarray*} where $\tilde g\in G$ is a representative of $g$. \end{lem}
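To fix ideas (this observation is only a sanity check and is not used below), note that when $K$ is the trivial subgroup the formula reduces to the multiplicativity of the Brauer trace, \begin{eqnarray*} \mathop{\mathrm{Tr}^{\mathrm{Br}}}\nolimits(g,ab)=\mathop{\mathrm{Tr}^{\mathrm{Br}}}\nolimits(g,a)\cdot\mathop{\mathrm{Tr}^{\mathrm{Br}}}\nolimits(g,b), \end{eqnarray*} and the general case averages this product over the fiber of $G\to G/K$ above $g$.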
To define the notion of having the same wild ramification, we will use the ``rational Brauer trace'' defined as follows. \begin{defn}[{c.f.\ \cite[Section 4]{SY}, \cite[Section 1]{Yat}}]\label{trd} Let $\Lambda$ be a finite field and $G$ a pro-finite group which is admissible relative to $\Lambda$. Let $M$ be an element of $K^\cdot(\Lambda[G])$. For $g\in G$, we take a finite extension $E$ of $\mathbb Q$ containing $\mathop{\mathrm{Tr}^{\mathrm{Br}}}\nolimits(g,M)$. We define the rational Brauer trace $\mathop{\mathrm{Tr}^{\mathrm{Br}}_\Q}\nolimits(g,M)$ by \begin{eqnarray*} \mathop{\mathrm{Tr}^{\mathrm{Br}}_\Q}\nolimits(g,M)=\frac{1}{[E:\mathbb Q]}\mathop{\mathrm{Tr}}\nolimits_{E/\mathbb Q}\mathop{\mathrm{Tr}^{\mathrm{Br}}}\nolimits(g,M). \end{eqnarray*} It is independent of the choice of $E$. \end{defn}
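For instance (purely as an illustration), take $\Lambda=\mathbb F_4$ and $G=\zz/3\zz$ with generator $g$, and let $M$ be the class of the one-dimensional representation on which $g$ acts by a primitive cube root of unity $\omega\in\mathbb F_4$; it lies in $K^\cdot(\Lambda[G])$ since $|G|$ is invertible in $\Lambda$. Then $\mathop{\mathrm{Tr}^{\mathrm{Br}}}\nolimits(g,M)=[\omega]=\zeta_3$ is a primitive cube root of unity in characteristic zero, and taking $E=\mathbb Q(\zeta_3)$ we obtain \begin{eqnarray*} \mathop{\mathrm{Tr}^{\mathrm{Br}}_\Q}\nolimits(g,M)=\frac{1}{2}\mathop{\mathrm{Tr}}\nolimits_{\mathbb Q(\zeta_3)/\mathbb Q}(\zeta_3)=\frac{\zeta_3+\zeta_3^2}{2}=-\frac{1}{2}. \end{eqnarray*}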
\begin{lem} [{c.f.\ \cite[Lemma 4.1]{SY}, \cite[Lemma 1.1, 1.2]{Yat}}] \label{formula trd} Let $\Lambda$ be a finite field, $G$ a finite group, and $p$ a prime number invertible in $\Lambda$. Then, for $M\in K_\cdot(\Lambda[G])$ and $g\in G$ of $p$-power order, we have equalities \begin{eqnarray*} \mathop{\mathrm{Tr}^{\mathrm{Br}}_\Q}\nolimits(g,M)&=&\frac{p\cdot\dim_\Lambda (M)^g-\dim_\Lambda (M)^{g^p}}{p-1}, \\ \frac{\dim_\Lambda(M)^g}{p-1} &=&\sum_{n=1}^\infty\frac{1}{p^n}\mathop{\mathrm{Tr}^{\mathrm{Br}}_\Q}\nolimits(g^{p^{n-1}},M), \end{eqnarray*} where the infinite sum in the second equality is understood as a limit in the field $\mathbb R$ of real numbers. \end{lem} \begin{proof} The first equality is nothing but \cite[Lemma 1.1]{Yat} and the second follows from the first by a telescoping argument. \end{proof}
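As a consistency check of the first equality (it is not needed later), let $G=\zz/p\zz$ with generator $g$ and let $M=\Lambda[G]$ be the regular representation. The eigenvalues of $g$ on $M$ are the $p$-th roots of unity, each with multiplicity one, so $\mathop{\mathrm{Tr}^{\mathrm{Br}}}\nolimits(g,M)=\sum_{\zeta^p=1}\zeta=0$ and hence $\mathop{\mathrm{Tr}^{\mathrm{Br}}_\Q}\nolimits(g,M)=0$, while $\dim_\Lambda(M)^g=1$ (the fixed part is spanned by $\sum_{h\in G}h$) and $\dim_\Lambda(M)^{g^p}=\dim_\Lambda M=p$, so that the right hand side equals $(p\cdot1-p)/(p-1)=0$ as well.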
\section{Projectivity and an intertwining formula for the nearby cycle functor} \label{general properties}
For a henselian discrete valuation field $K$, we denote by $I_K$ (resp.\ $P_K$) the inertia (resp.\ wild inertia) subgroup of the absolute Galois group of $K$.
\begin{defn} Let $X\to S$ be a morphism of schemes with $X$ strictly local, that is, the spectrum of a strictly henselian local ring. We say $X$ is {\it essentially of finite type }over $S$ if there exists an $S$-scheme $Y$ of finite type and a geometric point $y$ of $Y$ such that $X$ is isomorphic to the strict localization $Y_{(y)}$ over $S$. \end{defn}
\begin{lem}\label{projectivity} \StraitXlocal. \UopenVgaloisellprime. \begin{enumerate} \item The complex $(R\psi j_!h_*\zz/\ell^n\zz)_x$ is a perfect complex of continuous $\zz/\ell^n\zz[G\times P_K]$-modules for every $n\ge1$. \item\label{trbr equal ell adic tr} For every element $(g,\sigma)\in G\times P_K$, we have an equality \begin{eqnarray*} \mathop{\mathrm{Tr}^{\mathrm{Br}}}\nolimits((g,\sigma),(R\psi j_!h_*\F_\ell)_x) = \sum_i(-1)^i\mathop{\mathrm{Tr}}\nolimits((g,\sigma),(R^i\psi j_!h_*\qq_\ell)_x). \end{eqnarray*} \end{enumerate} \end{lem} \begin{proof} 1. Since $h_*\zz/\ell^n\zz$ is \'etale locally a constant sheaf of free $\zz/\ell^n\zz[G]$-modules, $j_!h_*\zz/\ell^n\zz$ is in $D^b_{ctf}(X_\eta,\zz/\ell^n\zz[G])$. Since constructibility and being of finite tor-dimension are preserved by the nearby cycle functor (\cite[Th\'eor\`eme 3.2]{fin} and \cite[2.1.13]{7xiii}), the complex $(R\psi j_!h_*\zz/\ell^n\zz)_x$ is in $D^b_{ctf}(x,\zz/\ell^n\zz[G])$. By \cite[Proposition 10.2.1]{Fu}, it is a perfect complex of $\zz/\ell^n\zz[G]$-modules. Thus the assertion follows from \cite[Lemma 1.3.2.(2)]{V}.
2. By the assertion 1, we can apply Proposition \ref{inverse limit of complexes} to the inverse system $((R\psi j_!h_*\zz/\ell^n\zz)_x)_n$ to obtain the desired equality. \end{proof}
In the proof of the intertwining formula (Lemma \ref{intertwine}), we use the following elementary lemma. \begin{lem}\label{gp coh} Let $h:V\to U$ be a $G$-torsor of schemes for a finite group $G$.
Let $\mathcal G$ be a sheaf of abelian groups on $U_{\text{\'et}}$. \begin{enumerate} \item The canonical morphism $\mathcal G\to (h_*h^*\mathcal G)^G$ is an isomorphism. \item If $h$ is \'etale, then $H^i(G,h_*h^*\mathcal G)=0$ for $i>0$, and hence we have a canonical isomorphism $\mathcal G\to R\Gamma(G,h_*h^*\mathcal G)$. \end{enumerate} \end{lem} \begin{proof} 1. This follows from the isomorphism $(h_*h^*\mathcal G)_x\cong\bigoplus_{y\mapsto x}\mathcal G_x$ at each geometric point $x$ of $U$, where $y$ runs over the geometric points of $V$ above $x$ and $G$ permutes the summands transitively.
2. Since group cohomology is compatible with pullback \cite[9.1]{Fu}, we may assume that $U$ is the spectrum of a separably closed field. Then the assertion follows from the vanishing of the usual group cohomology \cite[Proposition 1 in Chapter VII.\ \S.2]{corps}. \end{proof}
\begin{lem}[intertwining formula] \label{intertwine} \StraitXlocal. \UopenVgaloisLambda. \begin{enumerate} \item Let $\mathcal G$ be a locally constant constructible sheaf of $\Lambda$-modules on $U_{\text{\'et}}$
such that the pullback $\mathcal G|_V$ is constant. Let $M$ be the representation of $G$ defined by $\mathcal G$. Then we have a canonical isomorphism \begin{eqnarray*} R\psi j_!\mathcal G \cong R\Gamma^G(R\psi(j_!h_{*}\Lambda)\otimes_\Lambda^LM) \end{eqnarray*} in the derived category of sheaves of $\Lambda$-modules on $X_{\bar s}$ with a continuous action of $G\times I_K$. \item Let $\mathcal G$ be a locally constant constructible complex of $\Lambda$-modules on $U_{\text{\'et}}$
such that each $\mathcal H^q(\mathcal G|_V)$ is constant. We define a virtual representation $M$ of $G$ by $ M=\sum_q(-1)^q\Gamma(V,\mathcal H^q(\mathcal G)) $. Then, for every geometric point $x$ of $X_s$ and every $\sigma\in P_K$, we have an equality \begin{eqnarray} \label{inter} \mathop{\mathrm{Tr}^{\mathrm{Br}}}\nolimits(\sigma,(R\psi j_!\mathcal G)_x)
=\frac{1}{|G|}\sum_{g\in G} \mathop{\mathrm{Tr}}\nolimits((g,\sigma),(R\psi(j_!h_{*}\qq_\ell))_x) \cdot \mathop{\mathrm{Tr}^{\mathrm{Br}}}\nolimits(g,M), \end{eqnarray} where $\mathop{\mathrm{Tr}}\nolimits((g,\sigma),(R\psi(j_!h_{*}\qq_\ell))_x)$ is defined to be the alternating sum \begin{eqnarray*} \sum_i(-1)^i\mathop{\mathrm{Tr}}\nolimits((g,\sigma),(R^i\psi (j_!h_*\qq_\ell))_x). \end{eqnarray*} \end{enumerate} \end{lem}
\begin{proof} 1. By Lemma \ref{gp coh}, we have a canonical isomorphism $\mathcal G\cong R\Gamma^G(h_*h^*\mathcal G)$. Here, since $h^*\mathcal G$ is a constant sheaf, we have a canonical isomorphism $h_*\Lambda\otimes M\cong h_*h^*\mathcal G$, by the projection formula. Hence, we have canonical isomorphisms \begin{eqnarray*} R\psi j_!\mathcal G &\cong& R\psi(j_!R\Gamma^G(h_*\Lambda\otimes M))\\ &\cong& R\Gamma^GR\psi(j_!(h_*\Lambda\otimes M)) \\ &\cong& R\Gamma^G(R\psi(j_!h_*\Lambda)\otimes^LM). \end{eqnarray*} Here the second isomorphism comes from the commutativity of $R\Gamma^G$ with $j_!$ and $R\psi$ and the last one from the projection formula for the nearby cycle functor \cite[2.1.13]{7xiii}.
2. Since both sides are additive in $\mathcal G$, we may assume that $\mathcal G$ is a locally constant and constructible sheaf. By Lemma \ref{projectivity}.1, $(R\psi(j_!h_{*}\Lambda))_x$ defines a class $a\in K^\cdot(\Lambda[G\times P_K])$. We can apply Lemma \ref{proj times mod} to the class $a$ and the class $b\in K_\cdot(\Lambda[G\times P_K])$ defined by $M$. Then we obtain the desired equality using the assertion 1 and Lemma \ref{projectivity}.\ref{trbr equal ell adic tr}. \end{proof}
\section{An analogue of the weight spectral sequence} \label{weight} We establish an analogue of the weight spectral sequence of Rapoport and Zink \cite{RZ} for cohomology with coefficients in the nearby cycle complex.
We first recall the monodromy filtration on the nearby cycle complex and the identification of its graded pieces in the strictly semi-stable case from \cite{SaTweightspec}. Let $Y$ be a strictly semi-stable scheme over a strictly henselian trait $S=\mathop{\mathrm{Spec}}\nolimits\mathcal O_K$. We denote the closed point of $S$ by $s$. We take
a separable closure $\overline K$ of $K$. By \cite[4.5]{Gabber} (or \cite[Lemma 2.5.(1)]{SaTweightspec}), the nearby cycle complex $R\psi\qq_\ell$ is a perverse sheaf on the special fiber $Y_{s}$. We have a filtration, called the monodromy filtration, on $R\psi\qq_\ell$ in the category of perverse sheaves defined as follows. Let $t_\ell:\mathrm{Gal}(\overline K/K)\to \zz_\ell(1)$ be the canonical surjection defined by $\sigma\mapsto(\sigma(\pi^{1/\ell^m})/\pi^{1/\ell^m})_m$ for a prime element $\pi$ of $\mathcal O_K$. We choose an element $T\in\mathrm{Gal}(\overline K/K)$ such that $t_\ell(T)$ is a topological generator of $\zz_\ell(1)$. Then the endomorphism $N=T-1$ of $R\psi\qq_\ell$ is nilpotent \cite[Corollary 2.6]{SaTweightspec} and induces the unique increasing finite filtration $M_\bullet$ on $R\psi\qq_\ell$ such that $NM_{\bullet}\subset M_{\bullet-2}$ and such that $N^k$ induces $\mathrm{Gr}^M_k\cong\mathrm{Gr}^M_{-k}$ (\cite[Proposition 1.6.1]{weilii}).
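To illustrate the defining properties of $M_\bullet$ (this special case is only for orientation), suppose that $N^2=0$, as happens for example when $Y$ has relative dimension one. Then the monodromy filtration is the three-step filtration \begin{eqnarray*} 0=M_{-2}\subset M_{-1}=\mathrm{im}\,N\subset M_0=\ker N\subset M_1=R\psi\qq_\ell, \end{eqnarray*} with kernel and image taken in the category of perverse sheaves: the conditions $NM_\bullet\subset M_{\bullet-2}$ are immediate, and $N$ itself induces the required isomorphism $\mathrm{Gr}^M_1\cong\mathrm{Gr}^M_{-1}$.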
To give the identification of the graded pieces $\mathrm{Gr}^M_\bullet R\psi\qq_\ell$, we introduce some notations. We write the special fiber $Y_s$ as the union of its irreducible components; $Y_s=\bigcup_{i=1}^rD_i$. For a subset $I\subset\{1,\ldots,r\}$, we put $D_I=\cap_{i\in I}D_i$
and $Y^{(p)}=\coprod_{\substack{I\subset\{1,\ldots,r\}\\ |I|=p+1}}D_I$, for an integer $p\geq0$. We denote the natural morphism $Y^{(p)}\to Y_s$ by $a_{(p)}$. By \cite[Proposition 2.7]{SaTweightspec}, we have a canonical isomorphism \begin{eqnarray*} \mathrm{Gr}^M_pR\psi\qq_\ell \cong \bigoplus_{i\geq\max(0,p)}a_{(-p+2i)*}\qq_\ell(-i)[p-2i]. \end{eqnarray*} We consider a scheme $Z$ separated of finite type over a separably closed field $\Omega$ with a commutative diagram \begin{eqnarray}\label{equiv diagram} \begin{xy} \xymatrix{ Y_{s}\ar[d]&Z\ar[l]\ar[d]\\ s&\mathop{\mathrm{Spec}}\nolimits\Omega.\ar[l] } \end{xy} \end{eqnarray} Then, by \cite[Lemme 5.2.18]{SaM}, we get a natural spectral sequence \begin{eqnarray}\label{spec seq} E_1^{p,q}=\bigoplus_{i\geq\max(0,-p)}H_c^{q-2i}(Z^{(p+2i)},\qq_\ell)(-i) \Rightarrow H_c^{p+q}(Z,R\psi\qq_\ell), \end{eqnarray} where $Z^{(p)}$ is the fiber product $Y^{(p)}\times_{Y_s}Z$.
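To illustrate the shape of this spectral sequence (the following special case is not used later), suppose that $Y$ has relative dimension one and that $Y_s=D_1\cup D_2$ has exactly two irreducible components. Then $Z^{(0)}=(D_1\sqcup D_2)\times_{Y_s}Z$, $Z^{(1)}=(D_1\cap D_2)\times_{Y_s}Z$, and $Z^{(p)}=\emptyset$ for $p\geq2$, so the only possibly nonzero columns are \begin{eqnarray*} E_1^{-1,q}=H_c^{q-2}(Z^{(1)},\qq_\ell)(-1),\quad E_1^{0,q}=H_c^{q}(Z^{(0)},\qq_\ell),\quad E_1^{1,q}=H_c^{q}(Z^{(1)},\qq_\ell), \end{eqnarray*} recovering the familiar three-column shape of the weight spectral sequence for semi-stable curves.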
We note that if the structural morphism $Y\to S=\mathop{\mathrm{Spec}}\nolimits\mathcal O_K$ is equipped with an action of a finite group $G$ then the above constructions are equivariant with respect to natural actions. More precisely, let $K_0$ be the fixed subfield $K^G$ and write $H$ for the Galois group $\mathrm{Gal}(K/K_0)$. Since $\bar j:Y_{\overline K}\to Y$ has a natural $G\times_H\gal(\Kbar/K_0)$-structure, the nearby cycle complex $R\psi\qq_\ell=i^*R\bar j_*\qq_\ell$, where $i$ is the closed immersion $Y_s\to Y$, also has a natural $G\times_H\gal(\Kbar/K_0)$-structure. The monodromy filtration $M_\bullet$ on $R\psi\qq_\ell$ is a filtration by $G\times_H\gal(\Kbar/K_0)$-stable sub-perverse sheaves. Therefore, if we consider the trivial action on $\Omega$ and if the diagram (\ref{equiv diagram}) is $G$-equivariant, then the spectral sequence (\ref{spec seq}) is $G\times_H\gal(\Kbar/K_0)$-equivariant. Let us summarize the above argument. \begin{lem}\label{weight spec seq} Let $S$ be a strictly henselian trait with generic point $\mathop{\mathrm{Spec}}\nolimits K$ and closed point $s$. We consider a $G$-equivariant commutative diagram $$ \begin{xy} \xymatrix{ Y\ar[d]&Y_{s}\ar[l]\ar[d]&Z\ar[l]\ar[d]\\ S&s\ar[l]&\mathop{\mathrm{Spec}}\nolimits\Omega,\ar[l] } \end{xy} $$ such that the vertical arrows are morphisms separated of finite type, $\Omega$ is a separably closed field, and the actions of $G$ on $s$ and $\Omega$ are trivial. We write $K_0$ for the fixed subfield $K^G$ of $K$ and $H$ for the Galois group of the extension $K/K_0$. We assume that $Y$ is strictly semi-stable over $S$ and let $Z^{(p)}$ be as above. Then, we have a $G\times_H\gal(\Kbar/K_0)$-equivariant spectral sequence \begin{eqnarray*} E_1^{p,q}=\bigoplus_{i\geq\max(0,-p)}H_c^{q-2i}(Z^{(p+2i)},\qq_\ell)(-i) \Rightarrow H_c^{p+q}(Z,R\psi\qq_\ell), \end{eqnarray*} where $G\times_H\gal(\Kbar/K_0)$ acts on the $E_1$-terms through the surjection $G\times_H\gal(\Kbar/K_0)\to G$. \end{lem}
\section{$\ell$-independence and existence of a fixed point} \label{fixed} We prove the $\ell$-independence of the trace $\mathop{\mathrm{Tr}}\nolimits((g,\sigma),(R\psi(j_!h_{*}\qq_\ell))_{x})$ and the existence of a fixed geometric point when the trace is nonzero (Proposition \ref{general formulation}). Using these results, we prove that in the intertwining formula (\ref{inter}) only terms with $g$ being ``wildly ramified'' contribute (Proposition \ref{contribution}, Corollary \ref{str hens}): we use the existence of a fixed geometric point to deduce that $g$ is ``ramified'' when the trace is nonzero. Further, the $\ell$-independence combined with the projectivity of the nearby cycle complexes (Lemma \ref{projectivity}) and the Brauer theory (Lemma \ref{ell sing}) implies that $g$ is ``wild''
(Proposition \ref{contribution}.2).
The $\ell$-independence and the existence of a fixed geometric point are eventually reduced to the following ``geometric'' lemma. \begin{lem}\label{key lemma over k} Let $Z$ be a scheme separated of finite type over a separably closed field $\Omega$ and $G$ be a finite group acting on $Z$ by $\Omega$-automorphisms. Let $\ell$ be a prime number distinct from the characteristic of $\Omega$. Then for every $g\in G$ the following hold: \begin{enumerate} \item The alternating sum \begin{eqnarray}\label{alt sum over k} \sum_i(-1)^i\mathop{\mathrm{Tr}}\nolimits(g,H_c^i(Z,\qq_\ell)) \end{eqnarray} is an integer independent of the prime number $\ell$ distinct from the characteristic of $\Omega$. \item We take a $G$-equivariant compactification $\overline Z$ over $\Omega$. Assume further that the alternating sum (\ref{alt sum over k}) is nonzero. Then there exists a geometric point of $\overline Z$ fixed by $g$. \end{enumerate} \end{lem} This lemma seems to be well-known. It can be reduced to the case where $Z$ is smooth and proper over $\Omega$
using de\ Jong's result on alterations. Then it follows from the Lefschetz trace formula. But for completeness, we give a reference for the lemma: \begin{proof} The assertions are special cases of results in \cite{V}. The assertion 1 (resp.\ 2) follows from \cite[Proposition 4.2]{V} (resp.\ \cite[Proposition 5.1]{V}) by putting, under the notations there, $S=\mathop{\mathrm{Spec}}\nolimits\Omega[[t]]$, $V=Z\times_\Omega \Omega((t))$, $\eta_1=\mathop{\mathrm{Spec}}\nolimits\Omega((t))$, and $H=G$ (resp.\ $S=S_1=\mathop{\mathrm{Spec}}\nolimits\Omega[[t]]$, $Y_1=\overline Z\times_\Omega\Omega[[t]]$, $Z_1=Z\times_\Omega \Omega((t))$, $H=G$). \end{proof}
The following is the heart of this paper. \begin{prop}\label{general formulation} Let $S$ be a strictly henselian trait with generic point $\eta=\mathop{\mathrm{Spec}}\nolimits K$ and closed point $s$. Let $G$ be a finite group acting on $S$. Assume that the action of $G$ on $s$ is trivial. We consider the following $G$-equivariant commutative diagram \begin{eqnarray}\label{Z diagram} \begin{xy} \xymatrix{ V\ar[r]^j&Y_\eta\ar[r]\ar[d]&Y\ar[d]&Y_s\ar[l]\ar[d]&Z\ar[l]\ar[d]\\ &\eta\ar[r]&S&s\ar[l]&\mathop{\mathrm{Spec}}\nolimits\Omega,\ar[l] } \end{xy} \end{eqnarray} such that the vertical arrows are morphisms separated of finite type, $j:V\to Y_\eta$ is a dense open immersion, and $\Omega$ is a separably closed field with the trivial $G$-action. Let $\ell$ be a prime number invertible on $S$. We write $K_0$ for the fixed subfield $K^G$ of $K$ and $H$ for the Galois group of the extension $K/K_0$. Then, for every $(g,\sigma)\in G\times_H\gal(\Kbar/K_0)$, the following hold: \begin{enumerate} \item The eigenvalues of $(g,\sigma)$ acting on $H^i_c(Z,R\psi(j_!\qq_\ell))$ are roots of unity. \item The alternating sum \begin{eqnarray}\label{alt sum} \sum_i(-1)^i\mathop{\mathrm{Tr}}\nolimits((g,\sigma),H^i_c(Z,R\psi(j_!\qq_\ell))) \end{eqnarray} is an integer independent of the prime number $\ell$ invertible on $S$. \item
We take a $G$-equivariant compactification $\overline{Z}$ of $Z$ over $\Omega$. We suppose that the alternating sum (\ref{alt sum}) is nonzero. Then there exists a geometric point of $\overline{Z}$ which is fixed by $g$. \end{enumerate} \end{prop}
\begin{rem} Mieda also obtained a similar result in \cite[Theorem 6.1.3]{M}. He also treated the case of an algebraic correspondence. In the case of an algebraic correspondence coming from an automorphism of finite order, \cite[Theorem 6.1.3]{M} is equivalent to the special case of Proposition \ref{general formulation}.2 where $V=Y$, $\mathop{\mathrm{Spec}}\nolimits \Omega=s$, and $Z=Y_s$. \end{rem}
\begin{proof}[Proof of Proposition \ref{general formulation}] We will reduce the proof to the ``semi-stable'' case. First of all, by the fact that the formation of $R\psi$ is compatible with change of the trait $S$ (\cite[Proposition 3.7]{fin}), we may assume that $S$ is complete, in particular excellent (the excellentness will be needed to use a result of de\ Jong). We argue by induction on $d=\dim Y_\eta$ and use the following claims. The $d=0$ case is straightforward. \begin{claim}\label{can shrink} Let $V'$ be another $G$-stable dense open subscheme of $Y_\eta$. Then, under the induction hypothesis, the assertions for $V'$ are equivalent to those for $V$. \end{claim} \begin{proof} We may assume that $V'=Y_\eta$. Let $T$ be the complement $Y\setminus V$.
We denote by $i$ the closed immersion $T_\eta\to Y_\eta$. Then we have a $G\times_H\gal(\Kbar/K_0)$-equivariant long exact sequence \begin{eqnarray*} \cdots\to H_c^{n-1}(Z,R\psi i_*\qq_\ell) \to H_c^{n}(Z,R\psi j_!\qq_\ell) \to H_c^{n}(Z,R\psi \qq_\ell) \to\cdots. \end{eqnarray*} Since we have $H_c^{n}(Z,R\psi i_*\qq_\ell)\cong H_c^{n}(Z\times_{Y}T,R\psi_{T/S} \qq_\ell)$ and $\dim T_\eta<\dim Y_\eta$, we get the claim by the induction hypothesis. \end{proof} \begin{claim}\label{can extend the base} Let $K'$ be a finite quasi-Galois extension of $K$ which is also quasi-Galois over $K_0$ and $S'$ the normalization of $S$ in $K'$ with generic point and closed point denoted by $\eta'$ and $s'$.
We write $H'$ for the automorphism group $\mathrm{Aut}(K'/K_0)$ and $K'_0$ for the fixed subfield $(K')^{H'}$ and put $G'=G\times_HH'$. Then, the assertions for the $G$-equivariant diagram (\ref{Z diagram}) are equivalent to those for the $G'$-equivariant diagram \begin{eqnarray*} \begin{xy} \xymatrix{ V_{\eta'}\ar[r]^{j'}&Y_{\eta'}\ar[r]\ar[d]&Y\times_SS'\ar[d]&Y_{s'}\ar[l]\ar[d]&Z\times_{\mathop{\mathrm{Spec}}\nolimits\Omega}\mathop{\mathrm{Spec}}\nolimits\Omega'\ar[l]\ar[d]\\ &\eta'\ar[r]&S'&s'\ar[l]&\mathop{\mathrm{Spec}}\nolimits\Omega',\ar[l] } \end{xy} \end{eqnarray*} with $\Omega'$ being a separably closed field extension of $\Omega$. \end{claim} \begin{proof} This follows from the $G\times_H\gal(\Kbar/K_0)\cong G'\times_{H'}\mathrm{Gal}(\overline{K'}/K'_0)$-equivariant isomorphism $R\psi_{Y/S}j_!\qq_\ell\cong R\psi_{Y'/S'}j'_!\qq_\ell$. \end{proof}
\begin{claim}\label{red to alt} Assume that $Y$ is an integral scheme. Let $G'$ be a finite group with a surjection $G'\to G$. Let $Y'$ be an integral scheme and $(Y',G')\to (Y,G)$ a Galois alteration, i.e, a $G'$-equivariant proper generically finite surjection $Y'\to Y$ such that the fixed subfield $K(Y')^{\Gamma}$ of the function field $K(Y')$ of $Y'$ by $\Gamma=\ker(G'\to \mathrm{Aut}(Y))$ is purely inseparable over the function field $K(Y)$ of $Y$. We put $V'=V\times_YY'$ and $Z'=Z\times_{Y_s}Y'_s$. Then the assertions for the $G'$-equivariant diagram \begin{eqnarray}\label{altered diagram} \begin{xy} \xymatrix{ V'\ar[r]^{j'}&Y'_{\eta}\ar[r]\ar[d]&Y'\ar[d]&Y'_{s}\ar[l]\ar[d]&Z'\ar[l]\ar[d]\\ &\eta\ar[r]&S&s\ar[l]&\mathop{\mathrm{Spec}}\nolimits\Omega,\ar[l] } \end{xy} \end{eqnarray} imply those for the original diagram (\ref{Z diagram}). \end{claim} \begin{proof} By Claim \ref{can shrink} we may assume that the morphism $V'\to V$ is finite and that the natural morphism $V'\to V'/\Gamma$ is \'etale. Then, by Lemma \ref{Gamma fixed part} below, the assertion 1 for the original diagram (\ref{Z diagram}) follows from the assertion 1 for the altered diagram (\ref{altered diagram}). Further, by the same lemma, we have \begin{eqnarray*} \mathop{\mathrm{Tr}}\nolimits((g,\sigma),H_c^q(Z,R\psi_{Y/S} j_!\qq_\ell)) =
\frac{1}{|\Gamma|}\sum_{g'} \mathop{\mathrm{Tr}}\nolimits((g',\sigma),H_c^q(Z',R\psi_{Y'/S} j'_!\qq_\ell)), \end{eqnarray*} where $g'$ runs over elements of $G'$ which induce the same automorphism of $Y$ as $g$. Thus, the assertions 2 and 3 for the original diagram (\ref{Z diagram})
follow from those for the altered diagram (\ref{altered diagram}).
\end{proof} \begin{lem} \label{Gamma fixed part} The natural morphism $ H_c^i(Z,R\psi_{Y/S}(j_!\qq_\ell)) \to H_c^i(Z',R\psi_{Y'/S}(j'_!\qq_\ell))^\Gamma $ is an isomorphism. \end{lem} \begin{proof} In general, for an $\ell$-adic sheaf $\mathcal F$ on a scheme, equipped with an action of a finite group $G$ acting trivially on the scheme, we have a natural morphism $N:\mathcal F\to\mathcal F^G$ defined by $x\mapsto\sum_{g\in G}gx$.
We note that $N$ induces the multiplication-by-$|G|$ map on $\mathcal F^G$.
We will apply this observation to $\mathcal F=h_*h^*\qq_\ell$, where $h$ denotes the natural morphism $V'\to V$, to construct an inverse, up to multiplication by $|\Gamma|$, of the map in the assertion.
In the following we write simply $R\psi$ for $R\psi_{Y/S}$. By the proper base change theorem, we can identify the map in the assertion with the natural map \begin{eqnarray*} \alpha:H_c^i(Z,R\psi(j_!\qq_\ell))\to H_c^i(Z,R\psi(j_!h_*h^*\qq_\ell))^\Gamma \end{eqnarray*} induced by $\qq_\ell\to h_*h^*\qq_\ell$. We define a map \begin{eqnarray*} \beta:H_c^i(Z,R\psi(j_!h_*h^*\qq_\ell))\to H_c^i(Z,R\psi(j_!\qq_\ell)) \end{eqnarray*} to be the morphism induced by $N:h_*h^*\qq_\ell\to(h_*h^*\qq_\ell)^\Gamma\cong\qq_\ell$. Then, by the observation at the beginning of the proof,
$\beta\circ\alpha$ is the multiplication-by-$|\Gamma|$ map on $H_c^i(Z,R\psi(j_!\qq_\ell))$. Further, $\alpha\circ\beta$ is the map given by $x\mapsto\sum_{g\in\Gamma}gx$,
and hence it induces the multiplication-by-$|\Gamma|$ map on $H_c^i(Z,R\psi(j_!h_*h^*\qq_\ell))^\Gamma$. Since $|\Gamma|$ is invertible in $\qq_\ell$, the map $\alpha$ is an isomorphism, and the assertion follows. \end{proof}
By Claim \ref{can extend the base}, we may assume that every irreducible component of $Y_\eta$ is geometrically integral over $\eta$. By applying Claim \ref{red to alt} to the normalization $Y'\to Y$, we may assume that $Y$ is normal. By considering each orbit of $\langle g\rangle$ acting on the set of connected components of $Y$, we may assume that $\langle g\rangle$ acts transitively on the set of connected components of $Y$. But then the traces are nonzero only if $Y$ is connected. Thus, we may assume that $Y$ is normal and connected and that $Y_\eta$ is geometrically integral over $\eta$. Then, by \cite[Proposition 4.4.1]{V}, we can find a surjection $G'\to G$ of groups and a $G'$-equivariant diagram \begin{eqnarray}\label{alt diagram} \begin{xy}\xymatrix{ Y\ar[d]&Y\times_SS'\ar[l]\ar[d]&Y'\ar[l]_{\phi}\ar[ld]\\ S&S'\ar[l] }\end{xy} \end{eqnarray} such that \begin{itemize} \item $S'$ is a finite extension of $S$, \item $(Y',G')\to(Y,G)$ is a Galois alteration, \item $Y'$ is strictly semi-stable over $S'$. \end{itemize} By Claim \ref{can extend the base}, we may assume that $S'=S$. Further, by Claim \ref{red to alt}, we may assume that $Y$ is strictly semi-stable over $S$. By Claim \ref{can shrink}, we may further assume that $V=Y_\eta$.
By Lemma \ref{weight spec seq}, we have a $G\times_H\gal(\Kbar/K_0)$-equivariant spectral sequence \begin{eqnarray}\label{wss} E_1^{p,q}=\bigoplus_{i\geq\max(0,-p)}H_c^{q-2i}(Z^{(p+2i)},\qq_\ell)(-i) \Rightarrow H_c^{p+q}(Z,R\psi\qq_\ell), \end{eqnarray} where we use the notations in Section \ref{weight}. Since $G\times_H\gal(\Kbar/K_0)$ acts on the $E_1$-terms through the surjection $G\times_H\gal(\Kbar/K_0)\to G$, the assertion 1 follows. By the spectral sequence (\ref{wss}), we obtain an equality \begin{eqnarray*} \sum_{\substack{p,q\\i\geq\max(0,-p)}}(-1)^{p+q} \mathop{\mathrm{Tr}}\nolimits(g,H_c^{q-2i}(Z^{(p+2i)},\qq_\ell)) = \sum_i(-1)^i\mathop{\mathrm{Tr}}\nolimits((g,\sigma),H_c^{i}(Z,R\psi\qq_\ell)). \end{eqnarray*} Then the assertion 2 follows from Lemma \ref{key lemma over k}.1. Further, if the right hand side is nonzero, then the alternating sum \begin{eqnarray*} \sum_i(-1)^i\mathop{\mathrm{Tr}}\nolimits(g,H_c^i(Z^{(p)},\qq_\ell)) \end{eqnarray*} is nonzero for some $p\geq0$. We take a $G$-equivariant compactification $\overline{Z^{(p)}}$ of $Z^{(p)}$ over $\Omega$ with a morphism $\overline{Z^{(p)}}\to\overline Z$ extending the natural morphism $Z^{(p)}\to Z$. Then, by Lemma \ref{key lemma over k}.2, we find a geometric point of $\overline{Z^{(p)}}$ fixed by $g$, whose image in $\overline Z$ is a geometric point fixed by $g$. This concludes the proof.
\end{proof} \begin{defn} \label{ramified} Let $X$ be a normal connected scheme, $U$ a dense open subscheme of $X$, and $V\to U$ a Galois \'etale covering with Galois group $G$. We denote by $Y$ the normalization of $X$ in $V$. \begin{enumerate} \item We say an element $g\in G$ is {\it ramified on} $X$ if there exists a geometric point $y$ of $Y$ fixed by $g$. \item We say an element $g\in G$ is {\it wildly ramified on} $X$ if the order of $g$ is a power of a prime number $p$ and there exists a geometric point $y$ of $Y$ of residual characteristic $p$ which is fixed by $g$. \end{enumerate} \end{defn}
\begin{prop}\label{contribution} Let $S$ be a strictly henselian trait, $X$ an $S$-scheme of finite type, $j:U\to X_\eta$ a dense open immersion with $U$ being normal and connected, $h:V\to U$ a Galois \'etale covering with Galois group $G$, and $\ell$ a prime number distinct from the residual characteristic of $S$.
Let $(g,\sigma)\in G\times \mathrm{Gal}(\overline K/K)$ and $x$ be a geometric point of the closed fiber $X_s$. If $\mathop{\mathrm{Tr}}\nolimits((g,\sigma),(R\psi(j_!h_{*}\qq_\ell))_x)$ is nonzero, then \begin{enumerate} \item for any compactification $X'$ of $U$ over $X$, the element $g$ is ramified on $X'$ in the sense of Definition \ref{ramified},
\item the order of $g$ is a power of the residual characteristic $p$ of $S$. \end{enumerate} Thus, $g$ is wildly ramified on any compactification $X'$ of $U$ over $X$. \end{prop}
By a standard limit argument, we can deduce the following from Proposition \ref{contribution}. \begin{cor}\label{str hens} \StraitXlocal. Let $j:U\to X_\eta$ be a dense open immersion with $U$ normal, $h:V\to U$ a Galois \'etale covering with Galois group $G$, and $\ell$ a prime number distinct from the residual characteristic of $S$. Then for every $(g,\sigma)\in G\times\mathrm{Gal}(\overline K/K)$, if $\mathop{\mathrm{Tr}}\nolimits((g,\sigma),(R\psi(j_!h_{*}\qq_\ell))_x)$ is nonzero, then $g$ is wildly ramified on any compactification $X'$ of $U$ over $X$. \end{cor}
\begin{rem} In the case where the compactification $X'$ is $X$ itself, Proposition \ref{contribution}.1 is almost trivial, because we have a canonical isomorphism \begin{eqnarray*} (R\psi(j_!h_{*}\qq_\ell))_x\cong\bigoplus_{y\mapsto x}(R\psi(j'_!\qq_\ell))_y, \end{eqnarray*} where $Y$ denotes the normalization of $X$ in $V$, $j'$ denotes the open immersion $V\to Y_\eta$, and the direct sum is taken over geometric points $y$ of $Y$ lying above $x$. \end{rem} \begin{proof}[Proof of Proposition \ref{contribution}] 1. Let $Y'$ be the normalization of $X'$ in $V$. Put $Z=Y'_s\times_{X_s}x$. Then, by the proper base change theorem, we have a canonical isomorphism $(R\psi(j_!h_{*}\qq_\ell))_x\cong R\Gamma(Z,R\psi j'_!\qq_\ell)$. Thus, the alternating sum \begin{eqnarray*} \sum_i(-1)^i\mathop{\mathrm{Tr}}\nolimits((g,\sigma),H^i(Z,R\psi j'_!\qq_\ell)) \end{eqnarray*} is nonzero.
Applying Proposition \ref{general formulation}.3 to the diagram $$ \begin{xy} \xymatrix{ V\ar[r]^{j'}&Y'_\eta\ar[r]\ar[d]&Y'\ar[d]&Y'_s\ar[l]\ar[d]&Z\ar[l]\ar[d]\\ &\eta\ar[r]&S&s\ar[l]&x,\ar[l] } \end{xy} $$ we find a geometric point of $Z$ fixed by $g$, which gives a geometric point of $Y'_s$ over $x$ which is fixed by $g$.
2. Let $\ell'$ be a prime number distinct from $p$. By Lemma \ref{projectivity}.1, the complex $(R\psi j_!h_*\mathbb F_{\ell'})_x$ is a perfect complex of $\mathbb F_{\ell'}[G\times P_K]$-modules, and hence, by Lemma \ref{ell sing}.1, we can take an element $a\in K^\cdot(\mathbb Z_{\ell'}[G\times P_K])$ whose reduction modulo $\ell'$ is the class of $(R\psi j_!h_*\mathbb F_{\ell'})_x$ in $K^\cdot(\mathbb F_{\ell'}[G\times P_K])$. Further, by Lemma \ref{projectivity}.2, we have an equality \begin{eqnarray*} \mathop{\mathrm{Tr}^{\mathrm{Br}}}\nolimits((g,\sigma),(R\psi j_!h_*\mathbb F_{\ell'})_x) = \sum_i(-1)^i\mathop{\mathrm{Tr}}\nolimits((g,\sigma),(R^i\psi j_!h_*\mathbb Q_{\ell'})_x). \end{eqnarray*} By Lemma \ref{ell sing}.2, if the left hand side is nonzero, then the order of $g$ is prime-to-$\ell'$. Here, by Proposition \ref{general formulation}.2 applied to $Z=\mathop{\mathrm{Spec}}\nolimits\Omega=x$, the right hand side is an integer independent of $\ell'\ne p$. Thus, if the alternating sum is nonzero, then $g$ is of $p$-power order. \end{proof}
\section{Comparing wild ramification of \'etale sheaves}\label{compare} We recall the notions of having the same wild ramification and of having universally the same conductors in Subsections \ref{subsec swr} and \ref{subsec sc} respectively, and then, in Subsection \ref{relation}, recall the main theorem of \cite{K} in Theorem \ref{swr sc}, which states that the two notions are equivalent for sheaves on a ``surface''. Further, in Theorem \ref{swr sc var}, we remove the assumption on the dimension in the case where the schemes under consideration are algebraic varieties.
\subsection{The notion of having the same wild ramification}\label{subsec swr}
The notion of having the same wild ramification was originally introduced by Deligne and Illusie (c.f.\ \cite[Th\'eor\`eme 2.1]{I}, \cite[Definition 2.2.1, Definition 2.3.1]{V}, \cite[Definition 5.1]{SY}, and \cite[Definition 2.2]{K}). Here we follow \cite{K} except that we use the rational Brauer trace (Definition \ref{trd}) instead of the dimensions of fixed parts.
We prefer the rational Brauer trace because the intertwining formula for the rational Brauer trace is given in the same form as that for the Brauer trace (see Lemma \ref{int trd}), whereas that for the dimensions of fixed parts is more complicated. The definition remains equivalent even after this change (Lemma \ref{swr trd}). \begin{defn}
\label{swr} Let $S$ be an excellent noetherian scheme and $U$ an $S$-scheme separated of finite type. Let $\Lambda$ and $\Lambda'$ be finite fields whose characteristics are invertible on $S$. Let $\mathcal F$ and $\mathcal F'$ be constructible complexes of $\Lambda$-modules and $\Lambda'$-modules respectively on $U_{\text{\'et}}$. \begin{enumerate} \item\label{smooth case} Assume that $U$ is normal and connected and that $\mathcal H^q(\mathcal F)$ and $\mathcal H^q(\mathcal F')$ are locally constant for all $q$. We take a Galois \'etale covering $V\to U$, with Galois group $G$, such that
$\mathcal H^q(\mathcal F|_V)$ and $\mathcal H^q(\mathcal F'|_V)$ are constant for all $q$. We define virtual representations $M$ and $M'$ of $G$ by $ M=\sum_q(-1)^q\Gamma(V,\mathcal H^q(\mathcal F)) $ and $ M'=\sum_q(-1)^q\Gamma(V,\mathcal H^q(\mathcal F')) $. We say $\mathcal F$ and $\mathcal F'$ {\it have the same wild ramification} over $S$ if there exists a normal compactification $X$ of $U$ over $S$ such that
for every element $g\in G$ which is wildly ramified on $X$ (in the sense of Definition \ref{ramified}), we have $\mathop{\mathrm{Tr}^{\mathrm{Br}}_\Q}\nolimits(g,M)=\mathop{\mathrm{Tr}^{\mathrm{Br}}_\Q}\nolimits(g,M')$.
\item In general, we say $\mathcal F$ and $\mathcal F'$ {\it have the same wild ramification} over $S$ if there exists a finite partition $U=\coprod_iU_i$ such that each $U_i$ is a normal and locally closed subset of $U$,
that $\mathcal H^q(\mathcal F|_{U_i})$ and $\mathcal H^q(\mathcal F'|_{U_i})$ are locally constant for all $q$,
and that $\mathcal F|_{U_i}$ and $\mathcal F'|_{U_i}$ have the same wild ramification over $S$ in the sense of \ref{smooth case}. \end{enumerate} \end{defn}
Lemma \ref{formula trd} implies that the above definition is compatible with that in Section \ref{intro} and \cite[Section 2]{K} (and also with \cite[Definition 5.1]{SY} when the complexes $\mathcal F$ and $\mathcal F'$ are sheaves): \begin{lem}\label{swr trd} Under the notations and assumptions in Definition \ref{swr}.1, $\mathcal F$ and $\mathcal F'$ have the same wild ramification over $S$ if and only if there exists a normal compactification $X$ of $U$ over $S$ such that for every element $g\in G$ which is wildly ramified on $X$ (in the sense of Definition \ref{ramified}), we have $\dim_\Lambda(M)^g=\dim_{\Lambda'}(M')^g$. \end{lem}
We recall a valuative criterion for having the same wild ramification, which we will use in the proof of Theorem \ref{swr sc var}.
Let $\mathcal O_F$ be a strictly henselian valuation ring with field of fractions $F$.
We choose a separable closure $\overline F$ of $F$. The wild inertia subgroup of the absolute Galois group $G_F=\mathrm{Gal}(\overline F/F)$ of $F$ is defined to be the unique (pro-)$p$-Sylow subgroup of $G_F$, which we denote by $P_F$.
Let $S$ be an excellent noetherian scheme, $U$ a scheme separated of finite type over $S$ which is normal and connected, and $V\to U$ a $G$-torsor for a finite group $G$. We consider commutative diagrams of the form \begin{eqnarray}\label{val diagram} \begin{xy}\xymatrix{ V\ar[d]&\mathop{\mathrm{Spec}}\nolimits\overline F\ar[l]\ar[d]\\ U\ar[d]&\mathop{\mathrm{Spec}}\nolimits F\ar[l]\ar[d]\\ S&\mathop{\mathrm{Spec}}\nolimits\mathcal O_F,\ar[l] }\end{xy}\end{eqnarray} with $\mathcal O_F$ a strictly henselian valuation ring and $\overline F$ a separable closure of the field of fractions $F$ of $\mathcal O_F$. For each commutative diagram (\ref{val diagram}), we have a natural map $\mathrm{Gal}(\overline F/F)\to G$.
\begin{lem}[{\cite[Section 6]{Vnodal}, c.f.\ \cite[Lemma 2.4]{Katoell}}]\label{val ram} For $g\in G$, the following are equivalent \begin{enumerate} \item $g$ is ramified (resp.\ wildly ramified) on every compactification of $U$ over $S$, \item there exists a commutative diagram (\ref{val diagram}) and an element $\sigma\in \mathrm{Gal}(\overline F/F)$ (resp.\ an element $\sigma\in P_F$ of the wild inertia subgroup) such that $g$ is the image of $\sigma$ by the natural map $\mathrm{Gal}(\overline F/F)\to G$. \end{enumerate} Further, if $U$ is regular, then the above conditions are equivalent to \begin{enumerate}\setcounter{enumi}{2} \item there exists a commutative diagram (\ref{val diagram}) such that the image of $\mathop{\mathrm{Spec}}\nolimits F$ in $U$ is the generic point and an element $\sigma\in \mathrm{Gal}(\overline F/F)$ (resp.\ an element $\sigma\in P_F$ of the wild inertia subgroup) such that $g$ is the image of $\sigma$ by the natural map $\mathrm{Gal}(\overline F/F)\to G$. \end{enumerate} \end{lem}
Let $S$ be an excellent noetherian scheme and $U$ a scheme separated of finite type over $S$ which is normal and connected. We consider commutative diagrams of the form \begin{eqnarray}\label{val diagram 2} \begin{xy}\xymatrix{ &\overline \eta=\mathop{\mathrm{Spec}}\nolimits\overline F\ar[d]\\ U\ar[d]&\mathop{\mathrm{Spec}}\nolimits F\ar[l]\ar[d]\\ S&\mathop{\mathrm{Spec}}\nolimits\mathcal O_F,\ar[l] }\end{xy}\end{eqnarray} with $\mathcal O_F$ a strictly henselian valuation ring and $\overline F$ a separable closure of the field of fractions $F$ of $\mathcal O_F$. Lemma \ref{val ram} immediately implies the following valuative criterion for having the same wild ramification. \begin{lem}\label{val criterion}
Let $\mathcal F$ and $\mathcal F'$ be constructible complexes of $\Lambda$-modules and $\Lambda'$-modules respectively on $U_{\text{\'et}}$, for finite fields $\Lambda$ and $\Lambda'$ whose characteristics are invertible on $S$, such that $\mathcal H^q(\mathcal F)$ and $\mathcal H^q(\mathcal F')$ are locally constant for every $q$. Then the following are equivalent; \begin{enumerate} \item $\mathcal F$ and $\mathcal F'$ have the same wild ramification over $S$, \item for every commutative diagram (\ref{val diagram 2}) and for every element $\sigma\in P_F$ of the wild inertia subgroup, we have \begin{eqnarray*} \mathop{\mathrm{Tr}^{\mathrm{Br}}_\Q}\nolimits(\sigma,\mathcal F_{\overline\eta})=\mathop{\mathrm{Tr}^{\mathrm{Br}}_\Q}\nolimits(\sigma,\mathcal F'_{\overline\eta}). \end{eqnarray*}
\end{enumerate} Further, if $U$ is regular, then the above conditions are equivalent to \begin{enumerate}\setcounter{enumi}{2} \item for every commutative diagram (\ref{val diagram 2}) such that the image of $\mathop{\mathrm{Spec}}\nolimits F$ in $U$ is the generic point, for every generic geometric point $\overline \eta$, and for every element $\sigma\in P_F$ of the wild inertia subgroup, we have \begin{eqnarray*} \mathop{\mathrm{Tr}^{\mathrm{Br}}_\Q}\nolimits(\sigma,\mathcal F_{\overline\eta})=\mathop{\mathrm{Tr}^{\mathrm{Br}}_\Q}\nolimits(\sigma,\mathcal F'_{\overline\eta}). \end{eqnarray*} \end{enumerate}
\end{lem}
We recall some elementary properties of the notion of having the same wild ramification, which follow immediately from the valuative criterion.
\begin{lem}\label{list swr} Let $S$ be an excellent noetherian scheme, $U$ an $S$-scheme separated of finite type, and $\mathcal F$ and $\mathcal F'$ constructible complexes on $U_{\text{\'et}}$. \begin{enumerate} \item(\cite[Lemma 2.4]{K}). Having the same wild ramification is preserved by pullback, that is, if we have a commutative diagram \be\begin{xy}\xymatrix{U'\ar[r]^h\ar[d]&U\ar[d]\\S'\ar[r]&S, }\end{xy}\ee of excellent noetherian schemes with $U'\to S'$ separated of finite type and if $\mathcal F$ and $\mathcal F'$ have the same wild ramification over $S$, then $h^*\mathcal F$ and $h^*\mathcal F'$ have the same wild ramification over $S'$.
\item(\cite[Lemma 3.9]{K}). Assume that there exists a $G$-torsor $V\to U$ for a finite group $G$ such that $\mathcal H^q(\mathcal F)|_V$ and $\mathcal H^q(\mathcal F')|_V$ are constant for every $q$. For an element $\sigma\in G$, we denote the quotient $V/\langle\sigma\rangle$ by $V_\sigma$.
Then $\mathcal F$ and $\mathcal F'$ have the same wild ramification over $S$
if and only if $\mathcal F|_{V_\sigma}$ and $\mathcal F'|_{V_\sigma}$ have the same wild ramification over $S$ for every element $\sigma\in G$ of prime-power order. \end{enumerate} \end{lem}
\subsection{The notion of having universally the same conductors}\label{subsec sc} For two constructible complexes, having universally the same conductors means, roughly speaking, that the two complexes have the same Artin conductors after restricting to any curve. When we talk about ``having universally the same conductors'', we work over a base scheme with the property that every closed point has a perfect residue field. This assumption on the base is made just to work with the classical Artin conductor and can be removed using Abbes-Saito's theory \cite{AS}.
For a henselian trait $T$ with generic point $\eta$ and closed point $t$ with algebraically closed residue field and for a constructible complex $\mathcal{F}$ of $\Lambda$-modules on $T$, the Artin conductor $a(\mathcal{F})$ is defined by $a(\mathcal{F})=\mathop{\mathrm{rk}}\nolimits(\mathcal{F}_{\bar\eta})-\mathop{\mathrm{rk}}\nolimits(\mathcal{F}_{t})+\mathop{\mathrm{Sw}}\nolimits(\mathcal{F}_{\bar\eta})$. For the definition of the Swan conductor $\mathop{\mathrm{Sw}}\nolimits(\mathcal F_{\bar\eta})$, see \cite[19.3]{Se}.
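For instance (these elementary examples only illustrate the definition and are not used later): if $\mathcal F$ is a locally constant constructible sheaf on all of $T$, then it is constant since $T$ is strictly henselian, so $\mathop{\mathrm{rk}}\nolimits(\mathcal F_{\bar\eta})=\mathop{\mathrm{rk}}\nolimits(\mathcal F_{t})$ and $\mathop{\mathrm{Sw}}\nolimits(\mathcal F_{\bar\eta})=0$, whence $a(\mathcal F)=0$; if $\mathcal F=j_!\mathcal G$ for the open immersion $j:\eta\to T$ and a locally constant sheaf $\mathcal G$ of rank $r$ on $\eta$, then \begin{eqnarray*} a(\mathcal F)=r-0+\mathop{\mathrm{Sw}}\nolimits(\mathcal G_{\bar\eta}), \end{eqnarray*} which equals $r$ exactly when $\mathcal G$ is tamely ramified.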
Let $X$ be a regular scheme of dimension one whose closed points have perfect residue fields and let $\mathcal{F}$ be a constructible complex of $\Lambda$-modules on $X$.
For a geometric point $x$ over a closed point of $X$,
the Artin conductor $a_x(\mathcal{F})$ at $x$ is defined by $a_x(\mathcal{F})=a(\mathcal{F}|_{X_{(x)}})$.
Let $X$ be an integral $S$-scheme separated of finite type. We say $X$ is an $S$-{\it curve} if $X$ has a compactification over $S$ which is of dimension 1.
\begin{defn}[{\cite[Definition 2.5]{K}}] \label{sc} Let $S$ be an excellent noetherian scheme such that the residue field of every closed point is perfect, and $U$ an $S$-scheme separated of finite type. Let $\mathcal F$ and $\mathcal F'$ be constructible complexes of $\Lambda$-modules and $\Lambda'$-modules respectively on $U$, for finite fields $\Lambda$ and $\Lambda'$ of characteristics invertible on $S$.
We say $\mathcal F$ and $\mathcal F'$ {\it have universally the same conductors over} $S$ if, for every morphism $g:C\to U$ from a regular $S$-curve $C$ and for every geometric point $v$ over a closed point of a regular compactification $\overline C$ of $C$ over $S$, we have an equality $a_v(j_!g^*\mathcal F)=a_v(j_!g^*\mathcal F')$, where $j$ denotes the open immersion $C\to \overline C$.
\end{defn}
\subsection{Relation between the two notions}\label{relation} We can easily see that having the same wild ramification implies having universally the same conductors.
The main theorem of \cite{K} states that the converse holds if $U$ is a ``surface'': \begin{thm}[{\cite[Theorem 3.2]{K}}] \label{swr sc} Let $S$ be an excellent noetherian scheme such that the residue field of every closed point is perfect. Let $U$ be an $S$-scheme separated of finite type which has a compactification over $S$ of dimension $\le2$. Let $\mathcal F$ and $\mathcal F'$ be constructible complexes of $\Lambda$-modules and $\Lambda'$-modules respectively on $U_{\text{\'et}}$, for finite fields $\Lambda$ and $\Lambda'$ of characteristics invertible on $U$. Then, the following are equivalent; \begin{enumerate} \item $\mathcal F$ and $\mathcal F'$ have the same wild ramification over $S$, \item $\mathcal F$ and $\mathcal F'$ have universally the same conductors over $S$. \end{enumerate} \end{thm}
We prove the following refinement of Theorem \ref{swr sc} in the case where $U$ and $S$ are algebraic varieties. \begin{thm}\label{swr sc var} Let $S$ be a scheme of finite type over a perfect field of characteristic $p$ and $U\to S$ a morphism separated of finite type. Let $\mathcal F$ and $\mathcal F'$ be constructible complexes of $\Lambda$-modules and $\Lambda'$-modules on $U_{\text{\'et}}$ respectively, for finite fields $\Lambda$ and $\Lambda'$ of characteristic distinct from $p$. Then the following are equivalent; \begin{enumerate} \item $\mathcal F$ and $\mathcal F'$ have the same wild ramification over $S$, \item $\mathcal F$ and $\mathcal F'$ have universally the same conductors over $S$. \end{enumerate} \end{thm}
We recall that in \cite{K}, Theorem \ref{swr sc} is reduced to Lemma \ref{reg case} below using resolution of singularities, for which we need the assumption on dimension.
\begin{lem}[{\cite[Lemma 3.7]{K}}] \label{reg case} Under the notation in Definition \ref{sc}, assume that there exists a $\mathbb Z/p^e\mathbb Z$-torsor $V\to U$ for some $e\ge0$
such that $\mathcal H^q(\mathcal F)|_V$ and $\mathcal H^q(\mathcal F')|_V$ are constant for all $q$. Further we assume that $U$ admits a regular compactification over $S$. Then, $\mathcal F$ and $\mathcal F'$ have the same wild ramification over $S$ if they have universally the same conductors over $S$. \end{lem}
We reduce Theorem \ref{swr sc var} to Lemma \ref{reg case} using, instead of resolution of singularities, purely inseparable local uniformization due to Temkin (Theorem \ref{temkin}).
\begin{defn}\label{local unif} Let $X$ be an integral noetherian scheme. \begin{enumerate} \item A valuation ring $R$ with field of fractions $K(X)$ is {\it centered on }$X$ if the natural morphism $\mathop{\mathrm{Spec}}\nolimits K(X)\to X$ factors through $\mathop{\mathrm{Spec}}\nolimits K(X)\to \mathop{\mathrm{Spec}}\nolimits R$. \item Let $Y_1,\ldots,Y_m$ be integral schemes which are separated of finite type over $X$.
We say $Y=\coprod_{i=1}^mY_i\to X$ is an $h$-{\it covering} if each $Y_i\to X$ is a generically finite dominant morphism and if, for every valuation ring $R$ with field of fractions $K(X)$ centered on $X$, there exists a valuation ring with field of fractions $K(Y_i)$ for some $i$ which is centered on $Y_i$ and dominates $R$.
\item We say an $h$-covering $Y=\coprod_iY_i\to X$ is {\it generically purely inseparable} if the field extensions $K(Y_i)/K(X)$ are purely inseparable. \end{enumerate} \end{defn}
\begin{thm}[{\cite[Corollary 1.3.3]{Temkin}}]\label{temkin} Let $X$ be an integral scheme of finite type over a field $k$. Then there exists a generically purely inseparable $h$-covering $Y\to X$ with $Y$ regular. \end{thm}
\begin{lem}\label{h descent} Let $S$ be an excellent noetherian scheme of characteristic $p>0$, $U$ an $S$-scheme separated of finite type, and $X$ a compactification of $U$ over $S$. Let $\Lambda$ and $\Lambda'$ be finite fields of characteristic distinct from $p$. Let $\mathcal F$ and $\mathcal F'$ be constructible complexes of $\Lambda$-modules and $\Lambda'$-modules on $U_{\text{\'et}}$ respectively such that $\mathcal H^q(\mathcal F)$ and $\mathcal H^q(\mathcal F')$ are locally constant for every $q$. Let $f:Y=\coprod_{i=1}^mY_i\to X$ be a generically purely inseparable $h$-covering. Then the following are equivalent; \begin{enumerate} \item $\mathcal F$ and $\mathcal F'$ have the same wild ramification over $S$, \item the pullbacks $f_U^*\mathcal F$ and $f_U^*\mathcal F'$ have the same wild ramification over $S$, where $f_U:Y\times_XU\to U$ is the base change of $f$. \end{enumerate} \end{lem}
\begin{proof} Since having the same wild ramification is preserved by pullback (Lemma \ref{list swr}.1), the implication $1\Rightarrow2$ follows. We prove the converse $2\Rightarrow 1$. We assume that the pullbacks $f_U^*\mathcal F$ and $f_U^*\mathcal F'$ have the same wild ramification over $S$. We show that $\mathcal F$ and $\mathcal F'$ satisfy the condition 3 in Lemma \ref{val criterion}. Let \be\begin{xy}\xymatrix{
U\ar[d]&\mathop{\mathrm{Spec}}\nolimits F\ar[l]\ar[d]\\ S&\mathop{\mathrm{Spec}}\nolimits\mathcal O_F,\ar[l] }\end{xy}\ee be a diagram with $\mathcal O_F$ a strictly henselian valuation ring and $F$ the field of fractions of $\mathcal O_F$. We assume that the image of $\mathop{\mathrm{Spec}}\nolimits F\to U$ is the generic point.
By the valuative criterion of proper morphisms \cite[Th\'eor\`eme 7.3.8]{ega2}, there exists a unique $S$-morphism $\mathop{\mathrm{Spec}}\nolimits\mathcal O_F\to X$ extending the morphism $\mathop{\mathrm{Spec}}\nolimits F\to U\subset X$. Then, the subring $R=K(X)\cap \mathcal O_F$ of $K(X)$ is a valuation ring with field of fractions $K(X)$ and is centered on $X$. By the definition of $h$-coverings, there exists a valuation ring $R'$ of $K(Y_i)$ for some $i$ which is centered on $Y_i$ and dominates $R$. We denote the residue field of $\mathcal O_F$ (resp.\ $R$) by $\kappa_F$ (resp.\ $\kappa_R$). We may assume that $\mathcal O_F$ dominates $R$ and that $\mathcal O_F$ is the strict henselization of the valuation ring $R$ along the inclusion $\kappa_R\to\kappa_F$. We take an algebraic closure $\kappa_F^{\rm alg}$ of $\kappa_F$ and an embedding of the residue field $\kappa_{R'}$ of $R'$ into $\kappa_F^{\rm alg}$. Let $\mathcal O_E$ be the strict henselization of $R'$ along the embedding $\kappa_{R'}\to\kappa_F^{\rm alg}$ and $E$ its field of fractions. Since $E$ is purely inseparable over $F$, the map $G_E\to G_F$ of absolute Galois groups is bijective. Thus, the assertion follows.
\end{proof}
\begin{proof}[Proof of Theorem \ref{swr sc var}] By devissage, we may assume that $U$ is regular and connected and that $\mathcal H^q(\mathcal F)$ and $\mathcal H^q(\mathcal F')$ are locally constant for every $q$. We take a $G$-torsor $V\to U$ for some finite group $G$
such that the pullbacks $\mathcal H^q(\mathcal F)|_V$ and $\mathcal H^q(\mathcal F')|_V$ are constant for every $q$.
By Lemma \ref{list swr}.2, we may assume that $G\cong\mathbb Z/p^e\mathbb Z$ for some $e\ge0$.
We take a compactification $X$ of $U$ over $S$. By Theorem \ref{temkin}, we can take a generically purely inseparable $h$-covering $Y\to X$ with $Y$ regular, and hence, by Lemma \ref{h descent}, we may assume that $X$ is regular. Then the assertion follows from Lemma \ref{reg case}. \end{proof}
\section{Wild ramification and nearby cycle complex} \label{swr section} We deduce one of our main theorems in Theorem \ref{rpsi have swr} from the intertwining formula for $\mathop{\mathrm{Tr}^{\mathrm{Br}}_\Q}\nolimits$ (Lemma \ref{int trd}) and the wildness of the terms contributing to the formula (Corollary \ref{str hens}).
\begin{lem}\label{int trd} We use the same notations as in Lemma \ref{intertwine}: \StraitXlocal. \UopenVgaloisLambda. Let $\mathcal G$ be a constructible complex of $\Lambda$-modules on $U_{\text{\'et}}$
such that each $\mathcal H^q(\mathcal G|_V)$ is constant. We define a virtual representation $M$ of $G$ by $ M=\sum_q(-1)^q\Gamma(V,\mathcal H^q(\mathcal G)) $. Then, for every $\sigma\in P_K$, we have an equality \begin{eqnarray}\label{int trd eq} \mathop{\mathrm{Tr}^{\mathrm{Br}}_\Q}\nolimits(\sigma,(R\psi j_!\mathcal G)_x)
=\frac{1}{|G|}\sum_{g\in G} \mathop{\mathrm{Tr}}\nolimits((g,\sigma),(R\psi(j_!h_{*}\qq_\ell))_x) \cdot \mathop{\mathrm{Tr}^{\mathrm{Br}}_\Q}\nolimits(g,M). \end{eqnarray} \end{lem}
\begin{proof} Since $\mathop{\mathrm{Tr}}\nolimits((g,\sigma),(R\psi(j_!h_{*}\qq_\ell))_x)$ is an integer by Proposition \ref{general formulation}, the assertion follows from Lemma \ref{intertwine}. \end{proof}
\begin{thm} \label{rpsi have swr} \StraitXlocal. Let $\mathcal F$ and $\mathcal F'$ be constructible complexes of $\Lambda$-modules and $\Lambda'$-modules respectively on $X_\eta$, for finite fields $\Lambda$ and $\Lambda'$ of characteristics invertible on $S$. We assume that $\mathcal F$ and $\mathcal F'$ have the same wild ramification over $X$. Then the stalks $(R\psi\mathcal F)_x$ and $(R\psi\mathcal F')_x$ of the nearby cycle complexes have the same wild ramification, that is, for every $\sigma\in P_K$, we have \begin{eqnarray*} \mathop{\mathrm{Tr}^{\mathrm{Br}}_\Q}\nolimits(\sigma,(R\psi\mathcal F)_x) = \mathop{\mathrm{Tr}^{\mathrm{Br}}_\Q}\nolimits(\sigma,(R\psi\mathcal F')_x). \end{eqnarray*} \end{thm}
\begin{proof}
By devissage using induction on $\dim X_\eta$, we may assume that $\mathcal F$ and $\mathcal F'$ are of the form $\mathcal F\simeq j_!\mathcal G$ and $\mathcal F'\simeq j_!\mathcal G'$ for some dense open immersion $j:U\to X_\eta$ with $U$ being normal and connected and for some constructible complexes $\mathcal G$ and $\mathcal G'$ on $U$ such that $\mathcal H^q(\mathcal G)$ and $\mathcal H^q(\mathcal G')$ are locally constant for every $q$.
Then the intertwining formula (\ref{int trd eq}) holds for $\mathcal G$ (resp.\ $\mathcal G'$). In the following we use the notation in Lemma \ref{int trd}. Since $\mathcal G$ and $\mathcal G'$ have the same wild ramification by assumption,
we can find a normal compactification $X'$ of $U$ over $X$ such that for every $g\in G$ wildly ramified on $X'$, we have $ \mathop{\mathrm{Tr}^{\mathrm{Br}}_\Q}\nolimits(g,M) = \mathop{\mathrm{Tr}^{\mathrm{Br}}_\Q}\nolimits(g,M'). $
Here, if $\mathop{\mathrm{Tr}}\nolimits((g,\sigma),(R\psi(j_!h_{*}\qq_\ell))_x)$ is nonzero, then $g$ is wildly ramified on $X'$
by Corollary \ref{str hens}. Thus, the assertion follows from the intertwining formulas for $\mathcal G$ and $\mathcal G'$.
\end{proof}
We can consider the following variant of Theorem \ref{rpsi have swr}: \begin{conj}\label{psi sc}
Under the notation in Theorem \ref{rpsi have swr}, we assume that the residue field of $S$ is algebraically closed and that $\mathcal F$ and $\mathcal F'$ have universally the same conductors over $X$. Then the stalks $(R\psi\mathcal F)_x$ and $(R\psi\mathcal F')_x$ of the nearby cycle complexes have the same wild ramification in the sense of Theorem \ref{rpsi have swr}.
\end{conj}
\begin{cor}\label{psi sc curve} Conjecture \ref{psi sc} holds if one of the following is satisfied; \begin{enumerate} \item $\dim X\le2$, \item $S$ is the strict localization of a smooth curve over an algebraically closed field at a closed point. \end{enumerate} \end{cor} \begin{proof} The case where the condition 1 (resp. 2) is satisfied follows from Theorem \ref{swr sc} (resp.\ Theorem \ref{swr sc var}) and Theorem \ref{rpsi have swr}. \end{proof}
\begin{rem} \label{unreasonable} One may think that it is more natural to define ``having the same wild ramification'' without allowing modifications (blowups) of $X$, that is, in the notations of the proof of Theorem \ref{rpsi have swr},
to require that $\mathcal G$ and $\mathcal G'$ satisfy the following property; for every $g\in G$ which is wildly ramified on $X$, we have $\mathop{\mathrm{Tr}^{\mathrm{Br}}_\Q}\nolimits(g,M)=\mathop{\mathrm{Tr}^{\mathrm{Br}}_\Q}\nolimits(g,M')$. But, this definition is unreasonably strong. Firstly, this naive definition is stronger than our definition, because, for a compactification $X'$ of $U$ over $X$, if $g\in G$ is ramified on $X'$, then it is ramified on $X$, but the converse does not necessarily hold. Further, there exists two sheaves which should obviously have the same wild ramification, but do not in the naive sense (see Section \ref{ex}). \end{rem}
\section{Wild ramification and characteristic cycle} \label{cor} We briefly recall the definition of the characteristic cycle of a constructible \'etale sheaf due to Saito.
Let $X$ be a smooth variety over a perfect field $k$ of characteristic $p$ and $\mathcal F$ be a constructible complex of $\Lambda$-modules on $X_{\text{\'et}}$ for a finite field $\Lambda$ of characteristic distinct from $p$. Beilinson defined a closed conical subset $\SS(\mathcal F)$ of $T^*X$, called the {\it singular support} of $\mathcal F$, and proved that if $X$ is pure of dimension $n$, then so is $\SS(\mathcal F)$ (\cite{B}). Here, for a subset $C$ of a vector bundle $V$ on a scheme, we say $C$ is {\it conical} if it is stable under the action of the multiplicative group $\mathbb G_m$ on $V$.
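For example, the zero section $T^*_XX\subset T^*X$ is a closed conical subset, and so is, for a smooth closed subscheme $Z\subset X$, the conormal bundle $T^*_ZX\subset T^*X$; conormal bundles of this kind appear in the description of the characteristic cycles of tame sheaves in Corollary \ref{cc of tame} below.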
Let $f:X\to Y$ be a morphism of schemes with $Y$ a regular noetherian scheme of dimension $1$ and $x$ a geometric point of $X$ with image $y$ in $Y$ lying over a closed point of $Y$. Let $\mathcal F$ be a complex on $X_{\text{\'et}}$. We denote by $R\psi_x(\mathcal F,f)$ (resp.\ $R\phi_x(\mathcal F,f)$) the stalk $(R\psi\mathcal F)_x$ (resp.\ $(R\phi\mathcal F)_x$) of the nearby cycle complex $R\psi\mathcal F$ (resp.\ vanishing cycle complex $R\phi\mathcal F$) with respect to the morphism $X\times_YY_{(y)}\to Y_{(y)}$.
The following theorem, with $\mathbb Z$-linear replaced by $\mathbb Z[1/p]$-linear, is proved by Saito in \cite[Theorem 5.9]{S}, and the integrality of the coefficients is proved by Beilinson \cite[Theorem 5.18]{S}. \begin{thm}[{\cite[Theorem 5.9, 5.18]{S}}]\label{cc} Let $X$ be a smooth variety pure of dimension $n$ over a perfect field $k$. Let $\Lambda$ be a finite field of characteristic invertible on $k$ and $\mathcal F$ be a constructible complex of $\Lambda$-modules on $X_{\text{\'et}}$. We write $\SS(\mathcal F)$ as the union of its irreducible components; $\SS(\mathcal F)=\bigcup_aC_a$. We take a closed conical subset $C$ of $T^*X$ which is pure of dimension $n$ and contains $\SS(\mathcal F)$. Then there exists a unique $\mathbb Z$-linear combination $A=\sum_am_aC_a$ satisfying the following property: Let $j:W\to X$ be an \'etale morphism, $f:W\to Y$ be a morphism to a smooth curve $Y$, and $u\in W$ be an at most isolated $C$-characteristic point of $f$. Then we have an equality \begin{eqnarray*} -\mathop{\dim\mathrm{tot}}\nolimits_yR\phi_u(j^*\mathcal F,f) = (A,df)_{T^*W,u}, \end{eqnarray*} where the right hand side is the intersection multiplicity at the point over $u$. \end{thm} See \cite[Definition 5.3]{S} for the definitions of at most isolated $C$-characteristic points and the intersection multiplicity. We call the linear combination $A$ in the theorem the {\it characteristic cycle} of $\mathcal F$ and denote it by $\mathop{CC}\nolimits(\mathcal F)$.
\begin{thm}[{c.f.\ \cite[Theorem 0.1]{SY}}] \label{swr implies same cc} Let $X$ be a smooth variety over a perfect field $k$. Let $\mathcal F$ and $\mathcal F'$ be constructible complexes of $\Lambda$-modules and $\Lambda'$-modules respectively on $X$, for finite fields $\Lambda$ and $\Lambda'$ of characteristics invertible in $k$. We take a geometric point $x$ of $X$. We assume
that $\mathcal F|_{X_{(x)}}$ and $\mathcal F'|_{X_{(x)}}$ have the same wild ramification over $X_{(x)}$. Then, there exists an open neighborhood $U$ of the underlying point of $x$
such that the characteristic cycles of $\mathcal F|_U$ and $\mathcal F'|_U$ are the same:
$CC(\mathcal F|_U)=CC(\mathcal F'|_U)$. \end{thm}
\begin{proof} We may assume $X$ is equidimensional and $k$ is algebraically closed. Let $n=\dim X$. Note that, by a limit argument, the assumption implies that there exists an \'etale neighborhood $X'$ of $x$ such that
$\mathcal F|_{X'}$ and $\mathcal F'|_{X'}$ have the same wild ramification over $X'$. Since the problem is \'etale local, it suffices to show, under the assumption that $\mathcal F$ and $\mathcal F'$ have the same wild ramification over $X$, that the characteristic cycles of $\mathcal F$ and $\mathcal F'$ are the same.
We take a closed conical subset $C$ of the cotangent bundle $T^*X$ which is purely of dimension $n$ and contains $\SS(\mathcal F)\cup\SS(\mathcal F')$.
By the definition of the characteristic cycle it suffices to show that for every \'etale morphism $j:W\to X$, every morphism $f:W\to Y$ to a smooth curve $Y$, and every at most isolated $C$-characteristic point $u\in W$ of $f$, we have \begin{eqnarray*} \mathop{\mathrm{dimtot}}\nolimits R\phi_u(j^*\mathcal F,f) = \mathop{\mathrm{dimtot}}\nolimits R\phi_u(j^*\mathcal F',f). \end{eqnarray*} We have a distinguished triangle \begin{eqnarray*} (j^*\mathcal F)_{u}\to R\psi_u(j^*\mathcal F,f)\to R\phi_u(j^*\mathcal F,f)\to \end{eqnarray*} and a similar one for $\mathcal F'$. We note that $(j^*\mathcal F)_u$ and $(j^*\mathcal F')_u$ have the same rank by assumption, and the inertia group acts trivially on them. Thus, by Theorem \ref{rpsi have swr}, $R\phi_u(j^*\mathcal F,f)$ and $R\phi_u(j^*\mathcal F',f)$ have the same wild ramification, and hence we obtain the above equality of the total dimensions. \end{proof} Theorem \ref{swr implies same cc} immediately implies the following description of the characteristic cycles of tame sheaves: \begin{cor}\label{cc of tame} Let $X$ be a smooth variety over a perfect field $k$ and $D$ be a divisor with simple normal crossings. Let $\mathcal F$ be a locally constant constructible complex of $\Lambda$-modules on the complement $U=X\setminus D$. We assume that $\mathcal F$ is tamely ramified on $D$. We write $D$ as the union of irreducible components: $D=\bigcup_{i=1}^nD_i$. Then, we have \begin{eqnarray*} CC\mathcal F=\mathop{\mathrm{rk}}\nolimits\mathcal F\cdot\sum_{I\subset\{1,\ldots,n\}}T^*_{D_I}X, \end{eqnarray*} where $D_I$ is the intersection $\bigcap_{i\in I}D_i$ in $X$ and $T^*_{D_I}X$ is the conormal bundle of $D_I$ in $X$. \end{cor}
\begin{rem}\label{pf of cc of tame} Corollary \ref{cc of tame} has already been proved by Saito and Yang in different ways. Saito proved it in \cite[Theorem 7.14]{S} using the explicit description \cite[Proposition 6]{S1} of vanishing cycles. Yang proved the equicharacteristic case of the logarithmic version of the Milnor formula (\cite[Corollary 4.2]{Yan}), which is equivalent to Corollary \ref{cc of tame}. He proved it by deforming the variety so that one can use a global result of Vidal \cite[Corollaire 3.4]{V}. However, we can avoid the deformation argument by using our local result, Theorem \ref{rpsi have swr}, instead of Vidal's result. This gives a simpler proof of the logarithmic version of the Milnor formula in the equicharacteristic case. \end{rem}
We give a variant, due to Takeshi Saito, concerning restrictions to curves: \begin{cor}[{Saito, c.f.\ \cite[Corollary 4.7]{K}}] \label{curve} Let $X$ be a smooth variety over a perfect field $k$. Let $\mathcal F$ and $\mathcal F'$ be constructible complexes of $\Lambda$-modules and $\Lambda'$-modules respectively on $X$, for finite fields $\Lambda$ and $\Lambda'$ of characteristics invertible in $k$. We take a geometric point $x$ over a closed point $x_0$ of $X$. We assume that
for every morphism $g:C\to X$ from a smooth curve $C$ and for every geometric point $v$ of $C$ lying above $x$, we have an equality of the Artin conductors; $a_v(g^*\mathcal F)=a_v(g^*\mathcal F')$. Then, there exists an open neighborhood $U$ of $x_0$ such that
$CC(\mathcal F|_U)=CC(\mathcal F'|_U)$. \end{cor} \begin{proof} This follows from Theorem \ref{swr implies same cc} and Theorem \ref{swr sc var}. \end{proof}
\section{Another proof of Corollary \ref{curve} after T.\ Saito}\label{another pf} We give a proof of Corollary \ref{curve} which the author learned from Takeshi Saito before the author obtained Theorem \ref{swr sc var}. His proof relies on Theorem \ref{rpsi have swr} and Theorem \ref{swr sc}, but not on Theorem \ref{swr sc var}.
He established Conjecture \ref{psi sc} in the case of an isolated characteristic point with respect to the singular supports of the given sheaves (Proposition \ref{iso sing}), which is sufficient to prove Corollary \ref{curve}. This special case of Conjecture \ref{psi sc} is reduced to the case of surfaces using the theory of nearby cycle complexes over a general base.
In Subsection \ref{subsec nearby}, we recall the theory of nearby cycle complexes over a general base. After a preliminary subsection on ``transversality'' (Subsection \ref{subsec tr}), we prove Corollary \ref{curve} in Subsection \ref{subsec pf}.
\subsection{Nearby cycle over a general base}\label{subsec nearby} We recall the theory of nearby cycle complexes over a general base, for which we mainly refer to \cite[Section 1]{thom}.
Let $f:X\to Y$ be a morphism of schemes. For the definition of the topos $X\overleftarrow\times_YY$, called the vanishing topos, and the natural morphism $\Psi_f:X\to X\overleftarrow\times_YY$ of topoi, see \cite[Section 1]{thom}.
For a topos $\mathcal X$ and a ring $\Lambda$, let $D^+(\mathcal X,\Lambda)$ denote the derived category of the complexes of sheaves of $\Lambda$-modules on $\mathcal X$. The nearby cycle functor $R\Psi_f:D^+(X,\Lambda)\to D^+(X\overleftarrow\times_YY,\Lambda)$ is defined to be the functor induced by $\Psi_f$.
Let $x$ be a geometric point of $X$ and $y$ be the image in $Y$. Then we have a canonical identification
$x\overleftarrow\times_YY\cong Y_{(y)}$
of topoi (\cite[1.11.1]{thom}). This induces a morphism of topoi; \begin{eqnarray}\label{local section} \sigma_x:Y_{(y)}\cong x\overleftarrow\times_YY\to X\overleftarrow\times_YY. \end{eqnarray} We recall the following description of the nearby cycle complex $R\Psi_f\mathcal F$. \begin{lem}[{\cite[1.12.6]{thom}}] \label{stalk} Let $f_{(x)}$ denote the morphism $X_{(x)}\to Y_{(y)}$ induced by $f$. For $\mathcal F\in D^+(X,\Lambda)$, we have a canonical isomorphism \begin{eqnarray*}
\sigma_x^*(R\Psi_f\mathcal F)\cong Rf_{(x)*}(\mathcal F|_{X_{(x)}}). \end{eqnarray*}
\end{lem}
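For example, if $Y$ is a strictly henselian trait and $x$ lies above its closed point, then $Y_{(y)}=Y$ and the stalks of $Rf_{(x)*}(\mathcal F|_{X_{(x)}})$ at a geometric generic point of $Y$ and at the closed point are canonically isomorphic to the stalk $(R\psi\mathcal F)_x$ of the classical nearby cycle complex and to $\mathcal F_x$ respectively; these isomorphisms appear again in the proof of Proposition \ref{iso sing}.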
Let \begin{eqnarray}\label{car} \begin{xy}\xymatrix{X'\ar[d]_{f'}\ar[r]^{g'}&X\ar[d]^f\\Y'\ar[r]_g&Y}\end{xy}\end{eqnarray} be a commutative diagram. We have a natural morphism \begin{eqnarray*} g'\overleftarrow\times_gg:X'\overleftarrow\times_{Y'}Y'\to X\overleftarrow\times_YY \end{eqnarray*} of topoi (\cite[1.4]{thom}) and, for a bounded below complex $\mathcal F$ on $X_{\text{\'et}}$, the base change morphism \begin{eqnarray}\label{bc} (g'\overleftarrow\times_gg)^*R\Psi_f\mathcal F\to R\Psi_{f'}({g'}^*\mathcal F). \end{eqnarray}
\begin{defn} Let $f:X\to Y$ be a morphism of schemes, $\Lambda$ be a ring, and $\mathcal F\in D^+(X,\Lambda)$. We say
{\it the formation of $R\Psi_f\mathcal F$ commutes with base change} if for any Cartesian diagram (\ref{car}), the base change morphism (\ref{bc}) is an isomorphism. \end{defn}
\begin{rem}\label{const} We give a remark on constructibility of the nearby cycle complex $R\Psi_f\mathcal F$. For the definition of constructibility for a sheaf on the vanishing topos $X\overleftarrow\times_YY$, see \cite[1.6]{thom} or \cite[8.1]{Or}. Let us just mention that, if we have a constructible sheaf $\mathcal K$ on $X\overleftarrow\times_YY$, then for every geometric point $x$ of $X$ the sheaf $\sigma_x^*\mathcal K$ on $Y_{(y)}$ is constructible in the usual sense.
Assume that $X$ and $Y$ are noetherian schemes and that $f$ is a morphism of finite type. Let $\mathcal F$ be a constructible complex of $\Lambda$-modules on $X_{\text{\'et}}$, for a finite field $\Lambda$ of characteristic invertible on $Y$. We note that the nearby cycle complex $R\Psi_f\mathcal F$, and even $\sigma_x^*R\Psi_f\mathcal F$, may not be constructible (see \cite[Section 11]{Or} for such an example).
But if the formation of $R\Psi_f\mathcal F$ commutes with base change, then $R\Psi_f\mathcal F$ is constructible (\cite[8.1 and 10.5]{Or}), and in particular, for every geometric point $x$ of $X$, the pullback $\sigma_x^*R\Psi_f\mathcal F$ is a constructible complex on $Y_{(y)}$.
\end{rem}
\begin{lem}\label{bc stalk} Let \be\begin{xy}\xymatrix{X'\ar[d]_{f'}\ar[r]^{g'}&X\ar[d]^f\\Y'\ar[r]_g&Y}\end{xy}\ee be a Cartesian diagram of schemes with $f$ being of finite type. Let $\mathcal F\in D^+(X,\Lambda)$ be a complex, $x$ be a geometric point of $X$, $x'$ be a geometric point of $X'$ lying above $x$, and $y'$ be the image of $x'$ in $Y'$. We assume that the formation of $R\Psi_f\mathcal F$ commutes with base change. Then the canonical morphism \begin{eqnarray*} g_{(y')}^*Rf_{(x)*}\mathcal F\to Rf'_{(x')*}{g'}^*\mathcal F \end{eqnarray*} is an isomorphism. \end{lem} \begin{proof} By the naturality (\cite[1.12]{thom}) of the isomorphism in Lemma \ref{stalk}, we obtain a commutative diagram \be\begin{xy}\xymatrix{ g_{(y')}^*Rf_{(x)*}\mathcal F\ar[rr]^\alpha\ar[d]_{\cong}&&Rf'_{(x')*}{g'}^*\mathcal F\ar[d]^{\cong}\\ g_{(y')}^*\sigma_x^*R\Psi_f\mathcal F\ar@{=}[r]& \sigma_{x'}^*(g'\overleftarrow\times_gg)^*R\Psi_f\mathcal F\ar[r]^\beta& \sigma_{x'}^*R\Psi_{f'}({g'}^*\mathcal F),}\end{xy}\ee where $\alpha$ is the morphism in the assertion and $\beta$ is the base change map as in (\ref{bc}), which is an isomorphism by the assumption that the formation of $R\Psi_f\mathcal F$ commutes with base change. Hence $\alpha$ is an isomorphism.
\end{proof}
\begin{prop}[{\cite[Proposition 6.1]{Or}}]\label{orgogozo} Let $X$ and $Y$ be noetherian schemes, $f:X\to Y$ be a morphism of finite type, and $\mathcal F$ be a constructible complex of $\Lambda$-modules on $X$, for a finite field $\Lambda$ of characteristic invertible on $Y$. We assume that there exists a closed subscheme $Z\subset X$ quasi-finite over $Y$
such that the restriction $f|_{X\setminus Z}:X\setminus Z\to Y$ of $f$ is universally locally acyclic relatively to $\mathcal F$. Then, the formation of $R\Psi_f\mathcal F$ commutes with base change. \end{prop}
\subsection{Complements on transversality}\label{subsec tr} As mentioned in the beginning of this section, a special case of Conjecture \ref{psi sc} will be proved in Proposition \ref{iso sing} in the next subsection. More concretely, the conjecture will be proved in the case where the morphism $f:X\to S$ comes from a function on a smooth variety which is ``transversal'' outside isolated points. This assumption on $f$ will be used to take a morphism to a surface which is good with respect to the formation of the nearby cycle complex. In this subsection we show the existence of such a morphism (Lemma \ref{decomp}).
We begin with recalling the definition of transversality. \begin{defn}[{\cite[1.2]{B}}] \label{tr} Let $f:X\to Y$ be a morphism of smooth schemes over a field $k$ and $C$ be a closed conical subset of the cotangent bundle $T^*X$ of $X$. \begin{enumerate} \item Let $x$ be a point of $X$ with image $y\in Y$. We say $f$ is {\it $C$-transversal} at $x$ if we have $df_x^{-1}(C\times_Xx)\subset \{0\}$, where $df_x$ denotes the $k(x)$-linear map $T^*_yY\otimes_{k(y)}k(x)\to T^*_xX$ defined by pullback of differential forms.
We can consider the subset of $X$ consisting of points at which $f$ is not $C$-transversal. It is a closed subset of $X$ (\cite[1.2]{B}), which we call the $C$-{\it characteristic locus }of $f$. We often regard it as a reduced closed subscheme of $X$. \item We say $f$ is {\it $C$-transversal} if $f$ is $C$-transversal at every point of $X$. \end{enumerate} \end{defn}
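For example, if $C=T^*_XX$ is the zero section, then $C\times_Xx=\{0\}$, so that $f$ is $C$-transversal at $x$ if and only if
\begin{eqnarray*}
\ker\bigl(df_x:T^*_yY\otimes_{k(y)}k(x)\to T^*_xX\bigr)=0,
\end{eqnarray*}
that is, if and only if $f$ is smooth at $x$.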
\begin{defn}[{\cite[1.2]{B}}] Let $h:W\to X$ be a morphism of smooth schemes over a perfect field $k$ and $C$ be a closed conical subset of the cotangent bundle $T^*X$ of $X$. \begin{enumerate} \item We denote by $h^*C$ the pullback $W\times_XC\subset W\times_XT^*X$ and by $K$ the kernel of the morphism $dh:W\times_XT^*X\to T^*W$ of vector bundles defined by pullback of differential forms. Let $x$ be a point of $X$. We say $h$ is {\it $C$-transversal }at $x$ if we have $(h^*C\cap K)\times_Xx\subset \{0\}$. \item We say $h$ is {\it $C$-transversal} if $h$ is $C$-transversal at every point of $X$. \item We assume that $h$ is $C$-transversal. We define a conical subset $h^\circ C\subset T^*W$ to be the image of $h^*C$ by the morphism $dh:W\times_XT^*X\to T^*W$, which is a closed subset of $T^*W$ by \cite[1.2]{B}. \end{enumerate} \end{defn}
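For example, if $h$ is smooth, then $dh:W\times_XT^*X\to T^*W$ is injective, so that $K=0$ and $h$ is $C$-transversal for every closed conical subset $C$ of $T^*X$. On the other hand, for the zero section $C=T^*_XX$, every morphism $h$ is $C$-transversal and $h^\circ C=T^*_WW$.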
\begin{lem}[{\cite[Lemma 3.9.2]{S}}]\label{bc tr} Let \be\begin{xy}\xymatrix{X\ar[d]_f&W\ar[d]^g\ar[l]_h\\ Y&Z\ar[l] }\end{xy}\ee be a Cartesian diagram of smooth schemes with $f$ smooth. Let $C\subset T^*X$ be a closed conical subset. Assume that $f$ is $C$-transversal. Then $h$ is $C$-transversal and $g$ is $h^\circ C$-transversal. \end{lem}
We have the following fiberwise criterion of $C$-transversality. \begin{lem}\label{fiberwise} Let $f:X\to Y$ and $g:Y\to Z$ be smooth morphisms of smooth schemes over a field $k$ and $C$ be a conical closed subset of $T^*X$. Let $x$ be a closed point of $X$ with image $z\in Z$. We assume that $g\circ f$ is $C$-transversal at $x$. Then the following hold. \begin{enumerate} \item
The closed immersion $i:X_z\to X$ is $C$-transversal at $x$. \item The following are equivalent; \begin{enumerate} \item $f$ is $C$-transversal at $x$, \item the base change $f_z:X_z\to Y_z$ of $f$ by the closed immersion $z\to Z$ is $i^\circ C$-transversal at $x$.
\end{enumerate} \end{enumerate} \end{lem}
\begin{proof} The assertion 1 and the implication (a)$\Rightarrow$(b) in the assertion 2 follow from Lemma \ref{bc tr}. The converse (b)$\Rightarrow$(a) follows from a diagram chasing on the commutative diagram of $k(x)$-vector spaces \be\begin{xy}\xymatrix{ 0\ar[r]&T^*_zZ\otimes_{k(z)}k(x)\ar[r]\ar@{=}[d]&T^*_xX\ar[r]&T^*_xX_z\ar[r]&0\\ 0\ar[r]&T^*_zZ\otimes_{k(z)}k(x)\ar[r]&T^*_yY\otimes_{k(y)}k(x)\ar[u]\ar[r]&T^*_yY_z\otimes_{k(y)}k(x)\ar[u]\ar[r]&0}\end{xy}\ee with horizontal sequences being exact, where $y$ denotes the image of $x$ in $Y$.
\end{proof}
The key step in the proof of Lemma \ref{decomp} is to find a good function on a fiber. It is achieved by taking the morphism defined by a sufficiently general Lefschetz pencil. We recall the definition and a lemma on $C$-transversality of morphisms defined by Lefschetz pencils (Lemma \ref{Legendre}).
Let $k$ be a field and $\mathbb P=\mathbb P(E^\vee)=\mathop{\mathrm{Proj}}\nolimits_kS^\bullet E$ be the projective space for a finite-dimensional $k$-vector space $E$. Let $\mathbb P^\vee=\mathbb P(E)$ be the dual projective space, which parameterizes hyperplanes in $\mathbb P$.
Let $Q\subset \mathbb P\times_k\mathbb P^\vee$ be the universal family of hyperplanes in $\mathbb P$,
which parameterizes pairs $(x,H)$ of a point $x$ of $\mathbb P$ and a hyperplane $H$ in $\mathbb P$ with $x\in H$.
Recall that we have canonical identifications $\mathbb P(T^*\mathbb P)\cong Q\cong\mathbb P(T^*\mathbb P^\vee)$ which are compatible with the projection to $\mathbb P$ and the projection to $\mathbb P^\vee$ respectively. These identifications are called the Legendre transform identifications \cite[1.5]{B}.
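Concretely, a point of $Q$ lying over $x\in\mathbb P$ is a hyperplane $H\subset\mathbb P$ containing $x$, and under the identification $Q\cong\mathbb P(T^*\mathbb P)$ it corresponds to the point of $\mathbb P(T^*_x\mathbb P)$ given by the hyperplane $T_xH\subset T_x\mathbb P$, that is, by the line of cotangent vectors at $x$ vanishing on $T_xH$.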
Let $L\subset \mathbb P^\vee$ be a Lefschetz pencil of hyperplanes in $\mathbb P$, that is, a line in $\mathbb P^\vee$. We define a morphism $p_L:\mathbb P_L\to L$ by the following Cartesian diagram \be\begin{xy}\xymatrix{ Q\ar[d]&\mathbb P_L\ar[l]\ar[d]^{p_L}\\ \mathbb P^\vee&L.\ar[l] }\end{xy}\ee This $p_L$ is called the morphism defined by the Lefschetz pencil $L$. For a line $L\subset \mathbb P^\vee$, let $A_L\subset\mathbb P$ denote the axis of $L$, i.e., the intersection $\bigcap_{t\in L}H_t$ of hyperplanes belonging to $L$.
Since the canonical morphism $\mathbb P_L\to \mathbb P$ is an isomorphism over $\mathbb P\setminus A_L$, we can regard $\mathbb P\setminus A_L$ as an open subscheme of $\mathbb P_L$ and we get an induced morphism $p_L^\circ:\mathbb P\setminus A_L\to L$.
We recall that $C$-transversality of morphisms defined by Lefschetz pencils can be characterized using the Legendre transform identification $\mathbb P(T^*\mathbb P)\cong Q$: \begin{lem}[{\cite[Lemma 2.1]{SY}}]\label{Legendre} Let $x\in\mathbb P(k)$ be a rational point. Let $C\subset T^*_x\mathbb P$ be a conical closed subset. We regard $\mathbb P(C)$ as a closed subset of $\mathbb P^\vee$ via the identification $\mathbb P(T^*_x\mathbb P)\cong Q\times_\mathbb P x\subset \mathbb P^\vee$. Then, for a line $L\subset\mathbb P^\vee$ whose axis does not contain $x$, the morphism $p_L^\circ:\mathbb P\setminus A_L\to L$ is $C$-transversal (at $x$) if and only if $\mathbb P(C)\cap L=\emptyset$. \end{lem}
We use the following elementary lemma in the proof of Lemma \ref{function} below. \begin{lem}\label{proj geom} Let $\mathbb P=\mathbb P(E^\vee)$ be the projective space for a finite-dimensional $k$-vector space $E$ and $\mathbb P^\vee=\mathbb P(E)$ its dual. Let $\mathbb G=\mathrm{Gr}(1,\mathbb P^\vee)$ be the Grassmannian variety parameterizing lines in $\mathbb P^\vee=\mathbb P(E)$.
Let $B\subset \mathbb P$ be a subset containing at least two closed points. Then lines $L\subset\mathbb P^\vee$ such that any hyperplane $H\subset\mathbb P$ belonging to $L$ does not contain $B$ form a dense open subset of $\mathbb G$.
\end{lem} \begin{proof}
By replacing $B$ by the smallest linear subspace of $\mathbb P$ containing $B$, we may assume that $B$ is a linear subspace of $\mathbb P$ of dimension $d\ge1$. Note that a line $L\subset \mathbb P^\vee$ satisfies the condition in the assertion if and only if $L$ does not meet the dual subspace $B^\vee\subset\mathbb P^\vee$ of $B$. Since $B^\vee$ is of codimension $d+1\ge2$, the assertion follows.
\end{proof}
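Concretely, the last step of the proof uses the classical fact that the lines in $\mathbb P^\vee$ meeting a fixed linear subspace of codimension $d+1\ge2$ form a closed subvariety of codimension $d\ge1$ in the Grassmannian $\mathbb G$, so that its complement is a dense open subset.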
\begin{lem}\label{function} Let $X$ be a smooth scheme over an algebraically closed field $k$ purely of dimension $d$ and $C$ be a conical closed subset of the cotangent bundle $T^*X$ of dimension $\le d+1$. Then, locally on $X$, there exists a smooth $k$-morphism $g:X\to \mathbb A^1_k$ whose $C$-characteristic locus is quasi-finite over $\mathbb A^1_k$. \end{lem} \begin{proof} Since the problem is local, we may assume that $X$ is affine. We take a closed immersion $X\to \mathbb A^n_k$ and denote the composite $X\to \mathbb A^n_k\subset\mathbb P^n_k$ by $i$. By replacing $X$ by $\mathbb P^n_k$ and $C$ by the closure of the image of $di^{-1}(C)\subset X\times_{\mathbb P}T^*\mathbb P$ by $X\times_{\mathbb P}T^*\mathbb P\to T^*\mathbb P$, we may assume that $X=\mathbb P=\mathbb P(E^\vee)$ for a finite-dimensional $k$-vector space $E$. Let $\mathbb P^\vee$ be the dual projective space and $\mathbb G=\mathrm{Gr}(1,\mathbb P^\vee)$ be the Grassmannian variety parameterizing lines in $\mathbb P^\vee$.
Let $x$ be a closed point of $X=\mathbb P$. Let $U_1$ be the dense open subset of $\mathbb G$ consisting of lines $L\subset\mathbb P^\vee$ with the axis $A_L\subset\mathbb P$ not containing $x$.
We write $C$ as the union of its irreducible components; $C=\bigcup_{a\in A}C_a$. For each $a\in A$, let $B_a\subset \mathbb P$ be the base of the conical closed subset $C_a$, i.e., $B_a=s^{-1}(C_a)$, where $s$ is the zero section $\mathbb P\to T^*\mathbb P$.
Let $A_1\subset A$ be the subset consisting of irreducible components $C_a$ of $C$ such that the fiber $C_a\times_{\mathbb P}x$ is not the whole cotangent space $T^*_x\mathbb P$. Let $C'$ be the union $\bigcup_{a\in A_1}C_a$ and $C'_x\subset T^*_x\mathbb P$ be its fiber $C'\times_{\mathbb P}x$ at $x$. Let $\mathbb P(C'_x)\subset \mathbb P(T^*_x\mathbb P)$ be the projectivization of $C'_x$. We regard $\mathbb P(C'_x)$ as a closed subvariety of $\mathbb P^\vee$ via the identification $\mathbb P(T^*_x\mathbb P)\cong Q\times_\mathbb P x\subset \mathbb P^\vee$. Since it is of codimension $\ge2$ in $\mathbb P^\vee$,
lines $L\subset\mathbb P^\vee$ which do not meet $\mathbb P(C'_x)$ form a dense open subset $U_2\subset \mathbb G$.
Let $A_2\subset A$ be the subset consisting of irreducible components $C_a$ of $C$ such that the fiber $C_a\times_{\mathbb P}x$ is the whole cotangent space $T^*_x\mathbb P$ and that the base $B_a$ is of dimension $1$. Note that if $C_a\times_{\mathbb P}x=T^*_x\mathbb P$, then $\dim B_a\le1$ by the assumption that $\dim C\le d+1$. Applying Lemma \ref{proj geom} to $B_a$ for each $a\in A_2$, we get a dense open subset $U_a\subset \mathbb G$ consisting of lines $L\subset\mathbb P^\vee$ such that any hyperplane $H\subset\mathbb P$ belonging to $L$ does not contain $B_a$. Let $U_3$ be the intersection $\bigcap_{a\in A_2}U_a$.
Let $L$ be a closed point of the intersection $U_1\cap U_2\cap U_3$. We claim that the morphism $p_L^\circ:\mathbb P\setminus A_L\to L$ defined by $L$ produces a function with the desired properties, that is, if we choose a dense open immersion $\mathbb A^1\to L$ with $0\mapsto p_L(x)$, then the base change $(\mathbb P\setminus A_L)\times_L\mathbb A^1\to \mathbb A^1$ satisfies the desired properties on a neighborhood of $x$.
By Lemma \ref{Legendre}, the morphism $p_L$ is $C'=\bigcup_{a\in A_1}C_a$-transversal at $x$. Thus, on a neighborhood of $x$, the morphism $p_L$ is $C$-transversal outside $\bigcup_{a\in A\setminus A_1}B_a$. Recall that the fiber of $B_a\cap\mathbb P_L\to L$ over $H\in L$ is the intersection $B_a\cap H$ and that any hyperplane $H$ belonging to $L$ does not contain $B_a$ for $a\in A_2$. Further, $B_a$ is of dimension $0$ for $a\in A\setminus(A_1\cup A_2)$. Thus, the union $\bigcup_{a\in A\setminus A_1}B_a$ is quasi-finite over $L$, which concludes the proof. \end{proof}
\begin{lem}\label{decomp} Let $f:X\to Y$ be a smooth morphism of smooth schemes over an algebraically closed field $k$ with $X$ and $Y$ equidimensional of dimension $n\ge2$ and $1$ respectively. Let $C$ be a conical closed subset of the cotangent bundle $T^*X$ which is equidimensional of dimension $n$. We assume that the $C$-characteristic locus $Z\subset X$ of $f$ is quasi-finite over $Y$. Then, locally on $X$, there exists a smooth $Y$-morphism $g:X\to \mathbb A^1_Y$ whose $C$-characteristic locus is quasi-finite over $\mathbb A^1_Y$. \end{lem} \begin{proof} Let $x$ be a closed point of $X$ with image $y\in Y$.
Let $i$ denote the closed immersion $X_y\to X$. We define $C_0$ to be the closure of the image $di(i^*C)$ in the cotangent bundle $T^*X_y$ of $X_y$. Note that $C_0$ is of dimension $\le n$ and that $X_y$ is equidimensional of dimension $n-1$.
By applying Lemma \ref{function} to the smooth scheme $X_y$ and the closed conical subset $C_0$, we can find, after replacing $X$ by an open neighborhood of $x$ if needed, a smooth morphism $g_0:X_y\to\mathbb A^1_y$ whose $C_0$-characteristic locus $W_0\subset X_y$ is quasi-finite over $\mathbb A^1_y$.
We take a $Y$-morphism $g:X\to\mathbb A^1_Y$ inducing $g_0$ by base change.
Then $g$ is flat, and hence smooth, on a neighborhood of $X_y$, by a local criterion of flatness, \cite[Corollaire 5.9]{sga1}. Further, by Lemma \ref{fiberwise},
the $C$-characteristic locus $W$ of $g$ satisfies $W\times_Yy= W_0\cup Z_y$.
Since $W_0\cup Z_y$ is quasi-finite over $\mathbb A^1_y$, the closed subset $W$ is quasi-finite over an open neighborhood of $y\in Y$, which concludes the proof.
\end{proof}
\begin{comment} Next, we see that the characteristic cycle is determined by the Milnor formula for functions which are {\it properly }$C$-transversal outside isolated characteristic points (Corollary \ref{str uni}). We begin with an elementary lemma on proper $C$-transversality; \begin{lem}\label{properly curve} Let $f:X\to Y$ be a morphism of smooth schemes over a field $k$ with $X$ and $Y$ equidimensinal of dimension $n\ge1$ and $1$ respectively. Let $C$ be a closed conical subset of the cotangent bundle $T^*X$ which is equidimensinal of dimension $n$. We assume that $f$ is $C$-transversal. Then the following are equivalent; \begin{enumerate} \item $f:X\to Y$ is properly $C$-transversal, \item for any closed point of $y\in Y$ and any irreducible component $C_a$ of $C$, we have $C_a\not\subset T^*X\times_XX_y$. \end{enumerate} \end{lem}
\begin{proof} $1\Rightarrow2$ is clear, because if $C_a\subset T^*X\times_XX_y$ then $C_a\times_XX_y=C_a$ is of dimension $n$. We assume that the condition 1 holds. Then the morphism $C_a\to Y$ is flat if we regard $C_a$ as a reduced closed subscheme of $T^*X$. Thus, the converse $2\Rightarrow1$ also follows. \end{proof}
\begin{lem}\label{universal family} Assume that $X$ is smooth and equidimensinal of dimension $n$. Let $C$ be a closed conical subset of the cotangent bundle $T^*X$ which is equidimensinal of dimension $n$.
We assume that the very ample invertible sheaf $\mathcal L=i^*\mathcal O_{\mathbb P}(1)$ and the vector space $E$ satisfy the conditions (E) and (C) in \cite[Section 2]{SY}. Then there exists a dense open subset $U$ of $\mathbb G$ consisting of lines $L$ such that
\begin{enumerate} \item the restriction $p_L^\circ:X_L^\circ\to L$ of $p_L$ is properly $C$-transversal outside a finite set of closed point of $X_L^\circ$, \item for every irreducible component $C_a$ of $C$, the section $dp_L^\circ:X_L^\circ\to T^*X_L^\circ$ meets $C_a$ \item for every pair of distinct irreducible components $C_a\neq C_b$ of $C$, the section $dp_L^\circ:X_L^\circ\to T^*X_L^\circ$ does not meet $C_a\cap C_b$.
\end{enumerate} \end{lem} \begin{proof} By \cite[Lemma 2.3]{SY}, there exists a dense open subset $U_1\subset \mathbb G$ consisting of lines $L\subset \mathbb P^\vee$ satisfying and the conditions 2, 3, and \begin{description} \item{1'.} the restriction $p_L^\circ:X_L^\circ\to L$ of $p_L^\circ$ is $C$-transversal outside a finite set $F$ of closed point of $X_L^\circ$. \end{description} For each irreducible component $C_a$ of $C$ which is not contained in any fiber $T^*X\times_Xx$ with $x\in X$ a closed point, we apply Lemma \ref{base} to the base $B_a=C_a\cap T^*_XX\subset X$ of to take a dense open subset $U_a\subset \mathbb G$ consisting of lines $L\subset\mathbb P^\vee$ such that any hyperplane $H\subset\mathbb P$ belonging to $L$ does not contain $B_a$. We take $U$ as the intersection $U_1\cap \bigcap_aU_a$. Then, by Lemma \ref{properly curve}, $p_L^\circ$ is properly $C$-transversal outside $F$. \end{proof}
\begin{cor}\label{str uni} Let $X$ be a smooth scheme over an algebraically closed field $k$ which is equidimensional of dimension $n$. Let $\mathcal F$ and $\mathcal F'$ be constructible complexes of $\Lambda$-modules and $\Lambda'$-modules respectively on $X$ for finite field $\Lambda$ and $\Lambda'$ of characteristic invertible in $k$. We take a conical closed subset $C$ equidimensional of dimension $n$ which contains $\SS(\mathcal F)\cup\SS(\mathcal F')$. Assume that for every closed point $x\in X$ and every function $f:U\to\mathbb A^1$ on an open neighborhood $U\subset X$ of $x$ which is {\it properly }$C$-transversal outside $x$, we have an equality \begin{eqnarray*} (\mathop{CC}\nolimits(\mathcal F),df)_{T^*W,u}= (\mathop{CC}\nolimits(\mathcal F'),df)_{T^*W,u}. \end{eqnarray*} Then we have $\mathop{CC}\nolimits(\mathcal F)=\mathop{CC}\nolimits(\mathcal F')$. \end{cor} \begin{proof} Since the problem is local, we may assume that $X$ is affine. We show that there exists a function $f:U\to \mathbb A^1$ on an open subscheme $U$ of $X$ satisfying the following; \begin{enumerate} \item $f$ is properly $C$-transversal outside a finite set of closed point of $X$, \item for every irreducible component $C_a$ of $C$, the section $df$ meets $C_a$ \item for every pair of distinct irreducible components $C_a\neq C_b$ of $C$, the section $df$ does not meet $C_a\cap C_b$.
\end{enumerate}
We take an immersion $i:X\to \mathbb A^n\subset\mathbb P^n$. By replacing $X$ by $\mathbb P^n$ and $C$ by $i_\circ C$, we may assume that $X$ is projective. By \cite[Lemma 3.19]{S}, we can take an embedding $X\to \mathbb P$ satisfying the conditions (E) and (C). Then the assertion follows from Lemma \ref{universal family}. \end{proof}
Although the above lemma is sufficient for our purpose, let us state a following refinement which also generalizes \cite[Lemma 2.3]{SY}.
\begin{lem}[{\cite[Lemma 2.3]{SY}}] Let $X$ be a smooth projective scheme equidimensional of dimension $n$ over an algebraically closed field $k$ with a closed immersion $i:X\to \mathbb P$ into the projective space $\mathbb P=\mathbb P(E^\vee)$ for a vector space $E$ over $k$. Let $C\subset T^*X$ be a closed conical subset equidimensional of dimension $n$. Let $\mathbb G=\mathrm{Gr}(1,\mathbb P^\vee)$ be the Grassmanian variety parameterizing lines in $\mathbb P^\vee$. We assume that the very ample invertible sheaf $\mathcal L=i^*\mathcal O_{\mathbb P}(1)$ and the vector space $E$ satisfy the conditions (E) and (C) in \cite[Section 2]{SY}. Then there exists a dense open subset $U\subset \mathbb G$ consisting of lines $L\subset\mathbb P^\vee$ satisfying the following properties; \begin{enumerate} \item
\end{enumerate} \end{lem}
\begin{proof}
\end{proof} \end{comment}
\subsection{Proof of Corollary \ref{curve}}\label{subsec pf} We prove a special case of Conjecture \ref{psi sc} assuming the case 1 of Corollary \ref{psi sc curve}, but not the case 2.
\begin{lem}\label{orgogozo2} Let $f:X\to S$ be a morphism of smooth schemes over a perfect field $k$. Let $x$ be a geometric point of $X$ lying above a closed point. Let $\mathcal F$ be a constructible complex of $\Lambda$-modules on $X$, for a finite field $\Lambda$ of characteristic invertible in $k$. We assume that the $\SS(\mathcal F)$-characteristic locus (Definition \ref{tr}.1) of $f$ is quasi-finite over $S$. Then \begin{enumerate} \item the formation of $R\Psi_f\mathcal F$ commutes with base change, \item $\sigma_x^*R\Psi_f\mathcal F\cong Rf_{(x)*}\mathcal F$ is a constructible complex on $S_{(s)}$, where $s$ denotes the image of $x$ in $S$. \end{enumerate} \end{lem} \begin{proof} 1. Since $f$ is universally locally acyclic outside the $\SS(\mathcal F)$-characteristic locus by the definition of singular support \cite[1.3]{B}, the assertion follows from Proposition \ref{orgogozo}.
2. Follows from the assertion 1 and Remark \ref{const}.
\end{proof} The assertion 2 of Lemma \ref{orgogozo2} enables us to state the assertion 1 in the following proposition.
\begin{prop}[Saito]\label{iso sing} Let $f:X\to S$ be a morphism of smooth schemes over a perfect field $k$. Let $x$ be a geometric point of $X$ lying above a closed point. Let $\mathcal F$ and $\mathcal F'$ be constructible complexes of $\Lambda$-modules and $\Lambda'$-modules respectively on $X$, for finite fields $\Lambda$ and $\Lambda'$ of characteristics invertible in $k$. We assume that the $\SS(\mathcal F)\cup\SS(\mathcal F')$-characteristic locus (Definition \ref{tr}.1) of $f$ is quasi-finite over $S$
and that $\mathcal F|_{X_{(x)}}$ and $\mathcal F'|_{X_{(x)}}$ have universally the same conductors over $X_{(x)}$. Then \begin{enumerate} \item $Rf_{(x)*}\mathcal F$ and $Rf_{(x)*}\mathcal F'$ have universally the same conductors over $S_{(s)}$, where $s$ is the image of $x$ in $S$, \item when $S$ is a smooth curve, the stalks $R\psi_x(\mathcal F,f)$ and $R\psi_x(\mathcal F',f)$ have the same wild ramification in the sense in Theorem \ref{rpsi have swr}. \end{enumerate}
\end{prop}
\begin{proof}
Let us first note that, when $S$ is a smooth curve, $R\psi_x(\mathcal F,f)$ and $R\psi_x(\mathcal F',f)$ have the same wild ramification if and only if $Rf_{(x)*}\mathcal F$ and $Rf_{(x)*}\mathcal F'$ do. In fact, we have canonical isomorphisms \begin{eqnarray*} (Rf_{(x)*}\mathcal F)_{\bar\eta}&\cong& R\psi_x(\mathcal F,f),\\ (Rf_{(x)*}\mathcal F)_{s}&\cong& \mathcal F_x, \end{eqnarray*} where $\bar\eta$ is a generic geometric point of $S_{(s)}$. In particular, the assertion 2 follows from the assertion 1. In the following we show the assertion 1.
We may assume that $k$ is algebraically closed. By replacing $f:X\to S$ by the second projection $X\times_kS\to S$ and $\mathcal F$ and $\mathcal F'$ by the direct images $\Gamma_{f*}\mathcal F$ and $\Gamma_{f*}\mathcal F'$ by the graph $\Gamma_f:X\to X\times_kS$ of $f$, we may assume that $f$ is smooth. Since the assumption implies, by Lemma \ref{orgogozo2}, that formations of $R\Psi_f\mathcal F$ and $R\Psi_f\mathcal F'$ commute with base change, we can use Lemma \ref{bc stalk} to reduce the problem to the case where $S$ is a smooth curve.
We prove the assertion for a smooth morphism $f$ to a smooth curve by induction on $\dim X$.
The case where $\dim X\le2$ is already proved in Corollary \ref{psi sc curve}.
In general, we take a smooth morphism to a smooth surface; by Lemma \ref{decomp}, we can find, shrinking $X$ if needed, a smooth morphism $g:X\to Y=\mathbb A^1_S$ whose $\SS(\mathcal F)\cup\SS(\mathcal F')$-characteristic locus $W$ is quasi-finite over $Y$. Then, by Lemma \ref{orgogozo2},
the formations of $R\Psi_g\mathcal F$ and $R\Psi_g\mathcal F'$ commute with base change.
We claim that the complexes $Rg_{(x)*}\mathcal F$ and $Rg_{(x)*}\mathcal F'$
have universally the same conductors over $Y_{(y)}$, where $y$ is the image of $x$ in $Y$.
Let $h:C\to Y$ be a morphism from a smooth curve $C$ and $v$ be a geometric point of $C$ lying over $y$. Let $X'$ denote the fiber product $X\times_YC$ and $x'$ be a geometric point of $X'$ lying above $x$ and $v$. We fix notations by the following Cartesian diagram; \begin{eqnarray}\label{cart}\begin{xy}\xymatrix {X'\ar[d]_{g'}\ar[r]^{h'}&X\ar[d]^g\\ C\ar[r]_h&Y}\end{xy}\end{eqnarray}
Then by Lemma \ref{bc stalk}, we have a canonical isomorphism \begin{eqnarray*} h_{(v)}^*Rg_{(x)*}\mathcal F\cong Rg'_{(x')*}{h'}^*\mathcal F. \end{eqnarray*}
Note that we have $\dim X'=\dim X-1$ since $g$ is flat. We also note that Lemma \ref{bc tr} implies that $h'$ is $\SS(\mathcal F)\cup\SS(\mathcal F')$-transversal and that the morphism $g':X'\to C$ is ${h'}^\circ\SS(\mathcal F)\cup{h'}^\circ\SS(\mathcal F')$-transversal outside $W\times_YC$, which is quasi-finite over $C$. Since we have, by the definition of singular support \cite[1.3]{B}, $\SS({h'}^*\mathcal F)\subset {h'}^\circ\SS(\mathcal F)$ and the corresponding inclusion for $\mathcal F'$, the $\SS({h'}^*\mathcal F)\cup\SS({h'}^*\mathcal F')$-characteristic locus of $g'$ is quasi-finite over $C$. Thus, by the induction hypothesis applied to $g'$, ${h'}^*\mathcal F$, and ${h'}^*\mathcal F'$, together with the isomorphism above, $Rg_{(x)*}\mathcal F$ and $Rg_{(x)*}\mathcal F'$ have universally the same conductors over $Y_{(y)}$.
Since $Y_{(y)}$ is of dimension $2$, we can apply Corollary \ref{psi sc curve} to obtain
the assertion (using the remark at the beginning of the proof). \end{proof}
\begin{proof}[Proof of Corollary \ref{curve} (Saito)] The assumption implies
that $\mathcal F|_{X_{(x)}}$ and $\mathcal F'|_{X_{(x)}}$ have universally the same conductors over $X_{(x)}$. Since the problem is \'etale local, we may assume that $\mathcal F$ and $\mathcal F'$ have universally the same conductors over $X$. We take a closed conical subset $C$ of $T^*X$ as in Theorem \ref{cc} which contains $\SS(\mathcal F)\cup\SS(\mathcal F')$. We take an \'etale morphism $j:W\to X$, a morphism $f:W\to Y$ to a smooth curve $Y$, and a point $u\in W$ such that $f$ is $C$-transversal outside $u$. Then, by Proposition \ref{iso sing} and the distinguished triangle argument in the proof of Theorem \ref{swr implies same cc}, we have \begin{eqnarray*} \mathop{\mathrm{dimtot}}\nolimits R\phi_u(j^*\mathcal F,f) = \mathop{\mathrm{dimtot}}\nolimits R\phi_u(j^*\mathcal F',f). \end{eqnarray*} Thus, by the definition of the characteristic cycle, we have $\mathop{CC}\nolimits(\mathcal F)=\mathop{CC}\nolimits(\mathcal F')$.
\end{proof}
\appendix \def\thesection{\Alph{section}} \section{$\ell$-adic systems of complexes with a pro-finite group action} \label{ell adic} We construct in Proposition \ref{inverse limit of complexes} an ``inverse limit'' of an $\ell$-adic system of perfect complexes of continuous $G$-modules, for a pro-finite group $G$ which is admissible relatively to $\zz_\ell$ (for the definition of admissibility, see Definition \ref{alm prime-to-ell}).
In the following, a morphism of complexes means a morphism in the category of complexes (not in the homotopy category).
\begin{prop}[{c.f.\ \cite[Proposition 10.1.15]{Fu} and \cite[\S3.3]{Houzel}}] \label{inverse limit of complexes} Let $G$ be a pro-finite group which is admissible relatively to $\zz_\ell$ and $(K_n)_{n\geq1}$ be an inverse system of complexes of continuous $\zz_\ell[G]$-modules finitely generated over $\zz_\ell$ satisfying the following properties; \begin{itemize} \item for each $n\geq1$, $K_n^i$ is a projective continuous $\zz/\ell^n\zz[G]$-module,
\item the transition morphism $K_{n+1}\to K_n$ gives a quasi-isomorphism $$K_{n+1}\otimes^L_{\mathbb Z/\ell^{n+1}\mathbb Z}\zz/\ell^n\zz\to K_n.$$ \end{itemize} Then, \begin{enumerate} \item there exists a bounded complex $K$ of projective $\zz_\ell[G]$-modules finitely generated over $\zz_\ell$ together with a sequence of quasi-isomorphism $(K\otimes^L_{\zz_\ell}\zz/\ell^n\zz\to K_n)_{n\geq1}$ compatible with transition morphisms. \item For a complex $K$ as above, we have \begin{enumerate} \item a natural isomorphism \begin{align*} \varprojlim_nH^i(K_n)\cong H^i(K), \end{align*} \item for an element $g\in G$ with surnatural order prime-to-$\ell$, an equality \begin{align*} \mathop{\mathrm{Tr}^{\mathrm{Br}}}\nolimits(g,K_1) = \sum_i(-1)^i\mathop{\mathrm{Tr}}\nolimits(g,H^i(K)\otimes_{\zz_\ell}\qq_\ell). \end{align*} \end{enumerate} \end{enumerate} \end{prop}
The key of the proof of Proposition \ref{inverse limit of complexes} is Lemma \ref{lift qis} below, which we deduce from Lemmas \ref{lift acyclic}, \ref{lift homotopic}, and \ref{lift split inj}.
In these Lemmas, we work with an Artinian local ring $A$ with a proper ideal $I$ and a profinite group $G$ which is admissible relatively to $A$. We write $A_0$ for the quotient $A/I$. Similarly, for an $A$-module $M$, we write $M_0$ for the quotient $M/IM$.
\begin{lem}[{c.f.\ \cite[Lemma 10.1.10]{Fu} and \cite[Corollaire in \S3.3]{Houzel}}] \label{lift acyclic} Let $C_0$ be a bounded acyclic complex of projective continuous $A_0[G]$-modules finitely generated over $A_0$. Then there exists a bounded acyclic complex $C$ of projective continuous $A[G]$-modules finitely generated over $A$ with an isomorphism $C\otimes_{A}A_0\cong C_0$. \end{lem}
\begin{proof} Note that the assumption that $A$ is Artinian assures that every projective $A_0[G]$-module finitely generated over $A_0$ admits a lift to a projective $A[G]$-module finitely generated over $A$. In fact, the problem can be reduced to the case where $G$ is finite (Lemma \ref{char of proj}). Then it follows from \cite[14.3, Proposition 41]{Se}.
Arguing by induction on the length of the complex $C_0$, the problem is reduced to showing the following: Let $N$ be a projective $A[G]$-module finitely generated over $A$. Then every short exact sequence $0\to L_0\to M_0\to N_0\to 0$ of projective continuous $A_0[G]$-modules finitely generated over $A_0$ admits a lift, that is, a short exact sequence $0\to L\to M\to N\to 0$ of projective continuous $A[G]$-modules finitely generated over $A$ which lifts $0\to L_0\to M_0\to N_0\to 0$. For this, take a projective lift $M$ of $M_0$, lift $M_0\to N_0$ to a morphism $M\to N$, which is surjective by Nakayama's lemma, and take $L$ as the kernel of the lift $M\to N$; since $N$ is projective, the surjection $M\to N$ splits, so $L$ is a direct summand of $M$, hence projective, and satisfies $L\otimes_AA_0\cong L_0$. \end{proof}
\begin{lem}[{c.f.\ \cite[Lemma 10.1.11]{Fu}}] \label{lift homotopic} Let $\phi:M\to N$ be a morphism of complexes of projective continuous $A[G]$-modules. Let $\psi_0:M_0\to N_0$ be a morphism of complexes which is homotopic to $\phi_0=\phi\otimes\mathrm{id}_{A_0}$. Then, there exists a morphism $\psi:M\to N$ which is homotopic to $\phi$ such that $\psi\otimes\mathrm{id}_{A_0}=\psi_0$. \end{lem} \begin{proof} For a homotopy $k_0:M_0\to N_0[-1]$ between $\phi_0$ and $\psi_0$, that is, a map $k_0$ satisfying $dk_0+k_0d=\phi_0-\psi_0$, pick a lift $k:M\to N[-1]$ of $k_0$ and define $\psi=\phi-dk-kd$. \end{proof}
\begin{lem} \label{lift split inj} For a morphism $\phi:M\to N$ of projective continuous $A[G]$-modules finitely generated over $A$, $\phi$ is a split injection if and only if $\phi_0=\phi\otimes\mathrm{id}_{A_0}$ is. \end{lem} \begin{proof} We write $M_0$ and $N_0$ for $M\otimes_AA_0$ and $N\otimes_AA_0$. We take a splitting $\psi_0:N_0\to M_0$ of $\phi_0$; so $\psi_0\circ\phi_0=\mathrm{id}$. By projectivity, there exists a lift $\psi$ of $\psi_0$. Thus, it suffices to show that lifts $M\to M$ of an automorphism $M_0\to M_0$ are automorphisms. But this follows from Nakayama's lemma (forget the actions of $G$). \end{proof} \begin{lem}[{c.f.\ \cite[Lemma 10.1.12]{Fu} and \cite[Lemme 1 in \S 3.3]{Houzel}}] \label{lift qis}
Let $M$ (resp.\ $N_0$) be a bounded complex of projective continuous $A[G]$-modules (resp.\ projective continuous $A_0[G]$-modules) finitely generated over $A$ (resp.\ $A_0$) and let $\phi_0:M_0=M\otimes_{A}A_0\to N_0$ be a quasi-isomorphism. Then, there exists a bounded complex $N$ of projective continuous $A[G]$-modules finitely generated over $A$ with an isomorphism $N\otimes_{A}A_0\cong N_0$ and a quasi-isomorphism $\phi:M\to N$ such that $\phi\otimes\mathrm{id}_{A_0}=\phi_0$. \end{lem} \begin{proof} First, we reduce the problem to the case where $\phi_0^i:M_0^i\to N_0^i$ is surjective for every $i$. Let $C_0$ be the mapping cone of $\mathrm{id}_{N_0}:N_0\to N_0$. Then, by Lemma \ref{lift acyclic}, there exists a bounded complex $C$ of projective continuous $A[G]$-modules finitely generated over $A$ with an isomorphism $C\otimes^L_{A}A_0\cong C_0$. By replacing $M$ by $M\oplus C$ and $\phi_0:M_0\to N_0$ by the map $M_0\oplus C_0\to N_0$ defined by $M_0^i\oplus C_0^i=M_0^i\oplus N_0^i\oplus N_0^{i+1}\to N_0^i:(x,y,z)\mapsto\phi_0(x)+y$, we may assume that $\phi_0^i:M_0^i\to N_0^i$ is surjective for every $i$.
By taking the kernel $K_0^i$ of the surjection $\phi_0^i:M_0^i\to N_0^i$, we obtain an acyclic complex $K_0$. By Lemma \ref{lift acyclic}, there exists a bounded acyclic complex $K$ of projective continuous $A[G]$-modules finitely generated over $A$ with an isomorphism $K\otimes^L_{A}A_0\cong K_0$. Since the morphism $\iota_0:K_0\to M_0$ is homotopic to zero, we can find, using Lemma \ref{lift homotopic}, a morphism $\iota:K\to M$ lifting $\iota_0$. Further, since $\iota_0^i:K_0^i\to M_0^i$ is a split injection for every $i$, we can use Lemma \ref{lift split inj} to deduce that $\iota^i:K^i\to M^i$ is a split injection for every $i$. Then it suffices to take $N^i$ as the cokernel of $\iota^i$, which is a projective continuous $A[G]$-module and $\phi$ as the natural surjection. \end{proof}
\begin{proof}[Proof of Proposition \ref{inverse limit of complexes}] 1. It suffices to construct a sequence of quasi-isomorphisms $(u_n:K_n\to K'_n)_{n\ge1}$ such that $(K'_n)_{n\ge1}$ is an inverse system of complexes of continuous $\zz_\ell[G]$-modules finitely generated over $\zz_\ell$ satisfying the same properties as $(K_n)_{n\ge1}$ and inducing an {\it isomorphism }$K'_{n+1}\otimes_{\mathbb Z/\ell^{n+1}\mathbb Z}\zz/\ell^n\zz\to K'_n$. In fact, from $(K'_n)_{n\ge1}$ we get a desired complex by taking the inverse limit termwise.
We construct $(u_n:K_n\to K'_n)_{n\ge1}$ inductively as follows. Put ${{K'_1}}=K_1$ and $u_1=\mathrm{id}_{K_1}$. We assume that $n\ge2$ and we have defined a quasi-isomorphism $u_i:K_i\to K'_i$ for $i\le n-1$; \begin{eqnarray*} \begin{xy} \xymatrix{
K_n\ar@{.>}[rr]^{u_n}\ar[d]^{\text{mod }\ell^{n-1}}&&{K'_n}\ar@{.>}[d]^{\text{mod }\ell^{n-1}}\\ K_n/\ell^{n-1}\ar[r]&K_{n-1}\ar[r]^{u_{n-1}}&{K'_{n-1}}.
} \end{xy} \end{eqnarray*}
Applying Lemma \ref{lift qis} to the composite $K_n/\ell^{n-1}\to K_{n-1}\to {K'_{n-1}}$, we find a bounded complex ${K'_n}$ of finite projective continuous $\mathbb Z/\ell^n\mathbb Z[G]$-modules with an isomorphism ${K'_n}/\ell^{n-1}\cong {K'_{n-1}}$ and a quasi-isomorphism $K_{n}\to {K'_n}$ whose reduction mod $\ell^{n-1}$ is the quasi-isomorphism $K_n/\ell^{n-1}\to {K'_{n-1}}$.
2. The isomorphism (a) follows from the fact that the inverse limit functor is exact on finite abelian groups. The equality (b) can be obtained as follows: \begin{align*} \mathop{\mathrm{Tr}^{\mathrm{Br}}}\nolimits(g,K\otimes^L_{\zz_\ell}\F_\ell) &=\sum_i(-1)^i\mathop{\mathrm{Tr}^{\mathrm{Br}}}\nolimits(g,K^i\otimes_{\zz_\ell}\F_\ell)\\ &=\sum_i(-1)^i\mathop{\mathrm{Tr}}\nolimits(g,K^i\otimes_{\zz_\ell}\qq_\ell)\\ &=\sum_i(-1)^i\mathop{\mathrm{Tr}}\nolimits(g,H^i(K)\otimes_{\zz_\ell}\qq_\ell).
\end{align*} Here the first equality is by definition, the second follows from projectivity, and the third follows from the equality of the alternating sum of traces on a bounded complex of $\qq_\ell$-vector spaces with that on its cohomology; moreover, the left hand side coincides with $\mathop{\mathrm{Tr}^{\mathrm{Br}}}\nolimits(g,K_1)$ via the quasi-isomorphism $K\otimes^L_{\zz_\ell}\F_\ell\to K_1$ given by the assertion 1.
\end{proof}
\section{An example of big local monodromy}\label{ex} We give an example of two sheaves which should obviously have the same wild ramification, but do not in the naive sense as in Remark \ref{unreasonable}. To do that, we first give an example showing that inertia groups in the higher-dimensional case can be very big. The author learned the following from Takeshi Saito.
Let $k$ be a field with a separable closure $\bar k$ and $C$ be a dense open subscheme of $\mathbb P_k^1$. We consider the map $\mathrm{pr}:\mathbb P^2\setminus\{(0:0:1)\}\to\mathbb P^1;(x:y:z)\mapsto(x:y)$. We put $U=\mathrm{pr}^{-1}(C)$ and $X=\mathbb P^2$. Let $x$ be the origin $(0:0:1)$ of $\mathbb P^2_{\bar k}$. The following lemma shows that the inertia group $\pi_1(U\times_XX_{(x)},a)$, for a geometric point $a$, is very big. \begin{lem}\label{big} We assume that the geometric point $a$ lies above a geometric generic point $\bar\eta$ of $C_{\bar k}$. Then the natural morphism \begin{eqnarray*} \pi_1(U\times_XX_{(x)},a) \to \pi_1(C_{\bar k},\bar\eta) \end{eqnarray*} induced by the composite $U\times_XX_{(x)} \to U_{\bar k}\to C_{\bar k}$ is surjective. \end{lem} \begin{proof} We may assume that $k$ is separably closed. Let $X'\to X$ be the blowup of $X$ at $x$ and $\xi$ be the generic point of the exceptional divisor of $X'$. Then, we have a natural morphism $X'_{(\xi)}\to X_{(x)}$.
Let $a'$ be a geometric point of $U\times_{X'}X'_{(\xi)}$ lying above $a$. We consider a commutative diagram \begin{eqnarray}\label{blowup} \begin{xy} \xymatrix{ \pi_1(U\times_{X'}X'_{(\xi)},a')\ar[r]\ar[d]^\alpha & \pi_1(U\times_{X}X_{(x)},a)\ar[r]& \pi_1(U,a) \ar[d] \\ \pi_1(X'_{(\xi)},a')\ar[rr]^\beta &
& \pi_1(C,\bar\eta).} \end{xy} \end{eqnarray} Since the composite $\xi\to X'_{(\xi)}\to \eta$, where $\eta$ denotes the generic point of $C$, is an isomorphism, the homomorphism $\pi_1(X'_{(\xi)},a')\to\pi_1(\eta,\bar\eta)$ is an isomorphism. Thus, $\beta$ is surjective. Since $\alpha$ is surjective, we obtain the assertion by diagram chasing. \end{proof}
Let us make precise what the naive sense means. Let $X$ be an excellent noetherian scheme and $U$ a dense open subscheme of $X$.
For an $\mathbb F_\ell$-sheaf $\mathcal F$ and an $\mathbb F_{\ell'}$-sheaf $\mathcal F'$ on $U_{\text{\'et}}$ which are locally constant and constructible, having the same wild ramification over $X$ in the naive sense means that, under the notation in Definition \ref{swr}, for every $g\in G$ wildly ramified on $X$, we have $\dim_\Lambda M^g=\dim_{\Lambda'}(M')^g$. As mentioned in Remark \ref{unreasonable}, having the same wild ramification over $X$ in the naive sense is stronger than having the same wild ramification over $X$. Through an example we will see that it is unreasonably strong.
Assume that the characteristic $p$ of $k$ is different from $2$ and $3$. Let $f:E\to C=\mathbb P_k^1\setminus\{0,1,\infty\}$ be the Legendre family of elliptic curves defined by the equation $y^2=x(x-1)(x-\lambda)$, $\lambda\in\mathbb P^1\setminus\{0,1,\infty\}$.
This family has big monodromy; for example, it has the following property. \begin{thm} \label{legendre}
The monodromy representation $\pi_1(C_{\bar k},\bar\eta)\to SL_2(\mathbb F_\ell)$ associated to $R^1f_*\mathbb F_\ell$ is surjective for any prime number $\ell\ne2,p$.
\end{thm}
We denote the natural map $U\to C$ by $\pi$. We consider the sheaf \begin{eqnarray*} \mathcal F_\ell=\pi^*R^1f_*\mathbb F_\ell, \end{eqnarray*} which is a locally constant and constructible sheaf of rank $2$ on $U$.
We note that $R^1f_*\mathbb F_\ell$ is
tamely ramified along $0,1,\infty$. In particular, $R^1f_*\mathbb F_\ell$ and $R^1f_*\mathbb F_{\ell'}$ for prime numbers $\ell$ and $\ell'$ which are distinct from the characteristic of $k$ have the same wild ramification over $k$. Since having the same wild ramification is preserved by pullback \cite[Lemma 2.4]{K}, $\mathcal F_\ell$ and $\mathcal F_{\ell'}$ have the same wild ramification over $k$.
\begin{cor} \begin{enumerate} \item The monodromy representation $\pi_1(U\times_{X}X_{(\bar x)},a)\to SL_2(\mathbb F_\ell)$ associated to $\mathcal F_\ell$ is surjective for $\ell\ne2,p$. \item We assume that $k$ is of positive characteristic $p$. Then, if $\ell\ne2$ is a prime number such that $\ell\equiv\pm1\mod p$ and $\ell'\ne p$ is a prime number such that $\ell'\not\equiv\pm1\mod p$, then $\mathcal F_{\ell}$ and $\mathcal F_{\ell'}$ do not have the same wild ramification in the naive sense. \end{enumerate} \end{cor} \begin{proof} 1. follows from Lemma \ref{big} and Theorem \ref{legendre}.
2. We note that for any (pointed) Galois \'etale covering of $U\times_XX_{(\bar x)}$ with Galois group $G$, the image of any element $\sigma\in\pi_1(U\times_{X}X_{(\bar x)},a)$ by $\pi_1(U\times_{X}X_{(\bar x)},a)\to G$ is ramified on $X_{(\bar x)}$ in the sense of Definition \ref{ramified}. Then the assertion follows since the order of the finite group $SL_2(\mathbb F_\ell)$ is $\ell(\ell-1)(\ell+1)$.
\end{proof}
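For the last step, note that by Cauchy's theorem the finite group $SL_2(\mathbb F_\ell)$ contains an element of order $p$ if and only if $p$ divides
\begin{eqnarray*}
|SL_2(\mathbb F_\ell)|=\ell(\ell-1)(\ell+1),
\end{eqnarray*}
that is, since $\ell\ne p$, if and only if $\ell\equiv\pm1\bmod p$.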
Further, this construction with $\mathbb F_\ell$ replaced by $\overline\qq_\ell$ gives an example of a sheaf whose Frobenius traces are rational numbers but whose monodromy along boundary has eigenvalues which are not algebraic numbers.
We use the following big monodromy theorem instead of Theorem \ref{legendre}. \begin{thm} \label{legendre ql} The image of the monodromy representation $\pi_1(C_{\bar k},\bar\eta)\to SL_2(\overline\qq_\ell)$ associated to $R^1f_*\overline\qq_\ell$ is open. In particular, there exists $\gamma\in\pi_1(C_{\bar k},\bar\eta)$ such that the eigenvalues of the action of $\gamma$ on $(R^1f_*\overline\qq_\ell)_{\bar\eta}$ are not algebraic numbers. \end{thm}
\begin{cor} There exists an element $\sigma\in\pi_1(U\times_{X}X_{(\bar x)},a)$ such that the eigenvalues of the action of $\sigma$ on $(\pi^*R^1f_*\overline\qq_\ell)_{a}$ are not algebraic numbers. \end{cor} \begin{proof} This follows from Theorem \ref{legendre ql} and Lemma \ref{big}. \end{proof}
In the terminologies of \cite{LZ}, the above construction gives a compatible system which is not compatible along a boundary. We assume $k$ is a finite field. Let $\mathbb L$ be a set of prime numbers distinct from the characteristic of $k$. Then, $(\mathcal F_\ell)_{\ell\in\mathbb L}$ is $\mathbb Q$-compatible in the sense in \cite[Definition 1.14]{Z}. But it is not compatible on $X$ in the sense in \cite[Definition 1.1]{LZ}.
\begin{comment}
\section{Complements} We recall a valuative criterion of having the same wild ramification due to Gabber. \begin{lem}[{\cite[6.1]{Vnodal}}] \label{valuative} Let $S$ be an excellent noetherian scheme and $U$ an $S$-scheme separated of finite type. Let $\Lambda$ and $\Lambda'$ be finite fields whose characteristics are invertible on $S$. Let $\mathcal F$ and $\mathcal F'$ be constructible complexes of $\Lambda$-modules and $\Lambda'$-modules respectively on $U_{\text{\'et}}$. Then the following are equivalent: \begin{enumerate} \item $\mathcal F$ and $\mathcal F'$ have the same wild ramification over $S$. \item We consider diagrams of the form \begin{eqnarray*} \begin{xy} \xymatrix{ \mathop{\mathrm{Spec}}\nolimits K\ar[r]^g\ar[d]^j&X\ar[d]\\ \mathop{\mathrm{Spec}}\nolimits\mathcal O\ar[r]&S, } \end{xy} \end{eqnarray*} where $\mathcal O$ is a strictly henselian valuation ring with function field $K$ and $j$ is the canonical morphism. Then, for every diagram as above and for every pro-$p$ element $\sigma\in\mathrm{Gal}(\overline K/K)$, where $\overline K$ is a separable closure of $K$, we have \begin{eqnarray*} \dim_\Lambda((g^*\mathcal F)_{\overline K})^\sigma=\dim_\Lambda((g^*\mathcal F')_{\overline K})^\sigma. \end{eqnarray*} \end{enumerate} \end{lem}
\begin{lem} Let $X$ be an excellent noetherian scheme and $x$ a geometric point of $X$. Let $\mathcal F$ and $\mathcal F'$ be constructible complexes of $\Lambda$-modules and $\Lambda'$-modules respectively on $X$, for finite fields $\Lambda$ and $\Lambda'$ of characteristics invertible on $X$. Then the following are equivalent: \begin{enumerate}
\item
$\mathcal F|_{X_{(x)}}$ and $\mathcal F'|_{X_{(x)}}$ have the same wild ramification over $X_{(x)}$, \item for every strictly henselian valuation ring $\mathcal O$ with function field $K$, for every morphism $g:\mathop{\mathrm{Spec}}\nolimits\mathcal O\to X_{(x)}$ and for every pro-$p$ element $\sigma\in\mathrm{Gal}(\overline K/K)$, where $\overline K$ is a separable closure of $K$, we have \begin{eqnarray*} \dim_\Lambda((g^*\mathcal F)_{\overline K})^\sigma=\dim_\Lambda((g^*\mathcal F')_{\overline K})^\sigma. \end{eqnarray*} \end{enumerate} \end{lem}
\begin{cor} Let $\overline X$ be a compactification of $X$ over $S$. Then $\mathcal F$ and $\mathcal F'$ have the same wild ramificaiton over $S$ if and only if they have the same wild ramification at every point of $\overline X$. \end{cor}
\begin{defn} \label{sc at} Let $X$ be an excellent noetherian scheme such that the residue field of every closed point is perfect, and $x$ a geometric point over a closed point of $X$. Let $\mathcal F$ and $\mathcal F'$ be constructible complexes of $\Lambda$-modules and $\Lambda'$-modules respectively on $X$, for finite fields $\Lambda$ and $\Lambda'$ of characteristics invertible on $X$. We say $\mathcal F$ and $\mathcal F'$ {\it have universally the same conductors at }$x$ if, for every morphism $g:C\to X$ from a regular $X$-curve $C$ and for every geometric point $v$ of $C$ lying over $x$, we have an equality $a_v(g^*\mathcal F)=a_v(g^*\mathcal F')$, \end{defn} We note that we have an obvious characterization of the notion ``having universally the same conductors'': \begin{lem} \label{obvious char} The following are equivalent. \begin{enumerate} \item $\mathcal F$ and $\mathcal F'$ have universally the same conductors over $S$, \item We consider diagrams of the form \begin{eqnarray*} \begin{xy} \xymatrix{ \eta\ar[r]^g\ar[d]^j&X\ar[d]\\ T\ar[r]&S, } \end{xy} \end{eqnarray*} where $T$ is a strictly henselian trait with generic point $\eta$ and with algebraically closed residue field and $j$ is the canonical open immersion, such that the closed point of $T$ lies over a closed point of $S$. Then, for every diagram as above, we have an equality of the Artin conductors; $a(j_!g^*\mathcal F)=a(j_!g^*\mathcal F')$. \end{enumerate} \end{lem}
\begin{lem}\label{sc at x} The following are equivalent: \begin{enumerate} \item $\mathcal F$ and $\mathcal F'$ have universally the same conductors at $x$, \item
$\mathcal F|_{X_{(x)}}$ and $\mathcal F'|_{X_{(x)}}$ have universally the same conductors over $X_{(x)}$, \item for every strictly henselian discrete valuation ring $\mathcal O$ with function field $K$ and with algebraically closed residue field, every morphism $g:\mathop{\mathrm{Spec}}\nolimits\mathcal O\to X_{(x)}$ which is local, i.e., the closed point of $\mathop{\mathrm{Spec}}\nolimits \mathcal O$ goes to $x$, and every $\sigma\in\mathrm{Gal}(\overline K/K)$, we have an equality of the Artin conductors; $a(g^*\mathcal F)=a(g^*\mathcal F')$. \end{enumerate} \end{lem} \begin{proof} Both conditions are equivalent to the condition 2 in Lemma \ref{obvious char} for $X=X_{(x)}$ over $S=X_{(x)}$. \end{proof}
\begin{rem}[A complement on Definition \ref{swr for lisse}] The system of inertia subgroups is a system with surjective transitions. \end{rem}
\begin{lem} The diagram \be\begin{xy}\xymatrix{ R\psi(j_!\Lambda)\ar[d]\ar[r]& R\psi(j_!h_*\Lambda)\\ R\Gamma^G(R\psi(j_!h_*\Lambda))\ar[ru] } \end{xy}\ee is commutative. \end{lem}
\end{comment}
\end{document} | arXiv |
\begin{definition}[Definition:Convergent Product/Number Field]
Let $\mathbb K$ be one of the standard number fields $\Q, \R, \C$.
\end{definition} | ProofWiki |
\begin{document}
\title{Semi-stable Higgs sheaves and Bogomolov type inequality } \subjclass[]{53C07, 58E15} \keywords{Higgs sheaf, approximate Hermitian-Einstein structure, Bogomolov inequality. } \author{Jiayu Li} \address{School of Mathematical Sciences\\ University of Science and Technology of China\\ Hefei, 230026\\ and AMSS, CAS, Beijing, 100080, P.R. China\\} \email{[email protected]} \author{Chuanjing Zhang} \address{School of Mathematical Sciences\\ University of Science and Technology of China\\ Hefei, 230026,P.R. China\\ } \email{[email protected]} \author{Xi Zhang} \address{School of Mathematical Sciences\\ University of Science and Technology of China\\ Hefei, 230026,P.R. China\\ } \email{[email protected]} \thanks{The authors were supported in part by NSF in China, No. 11571332, 11131007, 11526212.}
\begin{abstract} In this paper, we study semi-stable Higgs sheaves over compact K\"ahler manifolds. We prove that there is an approximate admissible Hermitian-Einstein structure on a semi-stable reflexive Higgs sheaf and, consequently, that the Bogomolov type inequality holds for semi-stable reflexive Higgs sheaves. \end{abstract}
\maketitle
\section{Introduction} \setcounter{equation}{0}
\hspace{0.4cm}
Let $(M, \omega )$ be a compact K\"ahler manifold, and $E$ be a holomorphic vector bundle on $M$. Donaldson-Uhlenbeck-Yau theorem states that the $\omega$-stability of $E$ implies the existence of $\omega$-Hermitian-Einstein metric on $E$. Hitchin \cite{Hi} and Simpson \cite{Si} proved that the theorem holds also for Higgs bundles. We \cite{LZ2} proved that there is an approximate Hermitian-Einstein structure on a semi-stable Higgs bundle, which confirms a conjecture due to Kobayashi \cite{Ko2} (also see \cite{Ja}). There are many interesting and important works related (\cite{LY, Hi, Si, Br1, BS, GP, BG, AG1, Bi, BT, LN1, LN2, M, Mo1, Mo}, etc.). Among all of them, we recall that, Bando and Siu \cite{BS} introduced the notion of admissible Hermitian metrics on torsion-free sheaves, and proved the Donaldson-Uhlenbeck-Yau theorem on stable reflexive sheaves.
Let $\mathcal{E}$ be a torsion-free coherent sheaf, and $\Sigma $ be the set of singularities where $\mathcal{E}$ is not locally free. A Hermitian metric $H$ on the holomorphic bundle $\mathcal{E}|_{M\setminus \Sigma }$ is called {\it admissible } if
(1) $|F_{H}|_{H, \omega}$ is square integrable;
(2) $|\Lambda_{\omega } F_{H}|_{H}$ is uniformly bounded.\\ Here $F_{H}$ is the curvature tensor of Chern connection $D_{H}$ with respect to the Hermitian metric $H$, and $\Lambda_{\omega }$ denotes the contraction with the K\"ahler metric $\omega $.
Higgs bundles and Higgs sheaves were studied by Hitchin (\cite{Hi}) and Simpson (\cite{Si}, \cite{Si2}); they play an important role in many different areas including gauge theory, K\"ahler and hyperk\"ahler geometry, group representations, and nonabelian Hodge theory. A Higgs sheaf on $(M, \omega )$ is a pair $(\mathcal{E}, \phi )$, where $\mathcal{E}$ is a coherent sheaf on $M$ and the Higgs field $\phi \in \Omega^{1,0 }(\mathrm{End} (\mathcal{E}))$ is a holomorphic section such that $\phi \wedge \phi =0 $. If the sheaf $\mathcal{E}$ is torsion-free (resp. reflexive, locally free), then we say the Higgs sheaf $(\mathcal{E}, \phi )$ is torsion-free (resp. reflexive, locally free). A torsion-free Higgs sheaf $(\mathcal{E}, \phi )$ is said to be
$\omega $-stable (respectively, $\omega$-semi-stable), if for every $\phi$-invariant coherent proper sub-sheaf $\mathcal{F}\hookrightarrow \mathcal{E}$, it holds: \begin{eqnarray} \mu_{\omega} (\mathcal{F})=\frac{\deg_{\omega} (\mathcal{F})}{\mathrm{rank} (\mathcal{F})}< (\leq ) \mu_{\omega} (\mathcal{E})=\frac{\deg_{\omega} (\mathcal{E})}{\mathrm{rank} (\mathcal{E})}, \end{eqnarray} where $\mu_{\omega} (\mathcal{F})$ is called the $\omega$-slope of $\mathcal{F}$.
Given a Hermitian metric $H$ on the locally free part of the Higgs sheaf $(\mathcal{E}, \phi )$, we consider the Hitchin-Simpson connection
\begin{equation}\overline{\partial}_{\phi}:=\overline{\partial}_{\mathcal{E}}+\phi , \quad D_{H, \phi }^{1, 0}:=D_{H }^{1, 0} +\phi ^{\ast H}, \quad D_{H, \phi }= \overline{\partial}_{\phi}+ D_{H, \phi }^{1, 0},\end{equation} where $D_{H}$ is the Chern connection with respect to the metric $H$ and $\phi ^{\ast H}$ is the adjoint of $\phi $ with respect to $H$. The curvature of the Hitchin-Simpson connection is \begin{equation} F_{H, \phi}=F_{H} +[\phi , \phi ^{\ast H}] +D_{H}^{1, 0}\phi + \overline{\partial }_{\mathcal{E}} \phi^{\ast H}, \end{equation} where $F_{H}$ is the curvature of the Chern connection $D_{H}$. A Hermitian metric $H$ on the Higgs sheaf $(\mathcal{E}, \phi )$ is said to be admissible Hermitian-Einstein if it is admissible and
satisfies the following Einstein condition on $M\setminus \Sigma $, i.e. \begin{equation}\label{HE} \sqrt{-1}\Lambda_{\omega} (F_{H} +[\phi , \phi ^{\ast H}]) =\lambda \mathrm{Id}_{\mathcal{E}}, \end{equation} where $\lambda $ is a constant given by $\lambda =\frac{2\pi}{\mathrm{Vol}(M, \omega)} \mu_{\omega} (\mathcal{E})$. Hitchin (\cite{Hi}) and Simpson (\cite{Si}) proved that a Higgs bundle admits a Hermitian-Einstein metric if and only if it is Higgs poly-stable. Biswas and Schumacher \cite{BiS} studied the Donaldson-Uhlenbeck-Yau theorem for reflexive Higgs sheaves.
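The value of the constant $\lambda $ in (\ref{HE}) is forced by a formal Chern-Weil computation (a sketch, assuming, as for admissible metrics, that $\deg_{\omega}(\mathcal{E})$ is computed by integrating the first Chern form of $H$ over $M\setminus \Sigma $):
\begin{equation*}
\int_{M\setminus \Sigma}\mathrm{tr}\big(\sqrt{-1}\Lambda_{\omega} (F_{H} +[\phi , \phi ^{\ast H}])\big)\frac{\omega^{n}}{n!}
=\int_{M\setminus \Sigma}\sqrt{-1}\,\mathrm{tr}(F_{H})\wedge\frac{\omega^{n-1}}{(n-1)!}
=2\pi \deg_{\omega}(\mathcal{E}),
\end{equation*}
since $\mathrm{tr}[\phi , \phi ^{\ast H}]=0$; comparing this with the integral of the trace of the right hand side of (\ref{HE}), which equals $\lambda\, r\, \mathrm{Vol}(M, \omega)$ with $r=\mathrm{rank}(\mathcal{E})$, gives $\lambda =\frac{2\pi}{\mathrm{Vol}(M, \omega)} \mu_{\omega} (\mathcal{E})$.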
In this paper, we study semi-stable Higgs sheaves. We say that a torsion-free Higgs sheaf $(\mathcal{E}, \phi )$ admits an approximate admissible Hermitian-Einstein structure if for every positive $\delta $ there is an admissible Hermitian metric $H_{\delta }$ such that \begin{equation}
\sup _{x\in M\setminus \Sigma } |\sqrt{-1}\Lambda_{\omega }(F_{H_{\delta}}+[\phi , \phi^{\ast H_{\delta}}])-\lambda \mathrm{Id}_{\mathcal{E}}|_{H_{\delta}}(x)<\delta . \end{equation} The approximate Hermitian-Einstein structure was introduced by Kobayashi (\cite{Ko2}) for holomorphic vector bundles; it is the differential geometric counterpart of semi-stability. Kobayashi \cite{Ko2} proved that there is an approximate Hermitian-Einstein structure on a semi-stable holomorphic vector bundle over an algebraic manifold, and he conjectured that the same should be true over any K\"ahler manifold. The conjecture was confirmed in \cite{Ja,LZ2}. In this paper, we prove that the corresponding result holds for a semi-stable reflexive Higgs sheaf over a compact K\"ahler manifold.
\begin{theorem}\label{thm 1.1} A reflexive Higgs sheaf $(\mathcal{E}, \phi )$ on an $n$-dimensional compact K\"ahler manifold $(M, \omega)$ is semi-stable if and only if it admits an approximate admissible Hermitian-Einstein structure. In particular, for a semi-stable reflexive Higgs sheaf $(\mathcal{E}, \phi )$ of rank $r$, we have the following Bogomolov type inequality \begin{equation}\label{Bog1} \int_{M} (2c_{2}(\mathcal{E})-\frac{r-1}{r}c_{1}(\mathcal{E})\wedge c_{1}(\mathcal{E}))\wedge\frac{\omega^{n-2}}{(n-2)!}\geq 0 . \end{equation} \end{theorem}
The Bogomolov inequality was first obtained by Bogomolov (\cite{Bo}) for semi-stable holomorphic vector bundles over complex algebraic surfaces, and it has been extended to certain classes of generalized vector bundles, including parabolic bundles and orbibundles. By constructing a Hermitian-Einstein metric, Simpson proved the Bogomolov inequality for stable Higgs bundles on compact K\"ahler manifolds. Recently, Langer (\cite{La}) proved the Bogomolov type inequality for semi-stable Higgs sheaves over algebraic varieties by using an algebraic-geometric method. His method cannot be applied to the K\"ahler manifold case. We use an analytic method to study the Bogomolov inequality for semi-stable reflexive Higgs sheaves over compact K\"ahler manifolds, and new ideas are needed.
We now give an overview of our proof. As in \cite{BS}, we regularize the reflexive sheaf $\mathcal{E}$, i.e. we blow up along smooth centers finitely many times, $\pi_{i}: M_{i}\rightarrow M_{i-1}$, where $i=1, \cdots , k$ and $M_{0}=M$, such that the pull-back of $\mathcal{E}^{\ast}$ to $M_{k} $ modulo torsion is locally free and
\begin{equation}
\pi = \pi_{1}\circ \cdots \circ \pi_{k}: M_{k}\rightarrow M
\end{equation}
is biholomorphic outside $\Sigma $. In the following, we denote $M_{k}$ by $\tilde{M}$, the exceptional divisor $\pi^{-1} \Sigma $ by $\tilde{\Sigma }$, and the holomorphic vector bundle $(\pi^{\ast}\mathcal{E}^{\ast}/torsion)^{\ast}$ by $E$. Since $\mathcal{E}$ is locally free outside $\Sigma $, and the holomorphic bundle $E$ is isomorphic to $\mathcal{E}$ on $\tilde{M}\setminus \tilde{\Sigma}$, the pull-back field $\pi^{\ast}\phi $ is a holomorphic section of $\Omega^{1,0 }(\mathrm{End} (E))$ on $\tilde{M}\setminus \tilde{\Sigma}$. By Hartogs' extension theorem, the holomorphic section $\pi^{\ast}\phi $ can be extended to the whole $\tilde{M}$ as a Higgs field of $E$. In the following, we also denote the extended Higgs field $\pi^{\ast}\phi $ by $\phi $ for simplicity. So we get a Higgs bundle $(E, \phi )$ on $\tilde{M}$ which is isomorphic to the Higgs sheaf $(\mathcal{E}, \phi )$ outside the exceptional divisor $\tilde{\Sigma }$.
It is well known that $\tilde{M}$ is also K\"ahler (\cite{GH}). Fix a K\"ahler metric $\eta $ on $\tilde{M}$ and set
\begin{equation}\omega_{\epsilon }=\pi^{\ast}\omega +\epsilon \eta \end{equation}
for any small $0<\epsilon \leq 1$. Let $K_{\epsilon}(t, x, y)$ be the heat kernel with respect to the K\"ahler metric $\omega_{\epsilon}$. Bando and Siu (Lemma 3 in \cite{BS}) obtained a uniform Sobolev inequality for $(\tilde{M}, \omega_{\epsilon})$; using Cheng and Li's estimate (\cite{CL}), they obtained a uniform upper bound for the heat kernels $K_{\epsilon}(t, x, y)$.
Given a smooth Hermitian metric $\hat{H}$ on the bundle $E$, it is easy to see that there exists a constant $\hat{C}_{0}$ such that
\begin{equation}\label{initial1}
\int_{\tilde{M}}( |\Lambda_{\omega_{\epsilon }}F_{\hat{H}}|_{\hat{H}}+|\phi |_{\hat{H}, \omega_{\epsilon}}^{2})\frac{\omega_{\epsilon}^{n}}{n!}\leq \hat{C}_{0},
\end{equation}
for all $0< \epsilon \leq 1$. This also gives a uniform bound on $ \int_{\tilde{M}} |\Lambda_{\omega_{\epsilon }}(F_{\hat{H}}+[\phi , \phi ^{\ast \hat{H}}])|_{\hat{H}}\frac{\omega_{\epsilon}^{n}}{n!}$.
We study the following evolution equation on Higgs bundle $(E, \phi )$ with the fixed initial metric $\hat{H}$ and with respect to the K\"ahler metric $\omega_{\epsilon}$, \begin{equation}\label{DDD1} \left \{\begin{split} &H_{\epsilon}(t)^{-1}\frac{\partial H_{\epsilon}(t)}{\partial t}=-2(\sqrt{-1}\Lambda_{\omega_{\epsilon}}(F_{H_{\epsilon}(t)}+[\phi , \phi ^{\ast H_{\epsilon}(t)}])-\lambda_{\epsilon }\mathrm{Id}_{E}),\\ &H_{\epsilon}(0)=\hat{H},\\ \end{split} \right. \end{equation}
where $\lambda_{\epsilon} =\frac{2\pi}{\mathrm{Vol}(\tilde{M}, \omega_{\epsilon})} \mu_{\omega_{\epsilon}} (E)$. Simpson (\cite{Si}) proved the existence of a long time solution of the above heat flow. By the standard parabolic estimates and the uniform upper bound of the heat kernels $K_{\epsilon}(t, x, y)$, we know that $|\Lambda_{\omega_{\epsilon }}(F_{H_{\epsilon}(t)}+[\phi , \phi ^{\ast H_{\epsilon}(t)}])|_{H_{\epsilon}(t)}$ has a uniform $L^{1}$ bound for $t\geq 0$ and a uniform $L^{\infty}$ bound for $t\geq t_{0}>0$. As in \cite{BS}, taking the limit as $\epsilon \rightarrow 0$, we have a long time solution $H(t)$ of the following evolution equation on $(M\setminus \Sigma) \times [0, +\infty)$, i.e. $H(t)$ satisfies: \begin{equation}\label{SSS1} \left \{\begin{split} &H(t)^{-1}\frac{\partial H(t)}{\partial t}=-2(\sqrt{-1}\Lambda_{\omega}(F_{H(t)}+[\phi , \phi ^{\ast H(t)}])-\lambda \mathrm{Id}_{\mathcal{E}}),\\ &H(0)=\hat{H}.\\ \end{split} \right. \end{equation} Here $H(t)$ can be seen as a Hermitian metric defined on the locally free part of $\mathcal{E}$, i.e. on $M\setminus \Sigma$.
In order to get the admissibility of the Hermitian metric $H(t)$ for positive time $t>0$, we need to show that $|\phi|_{H(t), \omega }\in L^{\infty}$ for $t>0$. In fact, we can prove that $|\phi|_{H(t), \omega }$ has a uniform $L^{\infty}$ bound for $t\geq t_{0}>0$. In \cite{LZ1}, by using the maximum principle, we proved this uniform $L^{\infty}$ bound of $|\phi|_{H(t), \omega }$ along the evolution equation in the Higgs bundle case. In the Higgs sheaf case, since the equation (\ref{SSS1}) has a singularity on $\Sigma $, we cannot use the maximum principle directly. So we need a new argument to get a uniform $L^{\infty}$ bound of $|\phi|_{H(t), \omega }$; see Section 3 for details.
The key part in the proof of Theorem \ref{thm 1.1} is to prove the existence of admissible approximate Hermitian-Einstein structure on a semi-stable reflexive Higgs sheaf. The Bogomolov type inequality (\ref{Bog1}) is an application. In fact, we prove that if the reflexive Higgs sheaf $(\mathcal{E}, \phi )$ is semi-stable, along the evolution equation (\ref{SSS1}), we must have \begin{equation}\label{SS01}
\sup _{x\in M\setminus \Sigma } |\sqrt{-1}\Lambda_{\omega }(F_{H(t)}+[\phi , \phi^{\ast H(t)}])-\lambda \mathrm{Id}_{\mathcal{E}}|_{H(t)}(x)\rightarrow 0, \end{equation}
as $t\rightarrow +\infty $. We prove (\ref{SS01}) by contradiction: if it fails, we can construct a saturated Higgs subsheaf whose $\omega $-slope is greater than $\mu_{\omega}(\mathcal{E})$. Since the singularity set $\Sigma $ is a complex analytic subset of co-dimension at least $3$, it is easy to show that $(M\setminus \Sigma , \omega )$ satisfies all three assumptions that Simpson (\cite{Si}) imposes on the non-compact base K\"ahler manifold. Let us recall Simpson's argument for a Higgs bundle in the case where the base K\"ahler manifold is non-compact. Simpson assumes that there exists a good initial Hermitian metric $K$ satisfying $\sup_{M\setminus \Sigma}|\Lambda_{\omega}F_{K, \phi }|_{K}<\infty $; then he defines the analytic stability of $(\mathcal{E}, \phi , K)$ by using the Chern-Weil formula with respect to the metric $K$ (Lemma 3.2 in \cite{Si}). Under the $K$-analytic stability condition, he constructs a Hermitian-Einstein metric for the Higgs bundle as a limit of the evolution equation (\ref{SSS1}).
Here, we have to pay more attention to the analytic stability (or semi-stability) of $(\mathcal{E}, \phi )$.
Let $\mathcal{F}$ be a saturated sub-sheaf of $\mathcal{E}$. We know that $\mathcal{F}$ can be seen as a sub-bundle of $\mathcal{E}$ outside a singularity set $V=\Sigma_{\mathcal{F}}\cup \Sigma $ of codimension at least $2$, so $\hat{H}$ induces a Hermitian metric $\hat{H}_{\mathcal{F}}$ on $\mathcal{F}$. Bruasse (Proposition 4.1 in \cite{Br}) proved the following Chern-Weil formula
\begin{eqnarray}\label{CW1}
\deg_{\omega }(\mathcal{F})=\int_{M\setminus V} c_{1}(\mathcal{F}, \hat{H}_{\mathcal{F}})\wedge \frac{\omega^{n-1}}{(n-1)!},
\end{eqnarray}
where $c_{1}(\mathcal{F}, \hat{H}_{\mathcal{F}})$ is the first Chern form with respect to the induced metric $\hat{H}_{\mathcal{F}}$. By (\ref{CW1}), we see that the stability (semi-stability) of the reflexive Higgs sheaf $(\mathcal{E}, \phi )$ is equivalent to the analytic stability (semi-stability) with respect to the metric $\hat{H}$ in Simpson's sense. However, it is not clear to us whether the above Chern-Weil formula is still valid if the metric $\hat{H}$ is replaced by an admissible metric $H(t)$ ($t>0$). So the stability (or semi-stability) of the reflexive Higgs sheaf $(\mathcal{E}, \phi )$ may not imply the analytic stability (or semi-stability) with respect to the metric $H(t)$ ($t>0$), and the admissible metric $H(t)$ ($t>0$) cannot be chosen as a good initial metric in Simpson's sense. On the other hand, the initial metric $\hat{H}$ may not satisfy the curvature finiteness condition (i.e. $|\Lambda_{\omega}F_{\hat{H}, \phi }|_{\hat{H}}$ may not be $L^{\infty}$ bounded), so we have to modify Simpson's argument in our case; see the proof of Proposition \ref{prop 4.1} in Section 4 for details.
If the reflexive Higgs sheaf $(\mathcal{E} , \phi )$ is $\omega $-stable, it is well known that the pulled-back Higgs bundle $(E, \phi )$ is $\omega_{\epsilon }$-stable for sufficiently small $\epsilon$. By Simpson's result (\cite{Si}), there exists an $\omega_{\epsilon }$-Hermitian-Einstein metric $H_{\epsilon}$ for every small $\epsilon$. In \cite{BS}, Bando and Siu pointed out that it should be possible to get an $\omega$-Hermitian-Einstein metric $H$ on the reflexive Higgs sheaf $(\mathcal{E} , \phi )$ as a limit of the $\omega_{\epsilon }$-Hermitian-Einstein metrics $H_{\epsilon}$ of the Higgs bundle $(E, \phi )$ on $\tilde{M}$ as $\epsilon \rightarrow 0$. At the end of this paper, we solve this problem.
\begin{theorem}\label{thm 1.2} Let $H_{\epsilon}$ be an $\omega_{\epsilon }$-Hermitian-Einstein metric on the Higgs bundle $(E, \phi )$. After choosing a subsequence and rescaling, $H_{\epsilon}$ converges to an $\omega$-Hermitian-Einstein metric $H$ in the local $C^{\infty}$-topology outside the exceptional divisor $\tilde{\Sigma }$ as $\epsilon \rightarrow 0$. \end{theorem}
This paper is organized as follows. In Section 2, we recall some basic estimates for the heat flow (\ref{DDD1}) and, for the reader's convenience, give proofs of the local uniform $C^{0}$, $C^{1}$ and higher order estimates. In Section 3, we give a uniform $L^{\infty}$ bound for the norm of the Higgs field along the heat flow (\ref{SSS1}). In Section 4, we prove the existence of an admissible approximate Hermitian-Einstein structure on a semi-stable reflexive Higgs sheaf and complete the proof of Theorem \ref{thm 1.1}. In Section 5, we prove Theorem \ref{thm 1.2}.
\hspace{0.3cm}
\section{Analytic preliminaries and basic estimates } \setcounter{equation}{0}
Let $(M, \omega )$ be a compact K\"ahler manifold of complex dimension $n$, and $(\mathcal{E}, \phi )$ be a reflexive Higgs sheaf on $M$ with the singularity set $\Sigma $. There exists a bow-up $\pi : \tilde{M}\rightarrow M$ such that the pulling back Higgs bundle $(E, \phi )$ on $\tilde{M}$ is isomorphic to $(\mathcal{E}, \phi )$ outside the exceptional divisor $\tilde{\Sigma }=\pi^{-1}\Sigma $. It is well known that $\tilde{M}$ is also K\"ahler (\cite{GH}). Fix a K\"ahler metric $\eta $ on $\tilde{M}$ and set $\omega_{\epsilon }=\pi^{\ast}\omega +\epsilon \eta $
for $0<\epsilon \leq 1$. Let $K_{\epsilon}(x, y, t)$ be the heat kernel with respect to the K\"ahler metric $\omega_{\epsilon}$. Bando and Siu (Lemma 3 in \cite{BS}) obtained a uniform Sobolev inequality for $(\tilde{M}, \omega_{\epsilon})$. Combining Cheng and Li's estimate (\cite{CL}) with Grigor'yan's result (Theorem 1.1 in \cite{Gr}), we have the following uniform upper bound for the heat kernels; furthermore, we also have a uniform lower bound for the Green functions.
\begin{proposition}\label{Prop 2.1}{\bf(Proposition 2 in \cite{BS})} Let $K_{\epsilon}$ be the heat kernel with respect to the metric $\omega_{\epsilon}$, then for any $\tau >0$, there exists a constant $C_{K}(\tau)$ which is independent of $\epsilon $, such that \begin{equation}\label{kernel01}0\leq K_{\epsilon}(x , y, t)\leq C_{K}(\tau) (t^{-n}\exp{(-\frac{(d_{\omega_{\epsilon}}(x, y))^{2}}{(4+\tau )t})}+1)\end{equation} for every $x, y\in \tilde{M}$ and $0<t < +\infty$, where $d_{\omega_{\epsilon}}(x, y)$ is the distance between $x$ and $y$ with respect to the metric $\omega_{\epsilon}$. There also exists a constant $C_{G}$ such that \begin{equation}\label{green} G_{\epsilon}(x , y)\geq -C_{G}\end{equation} for every $x, y\in \tilde{M}$ and $0<\epsilon \leq 1$, where $G_{\epsilon}$ is the Green function with respect to the metric $\omega_{\epsilon}$. \end{proposition}
Let $H_{\epsilon } (t)$ be the long time solutions of the heat flow (\ref{DDD1}) on the Higgs bundle $(E, \phi )$ with the fixed smooth initial metric $\hat{H}$ and with respect to the K\"ahler metric $\omega_{\epsilon}$. By (\ref{initial1}), there is a constant $\hat{C}_{1}$ independent of $\epsilon $ such that \begin{equation}
\int_{\tilde{M}} |\sqrt{-1}\Lambda_{\omega_{\epsilon }}(F_{\hat{H}}+[\phi , \phi ^{\ast \hat{H}}])-\lambda_{\epsilon }\mathrm{Id}_{E}|_{\hat{H}}\frac{\omega_{\epsilon}^{n}}{n!}\leq \hat{C}_{1}. \end{equation}
For simplicity, we set: \begin{equation} \Phi (H_{\epsilon}(t), \omega_{\epsilon})=\sqrt{-1}\Lambda_{\omega_{\epsilon}}(F_{H_{\epsilon}(t)}+[\phi , \phi ^{\ast H_{\epsilon}(t)}])-\lambda_{\epsilon }\mathrm{Id}_{E} . \end{equation} The following estimates are essentially proved by Simpson (Lemma 6.1 in \cite{Si}, see also Lemma 4 in \cite{LZ2}). Along the heat flow (\ref{DDD1}), we have:
\begin{equation}\label{F1} (\Delta_{\epsilon}-\frac{\partial }{\partial t})\mbox{\rm tr\,} (\Phi (H_{\epsilon}(t), \omega_{\epsilon}))=0, \end{equation} \begin{equation}\label{F2}
(\Delta_{\epsilon}-\frac{\partial }{\partial t} )|\Phi (H_{\epsilon}(t), \omega_{\epsilon})|_{H_{\epsilon}(t)}^{2}=2|D_{H_{\epsilon}, \phi} (\Phi (H_{\epsilon}(t), \omega_{\epsilon}))|^{2}_{H_{\epsilon}(t), \omega_\epsilon}, \end{equation} and \begin{equation}\label{H00}
(\Delta_{\epsilon } -\frac{\partial }{\partial t}) |\Phi (H_{\epsilon}(t), \omega_{\epsilon})|_{H_{\epsilon}(t)}\geq 0. \end{equation} Then, for $t>0$, \begin{equation}\label{H001}
\int_{\tilde{M}}|\Phi (H_{\epsilon}(t), \omega_{\epsilon}) |_{H_{\epsilon}(t)}\frac{\omega_{\epsilon }^{n}}{n!}\leq \int_{\tilde{M}}|\Phi (\hat{H}, \omega_{\epsilon}) |_{\hat{H}}\frac{\omega_{\epsilon }^{n}}{n!}\leq \hat{C}_{1}, \end{equation}
\begin{equation}\label{H000}
\max_{x\in \tilde{M}}|\Phi (H_{\epsilon}(t), \omega_{\epsilon}) |_{H_{\epsilon}(t)}(x) \leq \int_{\tilde{M}}K_{\epsilon} (x, y, t)|\Phi (\hat{H}, \omega_{\epsilon}) |_{\hat{H}}\frac{\omega_{\epsilon }^{n}}{n!} , \end{equation} and \begin{equation}\label{H0001}
\max_{x\in \tilde{M}}|\Phi (H_{\epsilon}(t+1), \omega_{\epsilon}) |_{H_{\epsilon}(t+1)}(x) \leq \int_{\tilde{M}}K_{\epsilon} (x, y, 1)|\Phi (H_{\epsilon}(t), \omega_{\epsilon}) |_{H_{\epsilon}(t)}\frac{\omega_{\epsilon }^{n}}{n!} . \end{equation}
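For instance, (\ref{H001}) follows from (\ref{H00}) by integrating over $\tilde{M}$ (a formal sketch, ignoring the non-smoothness of the norm at its zero set):
\begin{equation*}
\frac{d}{dt}\int_{\tilde{M}}|\Phi (H_{\epsilon}(t), \omega_{\epsilon}) |_{H_{\epsilon}(t)}\frac{\omega_{\epsilon }^{n}}{n!}\leq \int_{\tilde{M}}\Delta_{\epsilon}|\Phi (H_{\epsilon}(t), \omega_{\epsilon}) |_{H_{\epsilon}(t)}\frac{\omega_{\epsilon }^{n}}{n!}=0,
\end{equation*}
so the $L^{1}$-norm of $\Phi (H_{\epsilon}(t), \omega_{\epsilon})$ is non-increasing in $t$; the estimates (\ref{H000}) and (\ref{H0001}) follow from (\ref{H00}) and the maximum principle for the heat operator.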
By the upper bound of the heat kernels (\ref{kernel01}), we have
\begin{equation}\label{H005}
\max_{x\in \tilde{M}}|\Phi (H_{\epsilon}(t), \omega_{\epsilon}) |_{H_{\epsilon}(t)}(x) \leq C_{K}(\tau )\hat{C}_{1} (t^{-n}+1), \end{equation} and \begin{equation}\label{H00001}
\max_{x\in \tilde{M}}|\Phi (H_{\epsilon}(t+1), \omega_{\epsilon}) |_{H_{\epsilon}(t+1)}(x) \leq 2C_{K}(\tau )\int_{\tilde{M}}|\Phi (H_{\epsilon}(t), \omega_{\epsilon}) |_{H_{\epsilon}(t)}\frac{\omega_{\epsilon }^{n}}{n!} . \end{equation}
Set \begin{equation}\exp (S_{\epsilon}(t))=h_{\epsilon}(t)=\hat{H}^{-1}H_{\epsilon }(t),\end{equation}
where $S_{\epsilon}(t)\in \mathrm{End}(E)$ is self-adjoint with respect to $\hat{H} $ and $H_{\epsilon}(t)$.
By the heat flow (\ref{DDD1}), we have:
\begin{equation}\frac{\partial }{\partial t} \log \det(h_\epsilon(t))=\mbox{\rm tr\,} (h_\epsilon^{-1}\frac{\partial h_\epsilon}{\partial t})=-2\mbox{\rm tr\,} (\Phi (H_{\epsilon}(t), \omega_{\epsilon})),\end{equation} and \begin{eqnarray}\label{1} \int_{\tilde{M}}\mbox{\rm tr\,} (S_{\epsilon}(t))\frac{\omega_{\epsilon }^{n}}{n!} =\int_{\tilde{M}}\log \det (h_{\epsilon}(t))\frac{\omega_{\epsilon }^{n}}{n!}=0 \end{eqnarray} for all $t\geq 0$.
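Here is a brief justification of (\ref{1}) (a sketch): since $h_{\epsilon}(0)=\mathrm{Id}_{E}$, we have $\int_{\tilde{M}}\log \det (h_{\epsilon}(0))\frac{\omega_{\epsilon }^{n}}{n!}=0$, and
\begin{equation*}
\frac{d}{dt}\int_{\tilde{M}}\log \det (h_{\epsilon}(t))\frac{\omega_{\epsilon }^{n}}{n!}
=-2\int_{\tilde{M}}\mbox{\rm tr\,} (\Phi (H_{\epsilon}(t), \omega_{\epsilon}))\frac{\omega_{\epsilon }^{n}}{n!}
=-2\big(2\pi \deg_{\omega_{\epsilon}}(E)-\lambda_{\epsilon}\, r\, \mathrm{Vol}(\tilde{M}, \omega_{\epsilon})\big)=0,
\end{equation*}
where we used $\mbox{\rm tr\,}[\phi , \phi ^{\ast H_{\epsilon}(t)}]=0$, the Chern-Weil formula $\int_{\tilde{M}}\mbox{\rm tr\,}(\sqrt{-1}\Lambda_{\omega_{\epsilon}}F_{H_{\epsilon}(t)})\frac{\omega_{\epsilon }^{n}}{n!}=2\pi \deg_{\omega_{\epsilon}}(E)$, and the definition of $\lambda_{\epsilon}$.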
In the following, we denote: \begin{equation}
B_{\omega_{1}}(\delta )=\{x\in \tilde{M}\,|\,d_{\omega_{1}}(x, \tilde{\Sigma} )<\delta \}, \end{equation} where $d_{\omega_{1}}$ is the distance function with respect to the K\"ahler metric $\omega_{1}$. Since $\hat{H}$ is a smooth Hermitian metric on $E$, $\phi \in \Omega^{1,0}_{\tilde{M}}(\mathrm{End}(E))$ is a smooth field, and $\pi^\ast\omega$ is degenerate only along $\tilde{\Sigma}$, there exist constants $\hat{c}(\delta^{-1})$ and $\hat{b}_k(\delta^{-1})$ such that \begin{equation}\label{initial2} \begin{split}
&\{|\Lambda_{\omega_{\epsilon}}F_{\hat{H}}|_{\hat{H}}+|\phi|_{\hat{H}, \omega_{\epsilon}}^{2}\}(y)\leq \hat{c}(\delta^{-1}), \\
&\{|\nabla_{\hat{H}}^kF_{\hat{H}}|_{\hat{H}, \omega_{\epsilon}}^{2}+|\nabla_{\hat{H}}^{k+1}\phi|_{\hat{H}, \omega_{\epsilon}}^{2}\}\leq \hat{b}_k(\delta^{-1}),\\ \end{split} \end{equation} for all $y\in \tilde{M}\setminus B_{\omega_1}(\frac{\delta}{2})$, all $0\leq \epsilon \leq 1$ and all $k\geq 0$.
In order to get a uniform local $C^0$-estimate of $h_{\epsilon}(t)$, we first prove that $|\Phi (H_{\epsilon}(t), \omega_{\epsilon})|_{H_{\epsilon}(t)}$ is uniformly locally bounded, i.e. we obtain the following lemma.
\begin{lemma}\label{lem 2.2} There exists a constant $\tilde{C}_{1}(\delta ^{-1})$ such that \begin{equation}\label{220}
|\Phi (H_{\epsilon}(t), \omega_{\epsilon})|_{H_{\epsilon}(t)}(x)\leq \tilde{C}_{1}(\delta ^{-1}) \end{equation} for all $(x, t)\in (\tilde{M}\setminus B_{\omega_1}(\delta))\times [0, \infty )$, and all $0< \epsilon \leq 1$. \end{lemma}
{\bf Proof. }Using the inequality (\ref{H000}), we have \begin{equation}\label{221}
|\Phi (H_{\epsilon}(t), \omega_{\epsilon})|_{H_{\epsilon}(t)}(x)\leq \Big(\int_{\tilde{M}\setminus B_{\omega_1}(\frac{\delta}{2})}+\int_{B_{\omega_1}(\frac{\delta}{2})}\Big)K_\epsilon(x, y, t)|\Phi (\hat{H}, \omega_{\epsilon})|_{\hat{H}}(y)\frac{\omega_{\epsilon}^n(y)}{n!}.
&\int_{\tilde{M}\setminus B_{\omega_1}(\frac{\delta}{2})}K_\epsilon(x, y, t)|\Phi (\hat{H}, \omega_{\epsilon})|_{\hat{H}}(y)\frac{\omega_{\epsilon}^n}{n!}\\ \leq&\ (\hat{c}(\delta^{-1})+ \lambda_\epsilon\sqrt{r})\int_{\tilde{M}}K_\epsilon(x, y, t)\frac{\omega_{\epsilon}^n(y)}{n!}\\ \leq&\ \hat{c}_1(\delta^{-1}),\\ \end{split} \end{equation} where $\hat{c}_1(\delta^{-1})$ is a constant independent of $\epsilon$. Since $\pi^\ast\omega$ is degenerate only along $\tilde{\Sigma}$, there exists a constant $\tilde{a}(\delta)$ such that \begin{equation}\label{metric02} \tilde{a}(\delta)\omega_1< \pi^\ast\omega < \omega_\epsilon < \omega_1 \end{equation} on $\tilde{M}\setminus B_{\omega_1}(\frac{\delta}{4})$, for all $0< \epsilon \leq 1$. For $x \in \tilde{M}\setminus B_{\omega_1}(\delta)$ and $y \in \partial (B_{\omega_1}(\frac{\delta}{2}))$, it is clear that \begin{equation} d_{\omega_\epsilon}(x, y)\geq d_{\pi^\ast\omega}(x, y)> \sqrt{\tilde{a}(\delta)}d_{\omega_1}(x, y)\geq \frac{\delta\sqrt{\tilde{a}(\delta)}}{2}. \end{equation} Let $a(\delta)=\frac{\delta\sqrt{\tilde{a}(\delta)}}{2}$. If $x\in \tilde{M}\setminus B_{\omega_1}(\delta)$ and $y\in B_{\omega_1}(\frac{\delta}{2})$, we have \begin{equation} d_{\omega_\epsilon}(x, y)\geq a(\delta) \end{equation} for all $0\leq \epsilon \leq 1$. Then, \begin{equation}\label{223} \begin{split}
&\int_{B_{\omega_1}(\frac{\delta}{2})}K_\epsilon(x, y, t)|\Phi (\hat{H}, \omega_{\epsilon})|_{\hat{H}}(y)\frac{\omega_{\epsilon}^n(y)}{n!}\\
\leq &\ C_K(\tau )\int_{B_{\omega_1}(\frac{\delta}{2})}\Big(t^{-n}\exp\big(-\frac{(d_{\omega_\epsilon}(x, y))^{2}}{(4+\tau)t}\big)+1\Big)|\Phi (\hat{H}, \omega_{\epsilon})|_{\hat{H}}(y)\frac{\omega_{\epsilon}^n(y)}{n!}\\ \leq &\ C_K(\tau )\Big(t^{-n}\exp\big(-\frac{a(\delta)^{2}}{(4+\tau)t}\big)+1\Big)\int_{B_{\omega_1}(\frac{\delta}{2})}
|\Phi (\hat{H}, \omega_{\epsilon})|_{\hat{H}}\frac{\omega_{\epsilon}^n}{n!}\\ \leq &\ C_K(\tau )\Big(\big(\frac{(4+\tau)n}{a(\delta)^{2}}\big)^{n}\exp(-n)+1\Big)\int_{B_{\omega_1}(\frac{\delta}{2})}
|\Phi (\hat{H}, \omega_{\epsilon})|_{\hat{H}}\frac{\omega_{\epsilon}^n}{n!}\\ \leq &\ C_K(\tau )\hat{C}_{1}\Big(\big(\frac{(4+\tau)n}{a(\delta)^{2}}\big)^{n}\exp(-n)+1\Big),\\ \end{split} \end{equation} for all $(x, t)\in (\tilde{M}\setminus B_{\omega_1}(\delta))\times [0, \infty )$, where we used $\sup_{t>0}t^{-n}\exp\big(-\frac{a(\delta)^{2}}{(4+\tau)t}\big)=\big(\frac{(4+\tau)n}{a(\delta)^{2}}\big)^{n}\exp(-n)$. Now (\ref{221}), (\ref{222}) and (\ref{223}) imply (\ref{220}).
$\Box$ \\
By a direct calculation, we have \begin{equation}\label{c01} \begin{split} &\frac{\partial}{\partial t}\log(\mbox{\rm tr\,} h_\epsilon(t)+\mbox{\rm tr\,} h_\epsilon^{-1}(t))\\ =&\ \frac{\mbox{\rm tr\,}(h_\epsilon(t)\cdot h_\epsilon^{-1}(t)\frac{\partial h_\epsilon(t)}{\partial t})-\mbox{\rm tr\,}(h_\epsilon^{-1}(t)\frac{\partial h_\epsilon(t)}{\partial t}\cdot h_\epsilon^{-1}(t))}{\mbox{\rm tr\,} h_\epsilon(t)+\mbox{\rm tr\,} h_\epsilon^{-1}(t)}\\
\leq &\ 2|\Phi (H_{\epsilon}(t), \omega_{\epsilon})|_{H_\epsilon(t)}, \end{split} \end{equation} and \begin{equation}\label{c02}
\log (\frac{1}{2r}(\mbox{\rm tr\,} h_\epsilon(t) + \mbox{\rm tr\,} h_\epsilon(t)^{-1}))\leq |S_\epsilon(t)|_{\hat{H}}\leq r^{\frac{1}{2}}\log (\mbox{\rm tr\,} h_\epsilon(t) + \mbox{\rm tr\,} h_\epsilon(t)^{-1}), \end{equation} where $r=\mathrm{rank} (E)$. Integrating (\ref{c01}) in time and using (\ref{H001}) and (\ref{220}), we have \begin{equation}\label{C0a} \int_{\tilde{M}}\big(\log(\mbox{\rm tr\,} h_\epsilon(t)+\mbox{\rm tr\,} h_\epsilon^{-1}(t))-\log(2r)\big)\frac{\omega_{\epsilon}^{n}}{n!}\leq 2\hat{C}_{1}t, \end{equation} and \begin{equation}\label{C0b} \log(\mbox{\rm tr\,} h_\epsilon(t)+\mbox{\rm tr\,} h_\epsilon^{-1}(t))-\log(2r)\leq 2\tilde{C}_{1}(\delta^{-1})T \end{equation} for all $(x, t) \in (\tilde{M}\setminus B_{\omega_1}(\delta))\times [0, T]$. Then, we have the following local $C^{0}$-estimate of $h_{\epsilon}(t)$.
\begin{lemma}\label{lem 2.3} There exists a constant $\overline{C}_{0}(\delta^{-1}, T)$ which is independent of $\epsilon$ such that \begin{equation}\label{C01}
|S_{\epsilon}(t)|_{\hat{H}}(x)\leq \overline{C}_{0}(\delta^{-1}, T) \end{equation} for all $(x, t) \in (\tilde{M}\setminus B_{\omega_1}(\delta))\times [0, T]$, and all $0< \epsilon \leq 1$. \end{lemma}
In the following lemma, we derive a local $C^{1}$-estimate of $h_{\epsilon }(t)$.
\begin{lemma}\label{lem 2.4} Let $T_\epsilon(t)=h_\epsilon^{-1}(t)\partial_{\hat{H}}h_\epsilon(t)$. Assume that there exists a constant $\overline{C}_{0}$ such that \begin{equation}\label{C02}
\max_{(x, t)\in (\tilde{M}\setminus B_{\omega_1}(\delta))\times [0, T]}|S_{\epsilon}(t)|_{\hat{H}}(x)\leq \overline{C}_{0}, \end{equation} for all $0< \epsilon \leq 1$. Then, there exists a constant $\overline{C}_{1}$ depending only on $\overline{C}_{0}$ and $\delta ^{-1}$ such that \begin{equation}\label{C11}
\max_{(x, t)\in (\tilde{M}\setminus B_{\omega_1}(\frac{3}{2}\delta))\times [0, T]}|T_{\epsilon}(t)|_{\hat{H}, \omega_{\epsilon}}\leq \overline{C}_{1} \end{equation} for all $0< \epsilon \leq 1$. \end{lemma}
{\bf Proof. } By a direct calculation, we have \begin{equation}\label{c1002} \begin{split} &(\Delta_\epsilon-\frac{\partial}{\partial t})\mbox{\rm tr\,} h_\epsilon(t)\\ =&\ 2 \mbox{\rm tr\,} (-\sqrt{-1}\Lambda_{\omega_{\epsilon}}\overline{\partial}h_\epsilon(t)\cdot h_\epsilon^{-1}(t)\cdot\partial_{\hat{H}}h_\epsilon(t))+2\mbox{\rm tr\,}(h_{\epsilon}(t)\Phi (\hat{H}, \omega_{\epsilon})) \\
& +2\sqrt{-1}\Lambda_{\omega_{\epsilon}}\mbox{\rm tr\,} \{h_{\epsilon}(t)\circ ([\phi , \phi ^{\ast H_{\epsilon}(t)}]-[\phi , \phi ^{\ast \hat{H}}])\}\\ =&\ 2 \mbox{\rm tr\,} (-\sqrt{-1}\Lambda_{\omega_{\epsilon}}\overline{\partial}h_\epsilon(t)\cdot h_\epsilon^{-1}(t)\cdot\partial_{\hat{H}}h_\epsilon(t))+2\mbox{\rm tr\,}(h_{\epsilon}(t)\Phi (\hat{H}, \omega_{\epsilon})) \\
& +2\sqrt{-1}\Lambda_{\omega_{\epsilon}}\mbox{\rm tr\,} \{[\phi , h_{\epsilon}(t)]\wedge h_{\epsilon}^{-1}(t)[h_{\epsilon}(t) , \phi ^{\ast \hat{H}}]\}\\ \geq &\ 2 \mbox{\rm tr\,} (-\sqrt{-1}\Lambda_{\omega_{\epsilon}}\overline{\partial}h_\epsilon(t)\cdot h_\epsilon^{-1}(t)\cdot\partial_{\hat{H}}h_\epsilon(t))+2\mbox{\rm tr\,}(h_{\epsilon}(t)\Phi (\hat{H}, \omega_{\epsilon})), \\ \end{split} \end{equation} \begin{equation} \frac{\partial }{\partial t}T_\epsilon(t) =\partial_{H_{\epsilon}(t)}(h_\epsilon^{-1}(t)\frac{\partial }{\partial t}h_\epsilon(t))=-2\partial_{H_{\epsilon}(t)}(\Phi (H_{\epsilon}(t), \omega_{\epsilon})), \end{equation} and \begin{equation}\label{c1001} \begin{split}
&(\Delta_\epsilon-\frac{\partial}{\partial t})|T_\epsilon(t)|^2_{H_\epsilon(t), \omega_{\epsilon}} \geq 2|\nabla_{H_\epsilon(t)}T_\epsilon(t)|^2_{H_\epsilon(t), \omega_{\epsilon}}\\
& -\check{C}_1(|\Lambda_{\omega_{\epsilon}}F_{H_{\epsilon}(t)}|_{H_\epsilon(t)}+|F_{\hat{H}}|_{H_\epsilon(t), \omega_{\epsilon}}+|\phi|_{H_\epsilon(t), \omega_{\epsilon}}^{2}
+|Ric(\omega_{\epsilon})|_{\omega_{\epsilon}})|T_\epsilon(t)|^2_{H_\epsilon(t), \omega_{\epsilon}}\\
& -\check{C}_2|\nabla_{\hat{H}}(\Lambda_{\omega_{\epsilon}}F_{\hat{H}})|_{H_\epsilon(t), \omega_{\epsilon}}|T_\epsilon(t)|_{H_\epsilon(t), \omega_{\epsilon} }-|\nabla_{\hat{H}}\phi |_{H_\epsilon(t), \omega_{\epsilon}}^{2},\\ \end{split} \end{equation} where constants $\check{C}_1, \check{C}_2$ depend only on the dimension $n$ and the rank $r$.
By the local $C^{0}$-assumption (\ref{C02}), the local estimate (\ref{220}) and the definition of $\omega_\epsilon$, it is easy to see that all coefficients on the right hand side of (\ref{c1001}) are uniformly locally bounded outside $\tilde{\Sigma }$. Then there exists a constant $\check{C}_3$ depending only on $\delta^{-1}$ and $\overline{C}_{0}$ such that \begin{equation}\label{c10003} \begin{split}
(\Delta_\epsilon-\frac{\partial}{\partial t})|T_\epsilon(t)|^2_{H_\epsilon(t), \omega_{\epsilon}} \geq&\ 2|\nabla_{H_\epsilon(t)}T_\epsilon(t)|^2_{H_\epsilon(t), \omega_{\epsilon}}\\
& -\check{C}_3|T_\epsilon(t)|^2_{H_\epsilon(t), \omega_{\epsilon}}-\check{C}_3 \\ \end{split} \end{equation} on the domain $\tilde{M}\setminus B_{\omega_1}(\delta)\times [0, T]$.
Let $\varphi_1$, $\varphi_2$ be nonnegative cut-off functions satisfying: \begin{equation} \varphi_1(x)=\left\{ \begin{array}{ll} 0, & x\in B_{\omega_1}(\frac{5}{4}\delta),\\ 1, & x\in \tilde{M}\setminus B_{\omega_1}(\frac{3}{2}\delta), \end{array} \right. \end{equation}
\begin{equation} \varphi_2(x)=\left\{ \begin{array}{ll} 0, & x\in B_{\omega_1}(\delta),\\ 1, & x\in \tilde{M}\setminus B_{\omega_1}(\frac{5}{4}\delta), \end{array} \right. \end{equation}
and $|d\varphi_i|_{\omega_1}^2\leq \frac{8}{\delta^2}$, $-\frac{c}{\delta^2}\omega_1\leq \sqrt{-1}\partial\bar\partial\varphi_i\leq \frac{c}{\delta^2}\omega_1$. By the inequality (\ref{metric02}), there exists a constant $C_1(\delta^{-1})$ depending only on $\delta^{-1}$ such that \begin{equation}
(|d\varphi_i|^2_{\omega_\epsilon}+|\Delta_\epsilon\varphi_i|)\leq C_1(\delta^{-1}), \end{equation} for all $0< \epsilon \leq 1$.
We consider the following test function \begin{equation}
f(\cdot, t)=\varphi_1^2|T_\epsilon(t)|_{H_\epsilon(t), \omega_{\epsilon}}^2+W\varphi_2^2 \mbox{\rm tr\,} h_\epsilon(t), \end{equation} where the constant $W$ will be chosen large enough later. From (\ref{c1002}) and (\ref{c1001}), we have \begin{equation} \begin{split} &(\Delta_\epsilon-\frac{\partial}{\partial t})f\\
\geq&\ \varphi_1^2\big(2|\nabla_{H_\epsilon(t)}T_\epsilon(t)|^2_{H_\epsilon(t), \omega_{\epsilon}}
-\check{C}_3|T_\epsilon(t)|^2_{H_\epsilon(t), \omega_{\epsilon}}-\check{C}_3\big)+(\Delta_{\omega_{\epsilon}}\varphi_1^2)|T_\epsilon(t)|^2_{H_\epsilon(t), \omega_{\epsilon}}\\
&+4\langle\varphi_1\nabla\varphi_1, \nabla|T_\epsilon(t)|_{H_\epsilon(t), \omega_{\epsilon}}^2\rangle_{\omega_{\epsilon}}+W(\Delta_{\omega_{\epsilon}}\varphi_2^2)\mbox{\rm tr\,} h_\epsilon(t)+ 4W\langle\varphi_2\nabla\varphi_2, \nabla\mbox{\rm tr\,} h_\epsilon(t)\rangle_{\omega_{\epsilon}}\\ &+2W\varphi_2^2\big(\mbox{\rm tr\,} (\sqrt{-1}\Lambda_{\omega_\epsilon}h_\epsilon^{-1}(t)\partial_{\hat{H}}h_\epsilon(t)\bar{\partial}h_\epsilon(t))+\mbox{\rm tr\,} (h_\epsilon(t)\Phi (\hat{H}, \omega_{\epsilon}))\big).\\ \end{split} \end{equation} We use \begin{equation} \begin{split}
2\langle \varphi_1\nabla\varphi_1, \nabla|T_\epsilon(t)|_{H_\epsilon(t), \omega_{\epsilon}}^2 \rangle_{\omega_\epsilon}&\geq -4\varphi_1|\nabla\varphi_1|_{\omega_\epsilon}|T_\epsilon(t)|_{H_\epsilon(t), \omega_{\epsilon}}|\nabla_{H_\epsilon(t)}T_\epsilon(t)|_{H_\epsilon(t), \omega_{\epsilon}}\\
&\geq -\varphi_1^2|T_\epsilon(t)|_{H_\epsilon(t), \omega_{\epsilon}}^2-4|\nabla\varphi_1|_{\omega_{\epsilon}}^2|T_\epsilon(t)|_{H_\epsilon(t), \omega_{\epsilon}}^2,\\ \end{split} \end{equation} \begin{equation}
W\langle \varphi_2\nabla\varphi_2, \nabla\mbox{\rm tr\,} h_\epsilon(t) \rangle_{\omega_\epsilon} \geq -\varphi_2^2|\nabla\mbox{\rm tr\,} h_\epsilon(t)|_{H_\epsilon(t), \omega_{\epsilon}}^2-W^{2}|\nabla\varphi_2|_{\omega_\epsilon}^2, \end{equation} and \begin{equation} \begin{split}
&|T_\epsilon(t)|^2_{H_\epsilon(t), \omega_{\epsilon }}\\ =&\ \mbox{\rm tr\,}(\sqrt{-1}\Lambda_{\omega_\epsilon}h_\epsilon^{-1}(t)\partial_{\hat{H}} h_\epsilon(t)H_\epsilon^{-1}(t)\overline{(h_\epsilon^{-1}(t)\partial_{\hat{H}}h_\epsilon(t))}^TH_\epsilon(t))\\ =&\ \mbox{\rm tr\,}(\sqrt{-1}\Lambda_{\omega_\epsilon}h_\epsilon^{-1}(t)\partial_{\hat{H}}h_\epsilon(t)h_\epsilon^{-1}(t)\bar{\partial}h_\epsilon(t))\\ \leq &\ e^{\overline{C}_{0}}\mbox{\rm tr\,}(\sqrt{-1}\Lambda_{\omega_\epsilon}h_\epsilon^{-1}(t)\partial_{\hat{H}}h_\epsilon(t)\bar{\partial}h_\epsilon(t)),\\ \end{split} \end{equation} and choose \begin{equation} W=(\check{C}_3+4C_1(\delta^{-1})+2r)e^{\overline{C}_{0}}+1. \end{equation} Then there exists a positive constant $\tilde{C}_0$ depending only on $\overline{C}_{0}$ and $\delta ^{-1}$ such that \begin{equation}\label{c101}
(\Delta_\epsilon-\frac{\partial}{\partial t})f\geq \varphi_1^2|\nabla_{H_\epsilon(t)}T_\epsilon(t)|^2_{H_\epsilon(t), \omega_{\epsilon}}+\varphi_2^2|T_\epsilon(t)|^2_{H_\epsilon(t), \omega_{\epsilon}}-\tilde{C}_0 \end{equation} on $\tilde{M}\times [0, T]$. Let $f (q, t_{0})=\max_{\tilde{M}\times [0, T]}f$. By the definition of $\varphi_{i}$ and the uniform local $C^{0}$-assumption on $h_{\epsilon}(t)$, we may assume that $$ (q, t_{0}) \in (\tilde{M}\setminus B_{\omega_1}(\frac{5}{4}\delta))\times (0, T]. $$ By the inequality (\ref{c101}), we have \begin{equation}
|T_\epsilon(t_{0})|^2_{H_\epsilon(t_{0}), \omega_{\epsilon }}(q)\leq \tilde{C}_0. \end{equation} So there exists a constant $\overline{C}_{1}$ depending only on $\overline{C}_{0}$ and $\delta ^{-1}$, such that \begin{equation}
|T_\epsilon(t)|^2_{H_\epsilon(t), \omega_\epsilon}(x)\leq \overline{C}_{1} \end{equation} for all $(x, t)\in \tilde{M}\setminus B_{\omega_1}(\frac{3}{2}\delta)\times [0, T]$ and all $0< \epsilon \leq 1$.
$\Box$ \\
One can get the local uniform $C^{\infty}$ estimates of $h_{\epsilon}(t)$ by the standard Schauder estimate of the parabolic equation after getting the local $C^{0}$ and $C^1$ estimates. But by applying the parabolic Schauder estimates, one can only get the uniform $C^{\infty}$ estimates of $h_{\epsilon} (t)$ on $\tilde{M}\setminus B_{\omega_1}(\delta)\times [\tau , T]$, where $\tau >0$ and the uniform estimates depend on $\tau ^{-1}$. In the following, we first use the maximum principle to get a local uniform bound on the curvature $|F_{H_{\epsilon}(t)}|_{H_{\epsilon}(t), \omega_{\epsilon }}$, then we apply the elliptic estimates to get local uniform $C^{\infty}$ estimates. The benefit of our argument is that we can get uniform $C^{\infty}$ estimates of $h_{\epsilon}(t)$ on $\tilde{M}\setminus B_{\omega_1}(\delta)\times [0, T]$. In the following, for simplicity, we denote
\begin{equation}\Xi_{\epsilon, j}=|\nabla_{H_{\epsilon }(t)}^{j}(F_{H_{\epsilon}(t)}+[\phi ,
\phi^{\ast H_{\epsilon}(t)}])|_{H_{\epsilon}(t), \omega_{\epsilon}}^{2}(x)+|\nabla_{H_{\epsilon}(t)}^{j+1} \phi |_{H_{\epsilon}(t), \omega_{\epsilon}}^{2}\end{equation} for $j=0, 1, \cdots $. Here $\nabla_{H_{\epsilon}(t)}$ denotes the covariant derivative with respect to the Chern connection $D_{H_{\epsilon}(t)}$ of $H_{\epsilon}(t)$ and the Riemannian connection $\nabla_{\omega_{\epsilon}}$ of $\omega_{\epsilon}$.
\begin{lemma}\label{lem 2.5} Assume that there exists a constant $\overline{C}_{0}$ such that \begin{equation}
\max_{(x, t)\in (\tilde{M}\setminus B_{\omega_1}(\delta))\times [0, T]}|S_{\epsilon}(t)|_{\hat{H}}(x)\leq \overline{C}_{0}, \end{equation} for all $0< \epsilon \leq 1$. Then, for every integer $k\geq 0$, there exists a constant $\overline{C}_{k+2}$ depending only on $\overline{C}_{0}$, $\delta ^{-1}$ and $k$, such that \begin{equation}\label{C11} \max_{(x, t)\in (\tilde{M}\setminus B_{\omega_1}(2\delta))\times [0, T]}\Xi_{\epsilon, k}\leq \overline{C}_{k+2} \end{equation} for all $0< \epsilon \leq 1$. Furthermore, there exist constants $\hat{C}_{k+2}$ depending only on $\overline{C}_{0}$, $\delta ^{-1}$ and $k$, such that \begin{equation}
\max_{(x, t)\in (\tilde{M}\setminus B_{\omega_1}(2\delta))\times [0, T]}|\nabla_{\hat{H}}^{k+2}h_{\epsilon}|_{\hat{H}, \omega_\epsilon}\leq \hat{C}_{k+2} \end{equation} for all $0< \epsilon \leq 1$. \end{lemma}
{\bf Proof. } By computing, we have the following inequalities (see Lemma 2.4 and Lemma 2.5 in (\cite{LZ1}) for details): \begin{equation}\label{cc01} \begin{split}
&(\Delta_{\epsilon } -\frac{\partial }{\partial t})|\nabla _{H_{\epsilon}(t)}\phi
|_{H_{\epsilon}(t), \omega_\epsilon}^{2}-2|\nabla_{H_{\epsilon}(t)}\nabla_{H_{\epsilon}(t)}\phi |_{H_{\epsilon}(t), \omega_\epsilon}^{2}\\
\geq &-C_{7}(|F_{H_{\epsilon}(t)}|_{H_{\epsilon}(t), \omega_\epsilon}+|Rm(\omega_{\epsilon})|_{\omega_\epsilon}+|\phi |_{H_{\epsilon}(t), \omega_\epsilon}^{2})|\nabla_{H_{\epsilon}(t)} \phi
|_{H_{\epsilon}(t), \omega_\epsilon}^{2}\\
&-C_{7}|\phi |_{H_{\epsilon}(t), \omega_\epsilon}|\nabla Ric (\omega_{\epsilon})|_{\omega_\epsilon}|\nabla_{H_{\epsilon}(t)} \phi|_{H_{\epsilon}(t), \omega_\epsilon},\\ \end{split} \end{equation} \begin{equation} \begin{split}
&(\Delta_{\epsilon} -\frac{\partial }{\partial t})|F_{H_{\epsilon}(t)}+[\phi ,
\phi^{\ast H_{\epsilon}(t)}]|_{H_{\epsilon}(t), \omega_\epsilon}^{2}-2|\nabla_{H_{\epsilon}(t)}(F_{H_{\epsilon}(t)}+[\phi , \phi^{\ast H_{\epsilon}(t)
}])|_{H_{\epsilon}(t), \omega_\epsilon}^{2} \\
\geq &-C_{8}(|F_{H_{\epsilon}(t)}+[\phi , \phi^{\ast H_{\epsilon}(t)}]|_{H_{\epsilon}(t), \omega_\epsilon}^{2}+|\nabla_{H_{\epsilon}(t)} \phi |_{H_{\epsilon}(t), \omega_\epsilon}^{2})^{\frac{3}{2}}\\
&-C_{8} (|\phi
|_{H_{\epsilon}(t), \omega_\epsilon}^{2}+|Rm(\omega_{\epsilon})|_{\omega_\epsilon})(|F_{H_{\epsilon}(t)}+[\phi , \phi^{\ast H_{\epsilon}(t)}]|_{H_{\epsilon}(t), \omega_\epsilon}^{2}+|\nabla_{H_{\epsilon}(t)} \phi |_{H_{\epsilon}(t), \omega_\epsilon}^{2}),\\ \end{split} \end{equation} then \begin{equation}\label{cc02} \begin{split} (\Delta_{\epsilon} -\frac{\partial }{\partial t})\Xi_{\epsilon, 0} \geq&\ 2\Xi_{\epsilon, 1} -C_{8}(\Xi_{\epsilon, 0})^{\frac{3}{2}}\\
&-C_{8} (|\phi
|_{H_{\epsilon}(t), \omega_\epsilon}^{2}+|Rm(\omega_{\epsilon})|_{\omega_\epsilon})(\Xi_{\epsilon, 0})-C_{8}|\nabla Ric (\omega_{\epsilon })|^{2}_{\omega_{\epsilon}},\\ \end{split} \end{equation} where $C_{7}$, $C_{8}$ are constants depending only on the complex dimension $n$ and the rank $r$. Furthermore, we have \begin{equation}\label{Fk} \begin{split} &(\Delta_{\epsilon}-\frac{\partial }{\partial t} )\Xi_{\epsilon, j}\\
\geq&\ 2\Xi_{\epsilon, j+1}-\acute{C}_{j}(\Xi_{\epsilon, j})^{\frac{1}{2}}\{\sum_{i+k=j}((\Xi_{\epsilon, i})^{\frac{1}{2}}+|\phi
|_{H_{\epsilon}(t), \omega_\epsilon}^{2}+|Rm(\omega_{\epsilon})|_{\omega_{\epsilon}}+|\nabla Ric (\omega_{\epsilon })|_{\omega_{\epsilon}})\\ & \cdot
((\Xi_{\epsilon, k})^{\frac{1}{2}}+|\phi
|_{H_{\epsilon}(t), \omega_\epsilon}^{2}+|Rm(\omega_{\epsilon})|_{\omega_{\epsilon}}+|\nabla Ric (\omega_{\epsilon })|_{\omega_{\epsilon}})\},\\ \end{split} \end{equation} where $\acute{C}_{j}$ is a positive constant depending only on the complex dimension $n$, the rank $r$ and $j$. Direct computations yield the following inequality (see (2.5) in (\cite{LZ1}) for details): \begin{equation}\label{phi002} \begin{split}
&(\Delta_{\epsilon}-\frac{\partial }{\partial t})|\phi|_{H_{\epsilon}(t),\omega_{\epsilon}}^{2}\geq 2|\nabla_{H_{\epsilon}(t)} \phi |_{H_{\epsilon}(t), \omega_{\epsilon}}^{2}\\
&+2|\Lambda_{\omega_{\epsilon}}[\phi , \phi^{\ast H_{\epsilon}(t)}]|_{H_{\epsilon}(t)}^{2}-2|Ric(\omega_{\epsilon})|_{\omega_{\epsilon}}|\phi|_{H_{\epsilon}(t),\omega_{\epsilon}}^{2}.\\ \end{split} \end{equation}
From the local $C^{0}$-assumption (\ref{C02}), we see that $|\phi|_{H_{\epsilon}(t), \omega_{\epsilon}}$ is also uniformly bounded on $\tilde{M}\setminus B_{\omega_1}(\delta)\times [0, T]$. By Lemma \ref{lem 2.4}, we have $|T_{\epsilon }(t)|_{H_{\epsilon}(t), \omega_{\epsilon}}$ is uniformly bounded on $\tilde{M}\setminus B_{\omega_1}(\frac{3}{2}\delta)\times [0, T]$. We choose a constant $\hat{C}$ depending only on $\delta^{-1}$ and $\overline{C}_{0}$ such that \begin{equation}
\frac{1}{2}\hat{C}\leq \hat{C} -(|\phi|_{H_{\epsilon}(t), \omega_{\epsilon}}^{2}+|T_{\epsilon }(t)|_{H_{\epsilon}(t), \omega_{\epsilon}}^{2})(x)\leq \hat{C} \end{equation} on $\tilde{M}\setminus B_{\omega_1}(\frac{3}{2}\delta)\times [0, T]$. We consider the test function: \begin{equation}
\zeta (x, t)=\rho^{2}\frac{\Xi_{\epsilon , 0}(x, t)}{\hat{C}-(|\phi|_{H_{\epsilon}(t), \omega_{\epsilon}}^{2}+|T_{\epsilon }(t)|_{H_{\epsilon}(t), \omega_{\epsilon}}^{2})(x)}, \end{equation} where $\rho $ is a cut-off function satisfying: \begin{equation} \rho (x)=\left\{ \begin{array}{ll} 0, & x\in B_{\omega_1}(\frac{13}{8}\delta),\\ 1, & x\in \tilde{M}\setminus B_{\omega_1}(\frac{7}{4}\delta), \end{array} \right. \end{equation}
and $|d\rho |_{\omega_1}^2\leq \frac{8}{\delta^2}$, $-\frac{c}{\delta^2}\omega_1\leq \sqrt{-1}\partial\bar\partial\rho \leq \frac{c}{\delta^2}\omega_1$. We suppose $(x_{0}, t_{0})\in \tilde{M}\setminus B_{\omega_1}(\frac{3}{2}\delta)\times (0, T]$ is a maximum point of $\zeta $. Using (\ref{c10003}), (\ref{cc01}), (\ref{cc02}), (\ref{phi002}) and the fact $\nabla\zeta =0$ at the point $(x_{0}, t_{0})$, we have \begin{equation} \begin{split}
0 \geq&\ (\Delta_{\epsilon }-\frac{\partial }{\partial t})\zeta |_{(x_{0}, t_{0})}\\
=&\ \frac{1}{\hat{C}-(|\phi|_{H_{\epsilon}(t), \omega_\epsilon}^{2}+|T_{\epsilon }(t)|_{H_{\epsilon}(t), \omega_\epsilon}^{2})} (\Delta_{\epsilon }-\frac{\partial }{\partial t})(\rho^{2}\Xi_{\epsilon , 0}) \\& -\rho^{2}\frac{\Xi_{\epsilon , 0}}{(\hat{C}-(|\phi|_{H_{\epsilon}(t), \omega_\epsilon}^{2}+|T_{\epsilon }(t)|_{H_{\epsilon}(t), \omega_\epsilon}^{2}))^{2}}(\Delta_{\epsilon }-\frac{\partial }{\partial t})(\hat{C}-(|\phi|_{H_{\epsilon}(t), \omega_\epsilon}^{2}+|T_{\epsilon }(t)|_{H_{\epsilon}(t), \omega_\epsilon}^{2}))\\
& -\frac{2}{\hat{C}-(|\phi|_{H_{\epsilon}(t), \omega_\epsilon}^{2}+|T_{\epsilon }(t)|_{H_{\epsilon}(t), \omega_\epsilon}^{2})}\nabla (\zeta)\cdot \nabla (\hat{C}-(|\phi|_{H_{\epsilon}(t), \omega_\epsilon}^{2}+|T_{\epsilon }(t)|_{H_{\epsilon}(t), \omega_\epsilon}^{2}))\\
\geq&\ \frac{\Xi_{\epsilon , 0}}{(\hat{C}-(|\phi|_{H_{\epsilon}(t), \omega_\epsilon}^{2}+|T_{\epsilon }(t)|_{H_{\epsilon}(t), \omega_\epsilon}^{2}))^{2}}\{\rho^{2}\frac{2\Xi_{\epsilon , 0}-\check{C}_3 |T_\epsilon(t)|^2_{H_\epsilon(t), \omega_\epsilon}-\check{C}_3}{\hat{C}-(|\phi|_{H_{\epsilon}(t), \omega_\epsilon}^{2}+|T_{\epsilon }(t)|_{H_{\epsilon}(t), \omega_\epsilon}^{2})}\\
& -\rho^{2}\frac{2|Ric(\omega_{\epsilon})|_{\omega_\epsilon}|\phi|_{H_{\epsilon}(t), \omega_\epsilon}^{2}}{\hat{C}-(|\phi|_{H_{\epsilon}(t), \omega_\epsilon}^{2}+|T_{\epsilon }(t)|_{H_{\epsilon}(t), \omega_\epsilon}^{2})}\\
& -C_{8}\rho^{2} \Xi_{\epsilon , 0}^{\frac{1}{2}}-C_{8}\rho^{2} (|\phi
|_{H_{\epsilon}(t), \omega_\epsilon}^{2}+|Rm(\omega_{\epsilon})|_{\omega_\epsilon})-8|d\rho |_{\omega_{\epsilon}}^{2}+\Delta_{\omega_{\epsilon}}\rho^{2}\}\\
& -C_{8}\frac{\rho^{2}|\nabla Ric (\omega_{\epsilon})|_{\omega_{\epsilon}}^{2}}{\hat{C}-(|\phi|_{H_{\epsilon}(t), \omega_\epsilon}^{2}+|T_{\epsilon }(t)|_{H_{\epsilon}(t), \omega_\epsilon}^{2})}.\\ \end{split} \end{equation} So there exist positive constants $\dot{C}_{2}$ and $\overline{C}_{2}$ depending only on $\overline{C}_{0}$ and $\delta ^{-1}$, such that \begin{equation} \zeta (x_{0}, t_{0})\leq \dot{C}_{2}, \end{equation} and \begin{equation} \Xi_{\epsilon , 0} (x, t)\leq \overline{C}_{2} \end{equation} for all $(x, t)\in \tilde{M}\setminus B_{\omega_1}(\frac{7}{4}\delta)\times [0, T]$.
Furthermore, we choose two suitable cut-off functions $\rho_{1}$, $\rho_{2}$, a suitable constant $A$ which depends only on $\overline{C}_{0}$ and $\delta ^{-1}$, and a test function \begin{equation} \zeta_{1}(x, t)=\rho_{1}^{2}\Xi_{\epsilon , 1 }+ A \rho_{2}^{2}\Xi_{\epsilon , 0}. \end{equation} Running an argument similar to the one above, we can show that there exist constants $\overline{C}_{3}$ and $\dot{C}_{3}$ depending only on $\overline{C}_{0}$ and $\delta ^{-1}$ such that \begin{equation} \Xi_{\epsilon , 1} (x, t)\leq \overline{C}_{3}, \end{equation} and \begin{equation}\label{c201}
|\nabla_{\hat{H}}F_{H_{\epsilon}(t)}|_{\hat{H}, \omega_{\epsilon}}^{2}\leq \dot{C}_{3} \end{equation} for all $(x, t)\in \tilde{M}\setminus B_{\omega_1}(\frac{15}{8}\delta)\times [0, T]$.
Recalling the equality
\begin{equation}
\overline{\partial }\partial _{\hat{H}}h_{\epsilon}(t)=h_{\epsilon}(t)(F_{H_{\epsilon}(t)}-F_{\hat{H}})+\overline{\partial }h_{\epsilon}(t)\wedge (h_{\epsilon}(t))^{-1}\partial _{\hat{H}}h_{\epsilon}(t)
\end{equation}
and noting that K\"ahler metrics $\omega_{\epsilon}$ are uniform locally quasi-isometry to $\pi^{\ast }\omega $ outside the exceptional divisor $\tilde{\Sigma}$, by standard elliptic estimates, because we have local uniform bounds on $h_{\epsilon }$, $T_{\epsilon}$, $F_{H_{\epsilon}}$ and $ F_{\hat{H}}$, we get a uniform $C^{1, \alpha }$-estimate of $h_{\epsilon }$ on $\tilde{M}\setminus B_{\omega_1}(\frac{61}{32}\delta)\times [0, T]$.
We can iterate this procedure by induction and then obtain local uniform bounds for $\Xi_{\epsilon , k}$, $|\nabla_{\hat{H}}^{k}F_{H_{\epsilon}(t)}|_{\hat{H}, \omega_\epsilon}^{2}$, and $\|h_{\epsilon}\|_{C^{k+1,\alpha}}$ on $\tilde{M}\setminus B_{\omega_1}(2\delta)\times [0, T]$ for any $k\geq 1$.
$\Box$ \\
From the above local uniform $C^{\infty}$-bounds on $H_{\epsilon}$, we get the following Lemma.
\begin{lemma}\label{lem 2.6} After choosing a subsequence, $H_{\epsilon}( t)$ converges to $H(t)$ in the local $C^\infty$ topology on $(\tilde{M}\setminus \tilde{\Sigma })\times [0, \infty)$ as $\epsilon\rightarrow 0$, and $H(t)$ satisfies (\ref{SSS1}). \end{lemma}
\section{Uniform estimate of the Higgs field } \setcounter{equation}{0}
In this section, we prove that the norm $|\phi |_{H(t), \omega}$ is uniformly bounded along the heat flow (\ref{SSS1}) for $t\geq t_{0}>0$.
Firstly, we know $|\phi|_{\hat{H}, \omega_\epsilon}^2 \in L^{1}(\tilde{M}, \omega_\epsilon )$ and the $L^{1}$-norm is uniformly bounded. In fact, \begin{eqnarray} \begin{array}{lll}
\int_{\tilde{M}}|\phi|_{\hat{H}, \omega_\epsilon}^2\frac{\omega_\epsilon^n}{n!}&=\int_{\tilde{M}}\mbox{\rm tr\,}(\sqrt{-1}\Lambda_{\omega_\epsilon}(\phi\wedge\phi^{\ast\hat{H}}))\frac{\omega_\epsilon^n}{n!}\\ &=\int_{\tilde{M}}\sqrt{-1}\,\mbox{\rm tr\,} (\phi\wedge\phi^{\ast\hat{H}})\wedge\frac{\omega_\epsilon^{n-1}}{(n-1)!}\leq \check{C}_{\phi}< \infty, \end{array} \end{eqnarray}
where $\check{C}_{\phi}$ is a positive constant independent of $\epsilon$. Moreover, we will show that the $L^{1+2a}$-norm of $|\phi|_{\hat{H}, \omega_\epsilon}^2$ is also uniformly bounded for any $0\leq 2a< \frac{1}{2}$. Let us recall Lemma 5.5 in \cite{Sib} (see also Lemma 5.8 in \cite{LZ3}).
\begin{lemma}\label{lem 3.1}{\bf(\cite{Sib})} Let $(M, \omega)$ be a compact K\"ahler manifold of complex dimension $n$, and $\pi: \tilde{M}\rightarrow M$ be a blow-up along a smooth complex sub-manifold $\Sigma$ of complex codimension $k$ where $k\geq 2$. Let $\eta$ be a K\"ahler metric on $\tilde{M}$, and consider the family of K\"ahler metric $\omega_\epsilon=\pi^\ast\omega+\epsilon\eta$. Then for any $0\leq 2a <\frac{1}{k-1}$, we have $\frac{\eta^n}{\omega_\epsilon^n}\in L^{2a}(\tilde{M}, \eta)$, and the $L^{2a}(\tilde{M}, \eta)$-norm of $\frac{\eta^n}{\omega_\epsilon^n}$ is uniformly bounded independent of $\epsilon$, i.e. there is a positive constant $C^\ast$ such that \begin{equation}\label{Laa01} \int_{\tilde{M}}(\frac{\eta^n}{\omega_\epsilon^n})^{2a}\frac{\eta^n}{n!}\leq C^\ast \end{equation} for all $0<\epsilon \leq 1$. \end{lemma}
Since $\phi \in \Omega^{1,0 }(\mathrm{End} (E))$ is a smooth section and $\omega_\epsilon=\pi^\ast\omega+\epsilon\eta$, there exists a uniform constant $\tilde{C}_{\phi }$ such that \begin{equation}
\Big(\frac{|\phi|_{\hat{H}, \omega_\epsilon}^2\frac{\omega_\epsilon^n}{n!}}{\frac{\eta^n}{n!}}\Big)=\frac{n\sqrt{-1}\,\mbox{\rm tr\,} (\phi\wedge\phi^{\ast\hat{H}})\wedge\omega_\epsilon^{n-1}}{\eta^n}\leq \tilde{C}_{\phi }
By (\ref{Laa01}), for any $0\leq 2a <\frac{1}{2}$, there exists a uniform constant $C_{\phi }$ such that \begin{equation}\label{3.9} \begin{split}
&\int_{\tilde{M}}|\phi|_{\hat{H}, \omega_\epsilon}^{2(1+2a)}\frac{\omega_\epsilon^n}{n!}\\
=&\int_{\tilde{M}}\Big(\frac{|\phi|_{\hat{H}, \omega_\epsilon}^2\frac{\omega_\epsilon^n}{n!}}{\frac{\eta^n}{n!}}\Big)^{1+2a}\Big(\frac{\eta^n}{\omega_\epsilon^n}\Big)^{1+2a}\frac{\omega_\epsilon^n}{n!}\\
=&\int_{\tilde{M}}\Big(\frac{|\phi|_{\hat{H}, \omega_\epsilon}^2\frac{\omega_\epsilon^n}{n!}}{\frac{\eta^n}{n!}}\Big)^{1+2a}\Big(\frac{\eta^n}{\omega_\epsilon^n}\Big)^{2a}\frac{\eta^n}{n!}\\ \leq &C_\phi \end{split} \end{equation} for all $0<\epsilon \leq 1$. Letting $\epsilon\rightarrow 0$ in (\ref{3.9}), we have the following lemma.
\begin{lemma}\label{lem 3.2}For any $0\leq 2a <\frac{1}{2}$, we have $|\phi|_{\hat{H}, \omega }^{2}\in L^{1+2a}(M\setminus \Sigma , \omega )$, i.e. there exists a constant $C_{\phi }$ such that \begin{equation}\label{Lp01}
\int_{M\setminus \Sigma }|\phi|_{\hat{H}, \omega }^{2(1+2a)}\frac{\omega ^n}{n!}\leq C_\phi . \end{equation} \end{lemma}
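The passage to the limit can be sketched as follows (using only the locally uniform convergence $\omega_\epsilon\rightarrow \pi^{\ast}\omega$ away from $\tilde{\Sigma}$ and the fact that $\pi$ is a biholomorphism outside $\Sigma$): for every compact subset $K\Subset M\setminus \Sigma$ we have
\begin{equation*}
\int_{K}|\phi|_{\hat{H}, \omega }^{2(1+2a)}\frac{\omega ^n}{n!}
=\lim_{\epsilon\rightarrow 0}\int_{\pi^{-1}(K)}|\phi|_{\hat{H}, \omega_\epsilon }^{2(1+2a)}\frac{\omega_\epsilon ^n}{n!}\leq C_\phi ,
\end{equation*}
and letting $K$ exhaust $M\setminus \Sigma$ gives (\ref{Lp01}).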
On $M\setminus\Sigma$, we have (see (2.5) in \cite{LZ1} for details) \begin{equation}\label{eqn:1}
(\Delta-\frac{\partial}{\partial t})|\phi|_{H(t), \omega}^2\geq 2|\nabla_{H(t)}\phi|_{H(t), \omega}^2+2|\sqrt{-1}\Lambda_\omega[\phi, \phi^{\ast H(t)}]|_{H(t)}^2-2| Ric_\omega |_{\omega} |\phi|_{H(t), \omega}^2. \end{equation} By a direct computation, we have \begin{equation} \begin{aligned}
(\Delta-\frac{\partial}{\partial t})\log(|\phi|_{H(t), \omega}^2+e)=&\ \frac{1}{|\phi|_{H(t), \omega}^2+e}(\Delta-\frac{\partial}{\partial t})|\phi|_{H(t), \omega}^2-\frac{\nabla|\phi|_{H(t), \omega}^2\cdot\nabla|\phi|_{H(t), \omega}^2}{(|\phi|_{H(t), \omega}^2+e)^2}\\
\geq&\ \frac{1}{|\phi|_{H(t), \omega}^2+e}(\Delta-\frac{\partial}{\partial t})|\phi|_{H(t), \omega}^2-\frac{2|\nabla_{H(t)}^{1,0}\phi|_{H(t), \omega}^2\cdot|\phi|_{H(t), \omega}^2}{(|\phi|_{H(t), \omega}^2+e)^2}. \end{aligned} \end{equation} Combining this with \eqref{eqn:1}, we obtain \begin{equation}\label{eqn:3}
(\Delta-\frac{\partial}{\partial t})\log(|\phi|_{H(t), \omega}^2+e)\geq\frac{2|\Lambda_\omega[\phi, \phi^{\ast H(t)}]|_{H(t)}^2}{|\phi|_{H(t), \omega}^2+e}-2|Ric_\omega|_{\omega} \end{equation} on $M\setminus\Sigma$. Based on Lemma 2.7 in \cite{Si2}, we obtain \begin{equation}\label{eqn:2}
|\sqrt{-1}\Lambda_\omega[\phi, \phi^{\ast H(t)}]|_{H(t)}=|[\phi, \phi^{\ast H(t)}]|_{H(t), \omega}\geq a_1|\phi|_{H(t), \omega}^2-a_2(|\phi|_{\hat{H}, \omega}^2+1), \end{equation} where $a_1$ and $a_2$ are positive constants depending only on $r$ and $n$. Then, for any $0\leq 2a < \frac{1}{2}$, we have \begin{equation} \begin{split}
& 2|\Lambda_\omega[\phi, \phi^{\ast H(t)}]|_{H(t)}^2\\
\geq\ &(|\Lambda_\omega[\phi, \phi^{\ast H(t)}]|_{H(t)}+e)^2-6e^2\\
\geq\ &(|\Lambda_\omega[\phi, \phi^{\ast H(t)}]|_{H(t)}+e)^{1+\frac{a}{2}}-6e^2\\
\geq\ & a_{3}(|\phi|_{H(t), \omega }^2+e)^{1+\frac{a}{2}}-a_{4}|\phi|_{\hat{H}, \omega}^{2+a}-a_{5}, \end{split} \end{equation} where $a_{3}$, $a_{4}$ and $a_{5}$ are positive constants depending only on $a$, $r$ and $n$. Then it is clear that (\ref{eqn:3}) implies: \begin{equation}
(\Delta-\frac{\partial}{\partial t})\log(|\phi|_{H(t), \omega }^2+e)\geq a_3(|\phi|_{H(t), \omega}^2+e)^{\frac{a}{2}}-a_4|\phi|_{\hat{H}, \omega}^{2+a}-a_5-2|Ric_\omega|_{\omega}, \end{equation} on $M\setminus\Sigma$.
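In the last step we divided the lower bound for $2|\Lambda_\omega[\phi, \phi^{\ast H(t)}]|_{H(t)}^2$ by $|\phi|_{H(t), \omega}^2+e$ and used that $|\phi|_{H(t), \omega}^2+e\geq e>1$, so that the error terms can only decrease: \begin{equation} \frac{a_{4}|\phi|_{\hat{H}, \omega}^{2+a}+a_{5}}{|\phi|_{H(t), \omega}^2+e}\leq a_{4}|\phi|_{\hat{H}, \omega}^{2+a}+a_{5}. \end{equation}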
In the following, we denote: \begin{equation}
f=\log(|\phi|_{H(t), \omega}^2+e). \end{equation} For any $b> 1$, we have: \begin{equation} \begin{aligned}
(\Delta-\frac{\partial}{\partial t})f^b=&\ bf^{b-1}(\Delta-\frac{\partial}{\partial t})f+b(b-1)|\nabla f|_{\omega}^2f^{b-2}\\
\geq&\ a_3bf^{b-1}(|\phi|_{H(t), \omega}^2+e)^{\frac{a}{2}}-a_4bf^{b-1}|\phi|_{\hat{H}, \omega}^{2+a}-(a_5+2|Ric_\omega|_{\omega})bf^{b-1}\\
& +b(b-1)|\nabla f|_{\omega}^2f^{b-2}. \end{aligned} \end{equation}
Choosing a cut-off function $\varphi_{\delta }$ with \begin{equation} \varphi_\delta(x)=\left\{ \begin{array}{ll} 1, & x\in M\setminus B_{2\delta}(\Sigma),\\ 0, & x\in B_{\delta}(\Sigma), \end{array} \right. \end{equation}
where $B_{\delta}=\{x\in M|d_{\omega}(x, \Sigma )<\delta \}$, and integrating by parts, we have \begin{equation} \begin{split}
&-\frac{\partial}{\partial t}\int_M\varphi_\delta^4f^b\frac{\omega^n}{n!}=\int_M\varphi_\delta^4(\Delta-\frac{\partial}{\partial t})f^b\frac{\omega^n}{n!}+\int_M4\varphi_\delta^3\nabla\varphi_\delta\nabla f^b\frac{\omega^n}{n!}\\
\geq &\int_Ma_3b\varphi_\delta^4f^{b-1}(|\phi|_{H(t), \omega }^2+e)^{\frac{a}{2}}\frac{\omega^n}{n!}-\int_Ma_4b\varphi_\delta^4f^{b-1}
|\phi|_{\hat{H}, \omega}^{2+a}\frac{\omega^n}{n!}\\
& -\int_M(a_5+2|Ric_\omega|_{\omega})b\varphi_\delta^4f^{b-1}\frac{\omega^n}{n!}+\int_Mb(b-1)\varphi_\delta^4
|\nabla f|_{\omega}^2f^{b-2}\frac{\omega^n}{n!}\\
& -\int_M4b\varphi_\delta^3|\nabla\varphi_\delta|_{\omega}\cdot|\nabla f|_{\omega}f^{b-1}\frac{\omega^n}{n!}\\
\geq&\int_Ma_3b\varphi_\delta^4f^{b-1}(|\phi|_{H(t), \omega }^2+e)^{\frac{a}{2}}\frac{\omega^n}{n!}-\int_Ma_4b\varphi_\delta^4f^{b-1}
(|\phi|_{\hat{H}, \omega}^{2})^{1+\frac{a}{2}}\frac{\omega^n}{n!}\\
& -\int_M(a_5+2|Ric_\omega|_{\omega})b\varphi_\delta^4f^{b-1}\frac{\omega^n}{n!}
-\int_M\frac{4b}{b-1}\varphi_\delta^2|\nabla\varphi_\delta|_{\omega}^2f^b\frac{\omega^n}{n!}\\
\geq &\int_Ma_3b\varphi_\delta^4f^{b-1}f^{(b-1)B}\frac{(|\phi|_{H(t), \omega}^2+e)^{\frac{a}{2}}}{f^{(b-1)B}}\frac{\omega^n}{n!}\\
& -a_4b\Big(\int_M(\varphi_\delta^3f^{b-1})^p\frac{\omega^n}{n!}\Big)^{\frac{1}{p}}\Big(\int_M\varphi_\delta^q(|\phi|_{\hat{H}, \omega }^2)^{1+2a} \frac{\omega^n}{n!}\Big)^{\frac{1}{q}}\\
& -\int_M(a_5+2|Ric_\omega|_{\omega})b\varphi_\delta^4f^{b-1}\frac{\omega^n}{n!}\\
& -\frac{4b}{b-1}\Big(\int_M\varphi_\delta^4f^{2b}\frac{\omega^n}{n!}\Big)^\frac{1}{2}
\Big(\int_M|\nabla\varphi_\delta|_{\omega}^4\frac{\omega^n}{n!}\Big)^{\frac{1}{2}},\\ \end{split} \end{equation} where $q=\frac{2(1+2a)}{2+a}$, $p=\frac{2(1+2a)}{3a}$ and $B=\frac{2(1+2a)}{3a}+\frac{2b}{b-1}$. We can see that there exists a constant $C(a,b)$ depending only on $a$ and $b$ such that \begin{equation}
\frac{(|\phi|_{H(t), \omega}^2+e)^{\frac{a}{2}}}{(\log(|\phi|_{H(t), \omega}^2+e))^{(b-1)B}}\geq C(a,b). \end{equation} Since the complex codimension of $\Sigma$ is at least $3$, we can choose the cut-off function $\varphi_\delta$ such that \begin{equation}\label{cut}
\int_M|\nabla\varphi_\delta|_{\omega}^4 \frac{\omega^n}{n!} \sim O(\delta^{-4}\delta^6)=O(\delta^2). \end{equation} By (\ref{Lp01}), we obtain \begin{equation}\label{Lp04} \begin{split} -\frac{\partial}{\partial t}\int_M\varphi_\delta^4f^b\frac{\omega^n}{n!}\geq& \ a_6\int_M\varphi_\delta^4f^{(b-1)B}\frac{\omega^n}{n!}-a_7\Big(\int_M\varphi_\delta^4f^{(b-1)B}\frac{\omega^n}{n!}\Big)^{\frac{1}{B}}\\ &-a_8\Big(\int_M\varphi_\delta^4f^{(b-1)B}\frac{\omega^n}{n!}\Big)^{\frac{1}{B}} -a_{9}\Big(\int_M\varphi_\delta^4f^{(b-1)B}\frac{\omega^n}{n!}\Big)^{\frac{b}{(b-1)B}},\\ \end{split} \end{equation}
where $a_i$ are positive constants depending only on $r, n, a, b, |Ric_\omega|_{\omega }, \mathrm{Vol}(M, \omega )$ and $C_{\phi}$ for $i=6, 7, 8, 9.$
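Two elementary points used in the chain of inequalities above deserve to be spelled out. First, in the second inequality the gradient cross term can be absorbed by Young's inequality: for any $b>1$, \begin{equation} 4b\varphi_\delta^3|\nabla\varphi_\delta|_{\omega}|\nabla f|_{\omega}f^{b-1}\leq b(b-1)\varphi_\delta^4|\nabla f|_{\omega}^2f^{b-2}+\frac{4b}{b-1}\varphi_\delta^2|\nabla\varphi_\delta|_{\omega}^2f^{b}, \end{equation} which cancels the positive term $b(b-1)\varphi_\delta^4|\nabla f|_{\omega}^2f^{b-2}$ and produces the term $\frac{4b}{b-1}\varphi_\delta^2|\nabla\varphi_\delta|_{\omega}^2f^{b}$. Second, the constant $C(a,b)$ exists because, for $a>0$, the continuous function $x\mapsto (x+e)^{\frac{a}{2}}(\log(x+e))^{-(b-1)B}$ is positive on $[0, +\infty)$ and tends to $+\infty$ as $x\rightarrow +\infty$, hence is bounded below by a positive constant.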
\begin{lemma}\label{lem 3.3}
For any $b>1$, there exists a constant $\hat{C}_{b}$ depending only on $r, n, b, |Ric_\omega|_{\omega}, \mathrm{Vol}(M, \omega )$ and $C_{\phi}$ such that \begin{equation}\label{Lp02}
\int_{M\setminus \Sigma }(\log(|\phi|_{H(t), \omega }^2+e))^{b}\frac{\omega^n}{n!}\leq \hat{C}_{b} \end{equation} for all $t\geq 0$. \end{lemma}
{\bf Proof. } Suppose that $\int_M\varphi_\delta^4f^b\frac{\omega^n}{n!}(t^{\ast})=\max_{t\in [0, T]}\int_M\varphi_\delta^4f^b\frac{\omega^n}{n!}(t)$ with $t^{\ast }>0$. Choosing $a=\frac{1}{8}$ in (\ref{Lp04}), at the point $t^{\ast}$, we have \begin{equation} \begin{split}
0\geq &-\frac{\partial}{\partial t}|_{t=t^{\ast}}\int_M\varphi_\delta^4f^b\frac{\omega^n}{n!}\\ \geq&\ a_6\int_M\varphi_\delta^4f^{(b-1)B}\frac{\omega^n}{n!}-a_7\Big(\int_M\varphi_\delta^4f^{(b-1)B}\frac{\omega^n}{n!}\Big)^{\frac{1}{B}}\\ &-a_8\Big(\int_M\varphi_\delta^4f^{(b-1)B}\frac{\omega^n}{n!}\Big)^{\frac{1}{B}} -a_{9}\Big(\int_M\varphi_\delta^4f^{(b-1)B}\frac{\omega^n}{n!}\Big)^{\frac{b}{(b-1)B}}.\\ \end{split} \end{equation}
This inequality implies that there exists a constant $\tilde{C}_{b}$ depending only on $r, n, b, |Ric_\omega|_{\omega}, \mathrm{Vol}(M, \omega )$ and $C_{\phi}$ such that \begin{equation} \int_M\varphi_\delta^4f^{(b-1)B}\frac{\omega^n}{n!}(t^{\ast})\leq \tilde{C}_{b}. \end{equation} So we have \begin{equation}
\max_{t\in [0, T]}\int_M\varphi_\delta^4f^b\frac{\omega^n}{n!}(t)\leq \tilde{C}_{b}+\int_M(\log(|\phi|_{\hat{H}, \omega}^2+e))^b\frac{\omega^n}{n!}. \end{equation} Noting that the last term in the above inequality is also bounded (by Lemma \ref{lem 3.2}, since $(\log (x+e))^{b}\leq C(b)(1+x)$ for all $x\geq 0$), and letting $\delta \rightarrow 0$, we obtain the estimate (\ref{Lp02}).
$\Box$ \\
By the heat equation (\ref{SSS1}), we have \begin{equation}
|\frac{\partial}{\partial t}\log(|\phi|_{H(t), \omega}^2+e)|=\Big|\frac{\frac{\partial}{\partial t}|\phi|_{H(t), \omega}^2}{|\phi|_{H(t), \omega}^2+e}\Big|=\Big|\frac{2\langle[\Phi (H(t), \omega), \phi], \phi\rangle_{H(t)}}{|\phi|_{H(t), \omega}^2+e}\Big|\leq 2|\Phi (H(t), \omega)|_{H(t)}, \end{equation} and combining this with (\ref{eqn:3}) (whose first term on the right hand side is nonnegative), we get \begin{equation}
\Delta(\log(|\phi|_{H(t), \omega}^2+e))\geq -2|Ric_\omega|_{\omega}-2|\Phi (H(t), \omega)|_{H(t)}. \end{equation}
By (\ref{H005}), we have
\begin{equation}\label{H00005}
\max_{x\in M\setminus \Sigma }|\Phi (H(t), \omega) |_{H(t)}(x) \leq C_{K}(\tau )\hat{C}_{1} (t^{-n}+1). \end{equation}
So there exists a positive constant $C^{\ast}(t_{0}^{-1})$ depending only on $t_{0}^{-1}$ and $|Ric_\omega|_{\omega}$ such that \begin{equation}\label{e01}
\Delta(\log(|\phi|_{H(t), \omega}^2+e))\geq -C^{\ast}(t_{0}^{-1}) \end{equation} on $M\setminus \Sigma $, for $t\geq t_{0}>0$. Then, we have \begin{equation} \begin{split} -C^\ast(t_{0}^{-1}) \int_M\varphi_\delta^2f\frac{\omega^n}{n!}&\leq \int_M\varphi_\delta^2f\Delta f\frac{\omega^n}{n!}\\ &=\int_M div(\varphi_\delta^2f\nabla f)\frac{\omega^n}{n!}-\int_M\nabla(\varphi_\delta^2f)\cdot\nabla f\frac{\omega^n}{n!}\\
&=-\int_M|\nabla(\varphi_\delta f)|_{\omega}^2\frac{\omega^n}{n!}+\int_M|\nabla\varphi_\delta|_{\omega}^2 f^2\frac{\omega^n}{n!} \end{split} \end{equation} for $t\geq t_{0}>0$. By (\ref{cut}) and (\ref{Lp02}), we obtain \begin{equation} \begin{split}
& \int_{M\setminus\Sigma}|\nabla f|_{\omega}^2\frac{\omega^n}{n!}=\lim_{\delta \rightarrow 0}\int_{M\setminus B_{2\delta}(\Sigma)}|\nabla f|_{\omega}^2\frac{\omega^n}{n!}\\
\leq &\lim_{\delta \rightarrow 0}\int_M|\nabla(\varphi_\delta f)|_{\omega}^2\frac{\omega^n}{n!}\\
\leq & \lim_{\delta \rightarrow 0} \int_M C^\ast(t_{0}^{-1})\varphi_\delta^2f+|\nabla\varphi_\delta|_{\omega}^2f^2\frac{\omega^n}{n!}\\ \leq & C^\ast(t_{0}^{-1})\cdot \hat{C}_{b}\\ \end{split} \end{equation} for $t\geq t_{0}>0$. This implies
$f\in W^{1,2}(M, \omega)$ and $f$ satisfies the elliptic inequality $\Delta f\geq -C^\ast(t_{0}^{-1})$ globally on $M$ in the weak sense for $t\geq t_{0}>0$. By the standard elliptic estimate (see Theorem 8.17 in \cite{GT}), we can show that $f\in L^{\infty}(M)$ for all $t\geq t_0>0$, with the $L^{\infty}$-norm depending only on $C^\ast(t_{0}^{-1})$, the $L^{b}$-norm (i.e. $\hat{C}_{b}$) and the geometry of $(M, \omega)$, i.e. we have the following proposition.
\begin{proposition}\label{prop 3.4} Along the heat flow (\ref{SSS1}), there exists a positive constant $\hat{C}_{\phi}$ depending only on $r, n, t_{0}^{-1}, C_{\phi}$ and the geometry of $(M, \omega)$ such that \begin{equation}
\sup_{M\setminus \Sigma }|\phi|^2_{H(t), \omega }\leq \hat{C}_{\phi} \end{equation} for all $t\geq t_{0}>0$. \end{proposition}
Recalling the Chern-Weil formula in \cite{Si} (Proposition 3.4) and using Fatou's lemma, we have \begin{equation}\label{CW22} \begin{split} &4\pi^{2}\int_{M} (2c_{2}(\mathcal{E})-c_{1}(\mathcal{E})\wedge c_{1}(\mathcal{E}))\wedge\frac{\omega^{n-2}}{(n-2)!}\\ =&\lim_{\epsilon \rightarrow 0}4\pi^{2}\int_{\tilde{M}} (2c_{2}(E)-c_{1}(E)\wedge c_{1}(E))\wedge\frac{\omega_{\epsilon}^{n-2}}{(n-2)!}\\ =&\lim_{\epsilon \rightarrow 0}\int_{\tilde{M}}\mbox{\rm tr\,} (F_{H_{\epsilon}(t), \phi }\wedge F_{H_{\epsilon}(t), \phi })\wedge \frac{\omega_{\epsilon }^{n-2}}{(n-2)!}\\
=&\lim_{\epsilon \rightarrow 0}\int_{\tilde{M}}(|F_{H_{\epsilon}(t), \phi }|_{H_{\epsilon }(t), \omega_\epsilon}^{2}-|\Lambda_{\omega_{\epsilon}} F_{H_{\epsilon}(t), \phi }|_{H_{\epsilon }(t)}^{2}) \frac{\omega_{\epsilon}^{n}}{n!}\\
\geq& \int_{M\setminus \Sigma}(|F_{H(t), \phi }|_{H(t), \omega}^{2}-|\sqrt{-1}\Lambda_{\omega} F_{H(t), \phi}|_{H(t)}^{2} )\frac{\omega^{n}}{n!}\\ \end{split} \end{equation}
for $t>0$. Here, over a non-projective compact complex manifold, the Chern classes of a coherent sheaf can be defined by the classes of Atiyah-Hirzebruch (\cite{AH}, see \cite{Gr} for details). The $L^{\infty}$ estimate of $|\phi|^2_{H(t), \omega }$, (\ref{H005}) and the above inequality imply that $|F_{H(t)}|_{H(t), \omega}$ is square integrable and $|\Lambda_{\omega } F_{H(t)}|_{H(t)}$ is uniformly bounded, i.e. we have the following corollary.
\begin{corollary}\label{coro 3.5} Let $H(t)$ be a solution of the heat flow (\ref{SSS1}), then $H(t)$ must be an admissible Hermitian metric on $\mathcal{E}$ for every $t>0$. \end{corollary}
\section{Approximate Hermitian-Einstein structure } \setcounter{equation}{0}
Let $H_{\epsilon}(t)$ be the long time solution of (\ref{DDD1}) and $H(t)$ be the long time solution of (\ref{SSS1}). We set: \begin{equation} \exp{S(t)}=h(t)=\hat{H}^{-1}H(t), \end{equation} \begin{equation} \exp{S(t_{1}, t_{2})}=h(t_{1}, t_{2})=H^{-1}(t_{1})H(t_{2}), \end{equation} \begin{equation} \exp{S_{\epsilon}(t_{1}, t_{2})}=h_{\epsilon}(t_{1}, t_{2})=H_{\epsilon}^{-1}(t_{1})H_{\epsilon}(t_{2}). \end{equation} By Lemma 3.1 in \cite{Si}, we have \begin{equation}\label{la02}
\Delta_{\omega_{\epsilon }}\log(\mbox{\rm tr\,} h+\mbox{\rm tr\,} h^{-1})\geq -2|\Lambda_{\omega_{\epsilon} }(F_{H, \phi })|_{H}-2|\Lambda_{\omega_{\epsilon} }(F_{K, \phi })|_{K}, \end{equation} where $\exp{S}=h=K^{-1}H$. By the uniform lower bound of Green functions $G_{\epsilon}$ (\ref{H005}) and the inequalities (\ref{c02}), we have \begin{equation}\label{for4.5}
\|S_{\epsilon }(t_{1}, t_{2})\|_{L^{\infty}(\tilde{M}) }\leq C_{1}\|S_{\epsilon }(t_{1}, t_{2})\|_{L^{1}(\tilde{M}, \omega_{\epsilon })}+C_{2}(t_{0}^{-1}) \end{equation} for $0<t_{0}\leq t_{1} \leq t_{2}$, where $C_{1}$ is a constant depending only on the rank $r$ and $C_{2}(t_{0}^{-1})$ is a constant depending only on $C_{K}$, $C_{G}$ and $t_{0}^{-1}$. Taking the limit $\epsilon \rightarrow 0$, we also have \begin{equation}\label{mean1}
\|S(t_{1}, t_{2})\|_{L^{\infty}( M\setminus \Sigma )}\leq C_{1}\|S(t_{1}, t_{2})\|_{L^{1}( M\setminus \Sigma, \omega )}+C_{2}(t_{0}^{-1}) \end{equation} for $0<t_{0}\leq t_{1} \leq t_{2}$. On the other hand, (\ref{c01}) and (\ref{c02}) imply that \begin{equation}\label{L101} \begin{split}
& r^{-\frac{1}{2}}\|S(t_{1}, t_{2})\|_{L^1 ( M\setminus \Sigma, \omega )}-\mathrm{Vol}(M, \omega )\log(2r)\\ \leq& \int_{t_{1}}^{t_{2}}\int_{M\setminus \Sigma }|\sqrt{-1} \Lambda_{\omega} F_{H(s), \phi }-\lambda\mathrm{Id}_{\mathcal{E}}|_{H(s)}\frac{\omega^{n}}{n!} ds\\ \leq&\ \hat{C}_{1}(t_2-t_1). \\ \end{split} \end{equation}
So, we know that the metrics $H(t_{1})$ and $H(t_{2})$ are mutually bounded on $\mathcal{E}|_{M\setminus \Sigma}$. $(\mathcal{E}|_{M\setminus \Sigma}, \phi )$ can be seen as a Higgs bundle on the non-compact K\"ahler manifold $(M\setminus \Sigma , \omega)$. Let's recall Donaldson's functional defined on the space $\mathscr{P}_{0}$ of Hermitian metrics on the Higgs bundle $(\mathcal{E}|_{M\setminus \Sigma}, \phi )$ (see Section 5 in \cite{Si} for details), \begin{eqnarray}\label{7} \mu_{\omega} (K, H) = \int_{M\setminus \Sigma } \mbox{\rm tr\,} (S \sqrt{-1}\Lambda_{\omega }F_{K, \phi })+ \langle \Psi (S) (D''_{\phi} S) , D''_{\phi} S \rangle_{K}\frac{\omega^{n}}{n!}, \end{eqnarray}
where $\Psi (x, y)= (x-y)^{-2}(e^{y-x }-(y-x)-1)$, $\exp{S}=K^{-1}H$. Since we know that $|\Lambda_{\omega } F_{H(t), \phi }|_{H(t)}$ is uniformly bounded for $t\geq t_{0}>0$, it is easy to see that $H(t)$ (for every $t>0$) belongs to the definition space $\mathscr{P}_{0}$. By Lemma 7.1 in \cite{Si}, we have a formula for the derivative with respect to $t$ of Donaldson's functional, \begin{eqnarray}\label{F5}
\frac{d}{dt}\mu (H(t_{1}), H(t)) = -2\int_{M\setminus \Sigma }|\Phi (H(t), \phi )|_{ H(t)}^{2}\frac{\omega^{n}}{n!}. \end{eqnarray}
\begin{proposition}\label{prop 4.1}
Let $H(t)$ be the long time solution of (\ref{SSS1}). If the reflexive Higgs sheaf $(\mathcal{E}, \phi )$ is $\omega$-semi-stable, then \begin{equation}\label{semi03}
\int_{M\setminus \Sigma}|\sqrt{-1}\Lambda_{\omega} F_{H(t), \phi}-\lambda \mathrm{Id}_{\mathcal{E}} |_{H(t)}^{2} \frac{\omega^{n}}{n!}\rightarrow 0, \end{equation} as $t\rightarrow +\infty$. \end{proposition}
{\bf Proof. } We prove (\ref{semi03}) by contradiction. If not, by the monotonicity of $\|\Lambda_\omega (F_{H(t), \phi})-\lambda\mathrm{Id}\|_{L^{2}}$, we can suppose that
\begin{equation}\lim_{t\rightarrow +\infty }\int_M|\sqrt{-1} \Lambda_\omega F_{H(t), \phi}-\lambda\mathrm{Id}_{\mathcal{E}}|_{H(t)}^{2}\frac{\omega^n}{n!} = C^\ast> 0.\end{equation} By (\ref{F5}), we have \begin{equation}\label{M05}
\mu_\omega(H(t_{0}), H(t))=-\int_{t_{0}}^t\int_{M\setminus \Sigma }|\Lambda_\omega F_{H(s), \phi }-\lambda\mathrm{Id}_{\mathcal{E}}|_{H(s)}^2\frac{\omega^n}{n!}ds\leq -C^\ast(t-t_{0}) \end{equation} for all $0<t_{0} \leq t$. Then it is clear that (\ref{L101}) implies \begin{equation}\label{semi01}
\liminf_{t\rightarrow +\infty}\frac{-\mu_{\omega } (H(t_{0}), H(t))}{\|S(t_{0}, t)\|_{L^{1}(M\setminus \Sigma , \omega )}}\geq r^{-\frac{1}{2}}\frac{C^\ast}{\hat{C}_{1}}. \end{equation}
By the definition of Donaldson's functional (\ref{7}), we must have a sequence $t_{i}\rightarrow +\infty$ such that \begin{equation}\label{CM02}
\|S(1, t_{i})\|_{L^{1}(M\setminus \Sigma , \omega)}\rightarrow +\infty. \end{equation} On the other hand, it is easy to check that \begin{equation}\label{s01}
|S(t_{1}, t_{3})|_{H(t_{1})}\leq r(|S(t_{1}, t_{2})|_{H(t_{1})}+|S(t_{2}, t_{3})|_{H(t_{2})}) \end{equation} for all $0\leq t_{1} , t_{2} , t_{3}$. Then, by (\ref{mean1}), we have \begin{equation}\label{CM03}
\lim_{i\rightarrow \infty }\|S(t_{0}, t_{i})\|_{L^{1}(M\setminus \Sigma , \omega)}\rightarrow +\infty , \end{equation}
and \begin{equation}\label{CM01} \begin{split}
&\|S(t_{0}, t)\|_{L^{\infty}(M\setminus \Sigma )}\leq r\|S(1, t)\|_{L^{\infty}(M\setminus \Sigma )}+r \|S(t_{0}, 1)\|_{L^{\infty}(M\setminus \Sigma )}\\
\leq &\ r^{2}C_{3}(\|S(t_{0}, t)\|_{L^{1}}+\|S(t_{0}, 1)\|_{L^{1}})+r\|S(t_{0}, 1)\|_{L^{\infty}(M\setminus \Sigma )}+rC_{4}\\
\end{split} \end{equation}
for all $0<t_{0}\leq t$, where $C_{3}$ and $C_{4}$ are uniform constants depending only on $r$, $C_{K}$ and $C_{G}$.
Set $u_{i}(t_{0})=\|S(t_{0}, t_{i})\|_{L^{1}}^{-1}S(t_{0}, t_{i})\in S_{H(t_{0})}(\mathcal{E}|_{M\setminus \Sigma })$, where $S_{H(t_{0})}(\mathcal{E}|_{M\setminus \Sigma })=\{\eta \in \Omega^{0}(M\setminus \Sigma , \mathrm{End}(\mathcal{E}|_{M\setminus \Sigma }))| \quad \eta ^{\ast H(t_{0})}=\eta \}$, then $\|u_{i}(t_{0})\|_{L^{1}}=1$. By (\ref{1}) and (\ref{for4.5}), we have \begin{equation} \int_{M\setminus\Sigma}\mbox{\rm tr\,} S(t_{0}, t_i) \frac{\omega^{n}}{n!}=0, \end{equation} so \begin{equation}\int_{M\setminus\Sigma}\mbox{\rm tr\,} u_{i}(t_{0}) \frac{\omega^{n}}{n!}=0.\end{equation}
By the inequalities (\ref{semi01}), (\ref{CM02}), (\ref{CM01}), and the Lemma 5.4 in \cite{Si}, we can see that, by choosing a subsequence which we also denote by $u_i(t_0)$, we have $u_{i}(t_{0})\rightarrow u_{\infty}(t_{0})$ weakly in $L_{1}^{2}$, where the limit $u_{\infty}(t_{0})$ satisfies: $\|u_{\infty}(t_{0})\|_{L^{1}}=1$, $\int_{M}\mbox{\rm tr\,}(u_{\infty}(t_{0}))\frac{\omega^{n} }{n!}=0$ and \begin{equation}\label{t01}
\|u_{\infty}(t_{0})\|_{L^{\infty}}\leq r^{2}C_{3}.
\end{equation} Furthermore, if $\Upsilon : R\times R \rightarrow R$ is a positive smooth function such that $\Upsilon (\lambda_{1}, \lambda_{2})< (\lambda_{1}- \lambda_{2})^{-1}$ whenever $\lambda_{1}>\lambda_{2}$, then
\begin{equation}\label{t02}
\begin{split}
&\int_{M\setminus \Sigma}\mbox{\rm tr\,} (u_{\infty}(t_{0})\sqrt{-1}\Lambda_{\omega }(F_{H(t_{0}), \phi })) + \langle \Upsilon (u_{\infty}(t_{0}))(\overline{\partial }_{\phi }u_{\infty}(t_{0})), \overline{\partial }_{\phi }u_{\infty}(t_{0}) \rangle_{H(t_{0})}\frac{\omega^{n} }{n!}\\&\leq -r^{-\frac{1}{2}}\frac{C^\ast}{\hat{C}_{1}}.\\
\end{split}
\end{equation}
Since $ \|u_{\infty}(t_{0})\|_{L^{\infty}}$ and $\|\Lambda_{\omega }(F_{H(t_{0}), \phi })\|_{L^{1}}$ are uniformly bounded (independent of $t_{0}$), (\ref{t02}) implies that: there exists a uniform constant $\check{C}$ independent of $t_{0}$ such that \begin{equation}\label{t03}
\int_{M\setminus \Sigma } |\overline{\partial }_{\phi}u_{\infty}(t_{0})|_{H(t_{0})}^{2}\frac{\omega^{n} }{n!}\leq \check{C}.
\end{equation}
From Lemma \ref{lem 2.2}, we see that $\hat{H}$ and $H(t_{0})$ are locally mutually bounded. By choosing a subsequence, we have $u_{\infty}(t_{0}) \rightarrow u_{\infty}$ weakly in local $L_{1}^{2}$ outside $\Sigma $ as $t_{0}\rightarrow 0$, where $u_{\infty}$ satisfies
\begin{equation}\int_{M}\mbox{\rm tr\,}(u_{\infty})\frac{\omega^{n} }{n!}=0, \quad \text{and} \quad \|u_{\infty}\|_{L^{1}}=1.\end{equation}
Since $|\sqrt{-1} \Lambda_{\omega_\epsilon}F_{H_\epsilon(t), \phi }|_{H_\epsilon(t)}\in L^{\infty}$ for $t>0$, by the uniform upper bound of the heat kernels (\ref{kernel01}), we have \begin{equation}\label{FL1} \begin{split}
&\int_{B_{\omega_1}(\delta)\setminus \Sigma}|\sqrt{-1} \Lambda_{\omega}F_{H(t), \phi }|_{H(t)}\frac{\omega^n}{n!}\\
=&\lim_{\epsilon\rightarrow 0}\int_{B_{\omega_1}(\delta)}|\sqrt{-1} \Lambda_{\omega_\epsilon}F_{H_\epsilon(t), \phi }|_{H_\epsilon(t)}\frac{\omega_\epsilon^n}{n!}\\
\leq &\lim_{\epsilon\rightarrow 0}\int_{B_{\omega_1}(\delta)}\int_{\tilde{M}}K_{\epsilon}(x, y, t)|\sqrt{-1} \Lambda_{\omega_\epsilon}F_{\hat{H}, \phi }|_{\hat{H}}(y)\frac{\omega_\epsilon^n(y)}{n!}\cdot\frac{\omega_\epsilon^n(x)}{n!}\\
=&\lim_{\epsilon\rightarrow 0}\int_{B_{\omega_1}(\delta)}\Big(\big(\int_{B_{\omega_1}(2\delta)}+\int_{\tilde{M}\setminus B_{\omega_1}(2\delta)}\big)K_{\epsilon}(x, y, t)|\sqrt{-1} \Lambda_{\omega_\epsilon}F_{\hat{H}, \phi}|_{\hat{H}}(y)\frac{\omega_\epsilon^n(y)}{n!}\Big)\frac{\omega_\epsilon^n(x)}{n!}\\
\leq &\lim_{\epsilon\rightarrow 0}\int_{\tilde{M}}\int_{B_{\omega_1}(2\delta)}K_{\epsilon}(x, y, t)|\sqrt{-1} \Lambda_{\omega_\epsilon}F_{\hat{H}, \phi}|_{\hat{H}}(y)\frac{\omega_\epsilon^n(y)}{n!}\cdot\frac{\omega_\epsilon^n(x)}{n!}\\
&+\int_{B_{\omega_1}(\delta)}\Big(\int_{\tilde{M}\setminus B_{\omega_1}(2\delta)}C_K(\tau)t^{-n}\exp\big(-\frac{d_{\omega_\epsilon}(x, y)}{(4+\tau)t}\big)|\sqrt{-1} \Lambda_{\omega_\epsilon}F_{\hat{H}, \phi}|_{\hat{H}}(y)\frac{\omega_\epsilon^n(y)}{n!}\Big)\frac{\omega_\epsilon^n(x)}{n!}\\
\leq &\int_{B_{\omega_1}(2\delta)\setminus \Sigma}|\sqrt{-1} \Lambda_{\omega}F_{\hat{H}, \phi}|_{\hat{H}}\frac{\omega^n}{n!}\\
&+C_K(\tau)t^{-n}\exp\big(-\frac{a(\delta)}{(4+\tau)t}\big)\mathrm{Vol}_{\omega_1}(B_{\omega_1}(\delta))\int_{M}|\sqrt{-1} \Lambda_{\omega}F_{\hat{H}, \phi}|_{\hat{H}}\frac{\omega^n}{n!}.\\ \end{split} \end{equation}
By (\ref{FL1}) and the uniform bound of $\|u_{\infty}(t_{0})\|_{L^{\infty}}$, we have \begin{equation}\label{t04}
\lim_{t_{0}\rightarrow 0 }\int_{M}\mbox{\rm tr\,} (u_{\infty}(t_{0})\sqrt{-1}\Lambda_{\omega }F_{H(t_{0}), \phi }) \frac{\omega^{n} }{n!}= \int_{M}\mbox{\rm tr\,} (u_{\infty}\sqrt{-1}\Lambda_{\omega }F_{\hat{H}, \phi}) \frac{\omega^{n} }{n!}.
\end{equation}
Let's denote \begin{eqnarray}\label{7.2}
S_{\hat{H}}(\mathcal{E}|_{M\setminus \Sigma })=\{\eta \in \Omega^{0}(M\setminus \Sigma , \mathrm{End}(\mathcal{E}|_{M\setminus \Sigma }))| \quad \eta ^{\ast \hat{H}}=\eta \}. \end{eqnarray} and
\begin{equation}\hat{u}_{\infty}(t_{0})=(h(t_{0}))^{\frac{1}{2}}\cdot u_{\infty}(t_{0}) \cdot (h(t_{0}))^{-\frac{1}{2}}.\end{equation} It is easy to check that: $\hat{u}_{\infty}(t_{0})\in S_{\hat{H}}(\mathcal{E}|_{M\setminus \Sigma })$ and $|\hat{u}_{\infty}(t_{0})|_{\hat{H}}=|u_{\infty}(t_{0})|_{H(t_{0})}$. Furthermore, we have:
\begin{lemma}\label{lem 4.2} For any compact domain $\Omega \subset M\setminus \Sigma$ and any positive smooth function $\Upsilon : R\times R \rightarrow R$, we have \begin{equation}\label{t05}
\lim_{t_{0}\rightarrow 0}\int_{\Omega} |\langle\Upsilon (u_{\infty}(t_{0}))(\overline{\partial }_{\phi}u_{\infty}(t_{0})), \overline{\partial }_{\phi}u_{\infty}(t_{0})\rangle_{H(t_{0})}-\langle\Upsilon (\hat{u}_{\infty}(t_{0}))(\overline{\partial }_{\phi}\hat{u}_{\infty}(t_{0})), \overline{\partial }_{\phi}\hat{u}_{\infty}(t_{0})\rangle_{\hat{H}}|\frac{\omega^{n} }{n!}=0. \end{equation} \end{lemma}
{\bf Proof. } At each point $x$ on $\Omega $, we choose a unitary basis $\{e_{i}\}_{i=1}^{r}$ with respect to the metric $H(t_{0})$, such that $u_{\infty}(t_{0}) (e_{i})= \lambda_{i} e_{i}$. Then, $\{\hat{e}_{i}=(h(t_{0}))^{\frac{1}{2}}e_{i}\}$ is a unitary basis with respect to the metric $\hat{H}$ and $\hat{u}_{\infty}(t_{0}) (\hat{e}_{i})= \lambda_{i} \hat{e}_{i}$. Set:
\begin{equation} \overline{\partial }_{\phi}u_{\infty}(t_{0})(e_{i})=(\overline{\partial }_{\phi}u_{\infty}(t_{0}))_{i}^{j}e_{j}, \quad \overline{\partial }_{\phi}\hat{u}_{\infty}(t_{0})(\hat{e}_{i})=(\overline{\partial }_{\phi}\hat{u}_{\infty}(t_{0}))_{i}^{j}\hat{e}_{j}, \end{equation} then \begin{equation}
|\overline{\partial }_{\phi}u_{\infty}(t_{0})|_{H(t_{0}), \omega}^{2}=\sum_{i, j=1}^{r}\langle(\overline{\partial }_{\phi}u_{\infty}(t_{0}))_{i}^{j}, (\overline{\partial }_{\phi}u_{\infty}(t_{0}))_{i}^{j}\rangle_{\omega}, \end{equation} \begin{equation}\label{z1}
\langle\Upsilon (u_{\infty}(t_{0}))(\overline{\partial }_{\phi}u_{\infty}(t_{0})), \overline{\partial }_{\phi}u_{\infty}(t_{0})\rangle_{H(t_{0})} =\sum_{i, j=1}^{r}\langle\Upsilon (\lambda_{i}, \lambda_{j})(\overline{\partial }_{\phi}u_{\infty}(t_{0}))_{i}^{j}, (\overline{\partial }_{\phi}u_{\infty}(t_{0}))_{i}^{j}\rangle_{\omega}, \end{equation} \begin{equation} \Upsilon (\hat{u}_{\infty}(t_{0}))(\overline{\partial }_{\phi}\hat{u}_{\infty}(t_{0}))(\hat{e}_{i})=\sum_{j=1}^{r}\Upsilon (\lambda_{i}, \lambda_{j})(\overline{\partial }_{\phi}\hat{u}_{\infty}(t_{0}))_{i}^{j}\hat{e}_{j}, \end{equation} and \begin{equation}\label{z2}
\langle\Upsilon (\hat{u}_{\infty}(t_{0}))(\overline{\partial }_{\phi}\hat{u}_{\infty}(t_{0})), \overline{\partial }_{\phi}\hat{u}_{\infty}(t_{0})\rangle_{\hat{H}} =\sum_{i, j=1}^{r}\langle\Upsilon (\lambda_{i}, \lambda_{j})(\overline{\partial }_{\phi}\hat{u}_{\infty}(t_{0}))_{i}^{j}, (\overline{\partial }_{\phi}\hat{u}_{\infty}(t_{0}))_{i}^{j}\rangle_{\omega}. \end{equation} By the definition, we have \begin{equation} \begin{split} \overline{\partial }_{\phi}\hat{u}_{\infty}(t_{0})=&\ (h(t_{0}))^{\frac{1}{2}}\circ \overline{\partial }_{\phi}u_{\infty}(t_{0}) \circ (h(t_{0}))^{-\frac{1}{2}} +\overline{\partial }_{\phi}(h(t_{0}))^{\frac{1}{2}} \circ u_{\infty}(t_{0}) \circ (h(t_{0}))^{-\frac{1}{2}}\\ & -(h(t_{0}))^{\frac{1}{2}} \circ u_{\infty}(t_{0}) \circ (h(t_{0}))^{-\frac{1}{2}}\circ \overline{\partial }_{\phi}(h(t_{0}))^{\frac{1}{2}}\circ (h(t_{0}))^{-\frac{1}{2}}\\ =&\ (h(t_{0}))^{\frac{1}{2}}\circ \overline{\partial }_{\phi}u_{\infty}(t_{0}) \circ (h(t_{0}))^{-\frac{1}{2}} +\overline{\partial }_{\phi}(h(t_{0}))^{\frac{1}{2}} \circ (h(t_{0}))^{-\frac{1}{2}} \hat{u}_{\infty}(t_{0})\\ & -\hat{u}_{\infty}(t_{0})\circ \overline{\partial }_{\phi}(h(t_{0}))^{\frac{1}{2}}\circ (h(t_{0}))^{-\frac{1}{2}},\\ \end{split} \end{equation} and \begin{equation}\label{z3} (\overline{\partial }_{\phi}\hat{u}_{\infty}(t_{0}))_{i}^{j} =(\overline{\partial }_{\phi}u_{\infty}(t_{0}))_{i}^{j} +(\lambda_{i}-\lambda_{j})\{\overline{\partial }_{\phi}(h(t_{0})^{\frac{1}{2}} \circ (h(t_{0}))^{-\frac{1}{2}}\}_{i}^{j}, \end{equation} where $\overline{\partial }_{\phi}(h(t_{0})^{\frac{1}{2}}\circ (h(t_{0})^{-\frac{1}{2}})(\hat{e}_{i})=(\overline{\partial }_{\phi}(h(t_{0})^{\frac{1}{2}} \circ (h(t_{0})^{-\frac{1}{2}})_{i}^{j}\hat{e}_{j}$. By (\ref{t01}), (\ref{z1}), (\ref{z2}) and (\ref{z3}), we have \begin{equation}\label{z4} \begin{split}
& |\langle\Upsilon (\hat{u}_{\infty}(t_{0}))(\overline{\partial }_{\phi}\hat{u}_{\infty}(t_{0})), \overline{\partial }_{\phi}\hat{u}_{\infty}(t_{0})\rangle_{\hat{H}}-\langle\Upsilon (u_{\infty}(t_{0}))(\overline{\partial }_{\phi}u_{\infty}(t_{0})), \overline{\partial }_{\phi}u_{\infty}(t_{0})\rangle_{H(t_{0})}|\\
&\leq 8(r^{2}C_{3})^{2}B^{\ast}(\Upsilon) (|\overline{\partial }_{\phi}u_{\infty}(t_{0})|_{H(t_{0})}|\overline{\partial }_{\phi}(h(t_{0}))^{\frac{1}{2}} \circ (h(t_{0}))^{-\frac{1}{2}}|_{\hat{H}}+|\overline{\partial }_{\phi}(h(t_{0}))^{\frac{1}{2}} \circ (h(t_{0}))^{-\frac{1}{2}}|_{\hat{H}}^{2}),\\ \end{split} \end{equation} where $B^{\ast}(\Upsilon)=\max_{[-r^{2}C_{3}, r^{2}C_{3}]^{2}}\Upsilon$. Since $H(t)$ is smooth on $(M\setminus \Sigma) \times [0, 1]$ and $h(t)\rightarrow \mathrm{Id}_{\mathcal{E}}$ locally in $C^{\infty}$-topology as $t\rightarrow 0$, it is easy to check that \begin{equation}\label{z5}
\sup_{x\in \Omega}(|(h(t_{0}))^{-\frac{1}{2}}\overline{\partial}_{\phi }(h(t_{0}))^{\frac{1}{2}}|_{\hat{H}, \omega}+|\overline{\partial}_{\phi }(h(t_{0}))^{\frac{1}{2}}(h(t_{0}))^{-\frac{1}{2}}|_{\hat{H}, \omega})\leq C_{\Omega}(t_{0}), \end{equation}
where $C_{\Omega}(t_{0})\rightarrow 0$ as $t_{0}\rightarrow 0$. On the other hand, $|\overline{\partial }_{\phi }u_{\infty}(t_{0})|_{H(t_{0}), \omega}$ is uniformly bounded in $L^{2}$, so (\ref{z4}) and (\ref{z5}) imply (\ref{t05}).
$\Box$ \\
By (\ref{t02}), (\ref{t04}) and (\ref{t05}), we have that given any compact domain $\Omega \subset M\setminus \Sigma$ and any positive number $\tilde{\epsilon}>0$, \begin{equation}
\int_{M\setminus\Sigma}\mbox{\rm tr\,} (u_{\infty}\sqrt{-1}\Lambda_{\omega }F_{\hat{H}, \phi })\frac{\omega^{n} }{n!}+\int_{\Omega} \langle\Upsilon (\hat{u}_{\infty}(t_{0}))(\overline{\partial }_{\phi }\hat{u}_{\infty}(t_{0})), \overline{\partial }_{\phi }\hat{u}_{\infty}(t_{0})\rangle_{\hat{H}}\frac{\omega^{n} }{n!}\leq -r^{-\frac{1}{2}}\frac{C^\ast}{\hat{C}_{1}}+\tilde{\epsilon}
\end{equation}
for small $t_{0}$. We know that $\hat{u}_{\infty}(t_{0}) \rightarrow u_{\infty}$ in $L^{2}(\Omega )$, that $|\hat{u}_{\infty}(t_{0})|_{\hat{H}}$ is uniformly bounded in $L^{\infty}$, and that $|\overline{\partial }_{\phi}\hat{u}_{\infty}(t_{0})|_{\hat{H}, \omega}$ is uniformly bounded in $L^{2}(\Omega )$. By the same argument as that in Simpson's paper (Lemma 5.4 in \cite{Si}), we have \begin{equation}\label{t07}
\int_{M\setminus\Sigma}\mbox{\rm tr\,} (u_{\infty}\sqrt{-1}\Lambda_{\omega }F_{\hat{H}, \phi })\frac{\omega^{n} }{n!}+\|\Upsilon^{\frac{1}{2}} (u_{\infty})(\overline{\partial }_{\phi}u_{\infty})\|_{L^{q}(\Omega)}^{2}\leq -r^{-\frac{1}{2}}\frac{C^\ast}{\hat{C}_{1}}+2\tilde{\epsilon}
\end{equation} for any $q<2$ and any $\tilde{\epsilon}$. Since $\tilde{\epsilon}$, $q<2$ and $\Omega$ are arbitrary, we get \begin{equation}\label{semi02}
\int_{M\setminus\Sigma}\mbox{\rm tr\,} (u_{\infty}\sqrt{-1}\Lambda_{\omega }F_{\hat{H}, \phi}) + \langle\Upsilon (u_{\infty})(\overline{\partial }_{\phi}u_{\infty}), \overline{\partial }_{\phi}u_{\infty}\rangle_{\hat{H}}\frac{\omega^{n} }{n!}\leq -r^{-\frac{1}{2}}\frac{C^\ast}{\hat{C}_{1}}.
\end{equation}
By the above inequality and Lemma 5.5 in \cite{Si}, we can see that the eigenvalues of $u_{\infty}$ are constant almost everywhere. Let $\lambda_{1} < \dots <\lambda _{l}$ denote the distinct eigenvalues of $u_{\infty}$. Since $\int_{M} \mbox{\rm tr\,} u_{\infty} \frac{\omega^{n} }{n!}=0$ and $\|u_{\infty}\|_{L^{1}}=1$, we must have $l\geq 2$. For any $1\leq \alpha <l $, define the function $P_{\alpha } : R\rightarrow R$ such that \begin{eqnarray} P_{\alpha }=\left \{\begin{array}{cll} 1, & x\leq \lambda_{\alpha },\\ 0,
& x\geq \lambda_{\alpha +1}.\\ \end{array}\right. \end{eqnarray} Set $\pi_{\alpha }=P_{\alpha } (u_{\infty})$. Simpson (p.~887 in \cite{Si}) proved that:
(1) $\pi_{\alpha } \in L_{1}^{2}(M\setminus \Sigma , \omega , \hat{H})$;
(2) $\pi_{\alpha }^{2}=\pi_{\alpha }=\pi_{\alpha }^{\ast \hat{H}}$;
(3) $(\mathrm{Id}_{\mathcal{E}} -\pi_{\alpha }) \bar{\partial }\pi_{\alpha } =0$;
(4) $(\mathrm{Id}_{\mathcal{E}} -\pi_{\alpha }) [\phi , \pi_{\alpha }]=0$.
By Uhlenbeck and Yau's regularity result for $L_{1}^{2}$-subbundles (\cite{UY}), $\pi_{\alpha }$ represents a saturated coherent Higgs sub-sheaf $E_{\alpha }$ of $(\mathcal{E}, \phi )$ on the open set $M\setminus \Sigma $. Since the singularity set $\Sigma $ has codimension at least $3$, by Siu's extension theorem (\cite{Siu1}), we know that $E_{\alpha }$ admits a coherent analytic extension $\tilde{E}_{\alpha }$. By Serre's result (\cite{Se}), we get that the direct image $i_{\ast }E_{\alpha }$ under the inclusion $i: M\setminus \Sigma \rightarrow M$ is coherent. So, every $E_{\alpha }$ can be extended to the whole of $M$ as a saturated coherent Higgs sub-sheaf of $(\mathcal{E}, \phi )$, which will also be denoted by $E_{\alpha }$ for simplicity. By the Chern-Weil formula (\ref{CW1}) (Proposition 4.1 in \cite{Br}) and the above condition (4), we have \begin{equation} \begin{split}
\deg_{\omega} (E_{\alpha })&=\int_{M\setminus \Sigma } \mbox{\rm tr\,} (\pi_{\alpha } \sqrt{-1}\Lambda _{\omega }F_{\hat{H} }) -|\overline{\partial } \pi_{\alpha }|_{\hat{H}, \omega}^{2} \frac{\omega^{n}}{n!}\\
&=\int_{M\setminus \Sigma } \mbox{\rm tr\,} (\pi_{\alpha } \sqrt{-1}\Lambda _{\omega }F_{\hat{H}, \phi }) -|D''_{\phi } \pi_{\alpha }|_{K, \omega}^{2} \frac{\omega^{n}}{n!}. \end{split} \end{equation} Set \begin{eqnarray} \nu =\lambda _{l} \deg_\omega (\mathcal{E}) -\sum_{\alpha =1} ^{l-1} (\lambda_{\alpha +1 } -\lambda_{\alpha }) \deg_\omega (E_{\alpha}). \end{eqnarray} Since $u_{\infty }=\lambda _{l} \mathrm{Id}_{\mathcal{E}} -\sum_{\alpha =1} ^{l-1} (\lambda_{\alpha +1} -\lambda_{\alpha })\pi_{\alpha }$ and $\int_{M\setminus \Sigma }\mbox{\rm tr\,} u_{\infty }\frac{\omega^{n}}{n!}=0$, we have \begin{eqnarray} \lambda _{l} \mathrm{rank} (\mathcal{E}) -\sum_{\alpha =1} ^{l-1}(\lambda_{\alpha +1} -\lambda_{\alpha }) \mathrm{rank} (E_{\alpha })=0, \end{eqnarray} then \begin{eqnarray}\label{3} \nu =\sum_{\alpha =1} ^{l-1} (\lambda_{\alpha +1} -\lambda_{\alpha }) \mathrm{rank} (E_{\alpha }) (\frac{\deg_\omega (\mathcal{E})}{\mathrm{rank} (\mathcal{E})}-\frac{\deg_\omega (E_{\alpha })}{\mathrm{rank} (E_{\alpha })}). \end{eqnarray}
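For the reader's convenience, the identity $u_{\infty }=\lambda _{l} \mathrm{Id}_{\mathcal{E}} -\sum_{\alpha =1} ^{l-1} (\lambda_{\alpha +1} -\lambda_{\alpha })\pi_{\alpha }$ used above can be checked on eigenspaces: $\pi_{\alpha }=P_{\alpha }(u_{\infty })$ acts as the identity on the eigenspaces of $u_{\infty }$ with eigenvalue $\lambda_{\beta }$, $\beta \leq \alpha $, and as zero on the others, so on the $\lambda_{\beta }$-eigenspace we have \begin{equation} \lambda _{l}-\sum_{\alpha =1}^{l-1}(\lambda_{\alpha +1}-\lambda_{\alpha })P_{\alpha }(\lambda_{\beta })=\lambda _{l}-\sum_{\alpha =\beta }^{l-1}(\lambda_{\alpha +1}-\lambda_{\alpha })=\lambda_{\beta }. \end{equation}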
By an argument similar to the one used in Simpson's paper (p.~888 in \cite{Si}) and the inequality (\ref{semi02}), we have \begin{equation} \begin{split} \nu =& \int_{M}\mbox{\rm tr\,} (u_{\infty }\sqrt{-1}\Lambda _{\omega }F_{\hat{H}, \phi })\\ & +\langle \sum_{\alpha =1} ^{l-1} (\lambda_{\alpha +1}-\lambda_{\alpha })(dP_{\alpha })^{2}(u_{\infty }) (D''_{\phi} u_{\infty}) , D''_{\phi} u_{\infty}\rangle _{\hat{H}}\\ \leq& -r^{-\frac{1}{2}}\frac{C^\ast}{\hat{C}_{1}}.\\ \end{split} \end{equation} On the other hand, (\ref{3}) and the semi-stability imply $\nu \geq 0$, so we get a contradiction.
$\Box$ \\
{\bf Proof of Theorem \ref{thm 1.1}}\ By (\ref{H00001}), we have \begin{equation}
\sup_{x\in M\setminus \Sigma}|\sqrt{-1} \Lambda_\omega (F_{H(t+1), \phi })-\lambda\mathrm{Id}_{\mathcal{E}}|_{H(t+1)}^2(x)\leq C_{K}\int_{M\setminus \Sigma}|\sqrt{-1} \Lambda_\omega (F_{H(t), \phi})-\lambda\mathrm{Id}_{\mathcal{E}}|_{H(t)}^2\frac{\omega^n}{n!}. \end{equation} If the reflexive Higgs sheaf $(\mathcal{E}, \phi )$ is $\omega $-semi-stable, (\ref{semi03}) implies \begin{equation}
\sup_{x\in M\setminus \Sigma}|\sqrt{-1} \Lambda_\omega (F_{H(t), \phi })-\lambda\mathrm{Id}_{\mathcal{E}}|_{H(t)}^2(x)\rightarrow 0, \end{equation} as $t\rightarrow +\infty$. By Corollary \ref{coro 3.5}, we know that every $H(t)$ is an admissible Hermitian metric. Then we get an approximate Hermitian-Einstein structure on a semi-stable reflexive Higgs sheaf.
By choosing a subsequence $\epsilon \rightarrow 0$, we have $H_{\epsilon}(t)$ converge to $H(t)$ in local $C^{\infty}$-topology. Applying Fatou's lemma we obtain \begin{equation} \begin{split} &4\pi^{2}\int_{M} (2c_{2}(\mathcal{E})-\frac{r-1}{r}c_{1}(\mathcal{E})\wedge c_{1}(\mathcal{E}))\wedge\frac{\omega^{n-2}}{(n-2)!}\\ =&\lim_{\epsilon \rightarrow 0}4\pi^{2}\int_{\tilde{M}} (2c_{2}(E)-\frac{r-1}{r}c_{1}(E)\wedge c_{1}(E))\wedge\frac{\omega_{\epsilon}^{n-2}}{(n-2)!}\\ =&\lim_{\epsilon \rightarrow 0}\int_{\tilde{M}}\mbox{\rm tr\,} (F_{H_{\epsilon}(t), \phi }^{\bot}\wedge F_{H_{\epsilon}(t), \phi }^{\bot})\wedge \frac{\omega_{\epsilon }^{n-2}}{(n-2)!}\\
=&\lim_{\epsilon \rightarrow 0}\int_{\tilde{M}}(|F_{H_{\epsilon}(t), \phi }^{\bot }|_{H_{\epsilon }(t), \omega_\epsilon}^{2}-|\Lambda_{\omega_{\epsilon}} F_{H_{\epsilon}(t), \phi }^{\bot}|_{H_{\epsilon }(t)}^{2}) \frac{\omega_{\epsilon}^{n}}{n!}\\
\geq& \int_{M\setminus \Sigma}|F_{H(t), \phi }^{\bot }|_{H(t), \omega}^{2}\frac{\omega^{n}}{n!}\\ & -\int_{M\setminus \Sigma}|\sqrt{-1}\Lambda_{\omega} F_{H(t), \phi}-\lambda \mathrm{Id}_{\mathcal{E}} -\frac{1}{r} \mbox{\rm tr\,} (\sqrt{-1}\Lambda_{\omega} F_{H(t), \phi }-\lambda \mathrm{Id}_{\mathcal{E}})\mathrm{Id}_{\mathcal{E}}|_{H(t)}^{2} \frac{\omega^{n}}{n!}\\ \end{split} \end{equation} for $t>0$, where $F_{H, \phi }^{\bot}$ is the trace free part of $F_{H, \phi }$. Let $t\rightarrow +\infty$, then (\ref{semi03}) implies the following Bogomolov type inequality \begin{equation}\label{Bog} \int_{M} (2c_{2}(\mathcal{E})-\frac{r-1}{r}c_{1}(\mathcal{E})\wedge c_{1}(\mathcal{E}))\wedge\frac{\omega^{n-2}}{(n-2)!}\geq 0. \end{equation}
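Note that the integrand subtracted in the last line above is just $|\sqrt{-1}\Lambda_{\omega} F_{H(t), \phi}^{\bot}|_{H(t)}^{2}$, by the elementary identity \begin{equation} \sqrt{-1}\Lambda_{\omega} F_{H(t), \phi}^{\bot}=\big(\sqrt{-1}\Lambda_{\omega} F_{H(t), \phi}-\lambda \mathrm{Id}_{\mathcal{E}}\big)-\frac{1}{r} \mbox{\rm tr\,} \big(\sqrt{-1}\Lambda_{\omega} F_{H(t), \phi }-\lambda \mathrm{Id}_{\mathcal{E}}\big)\mathrm{Id}_{\mathcal{E}}; \end{equation} since the trace-free projection does not increase the norm, (\ref{semi03}) indeed forces this integral to tend to zero as $t\rightarrow +\infty$, which is how (\ref{Bog}) follows.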
Now we prove that the existence of an approximate Hermitian-Einstein structure implies the semistability of $(\mathcal{E}, \phi )$. Let $s$ be a $\theta $-invariant holomorphic section of a reflexive Higgs sheaf $(\mathcal{G}, \theta )$ on a compact K\"ahler manifold $(M, \omega )$, i.e. there exists a holomorphic $1$-form $\eta $ on $M\setminus \Sigma_{\mathcal{G}}$ such that $\theta (s)=\eta \otimes s$, where $\Sigma_{\mathcal{G}}$ is the singularity set of $\mathcal{G}$. Given a Hermitian metric $H$ on $\mathcal{G}$, by computing, we have \begin{equation}\label{W01} \begin{split}
&\sqrt{-1}\Lambda_{\omega } \langle s, -[\theta , \theta^{\ast H}]s\rangle_{H}\\ = & -\sqrt{-1}\Lambda_{\omega } \langle\theta^{\ast H}s, \theta^{\ast H}s\rangle_{H}-\sqrt{-1}\Lambda_{\omega } \langle\theta s, \theta s\rangle_{H}\\
= & -\sqrt{-1}\Lambda_{\omega } \langle\theta^{\ast H}s- \langle\theta^{\ast H}s, s\rangle_{H}\frac{s}{|s|_{H}^{2}}, \theta^{\ast H}s - \langle\theta^{\ast H}s, s\rangle_{H}\frac{s}{|s|_{H}^{2}}\rangle_{H}\\
& -\sqrt{-1}\Lambda_{\omega } \langle \langle\theta^{\ast H}s, s\rangle_{H}\frac{s}{|s|_{H}^{2}}, \langle\theta^{\ast H}s, s\rangle_{H}\frac{s}{|s|_{H}^{2}}\rangle_{H}-\sqrt{-1}\Lambda_{\omega } \langle\theta s, \theta s\rangle_{H}\\
= &\ |\theta^{\ast H}s- \langle\theta^{\ast H}s, s\rangle_{H}\frac{s}{|s|_{H}^{2}}|_{H, \omega}^{2}\geq 0,\\ \end{split} \end{equation} where we have used $\theta (s)=\eta \otimes s$ in the third equality. Then, we have the following Weitzenb\"ock formula \begin{equation} \begin{split}
\frac{1}{2}\Delta_{\omega } |s|_{H}^{2}=&\ \sqrt{-1}\Lambda_{\omega }\partial \overline{\partial}|s|_{H}^{2}\\
= &\ |D_{H}^{1, 0}s|_{H, \omega}^{2}+\sqrt{-1}\Lambda_{\omega }\langle s, F_{H}s\rangle_{H} \\
= &\ |D_{H}^{1, 0}s|_{H, \omega}^{2}-\langle s, \sqrt{-1}\Lambda_{\omega }F_{H, \theta }s\rangle_{H}-\sqrt{-1}\Lambda_{\omega } \langle s, [\theta , \theta^{\ast H}]s\rangle_{H} \\
\geq &\ |D_{H}^{1, 0}s|_{H, \omega}^{2}-\langle s, \sqrt{-1}\Lambda_{\omega }F_{H, \theta }s\rangle_{H} \end{split} \end{equation} on $M\setminus \Sigma_{\mathcal{G}}$.
We suppose that the reflexive Higgs sheaf $(\mathcal{G}, \theta )$ admits an approximate admissible Hermitian-Einstein structure, i.e. for every positive $\delta $, there is an admissible Hermitian metric $H_{\delta}$ such that \begin{equation}
\sup _{x\in M\setminus \Sigma_{\mathcal{G}} } |\sqrt{-1}\Lambda_{\omega }F_{H_{\delta}, \theta }-\lambda(\mathcal{G}) \mathrm{Id}|_{H_{\delta}}(x)<\delta . \end{equation} If $\deg_{\omega}\mathcal{G}$ is negative, i.e. $\lambda(\mathcal{G})<0$, by choosing $\delta$ small enough, we have \begin{equation}\label{la}
\Delta_{\omega } |s|_{H_{\delta}}^{2}
\geq 2 |D_{H_{\delta}}^{1, 0}s|_{H_{\delta}, \omega}^{2}-\lambda(\mathcal{G})|s|_{H_{\delta}}^{2} \end{equation}
on $M\setminus \Sigma_{\mathcal{G}}$. Since every $H_{\delta }$ is admissible, by Theorem 2 in \cite{BS}, we know that $|s|_{H_{\delta}}\in L^{\infty}(M)$. Then, the inequality (\ref{la}) can be extended globally to the compact manifold $M$. So, we must have \begin{equation} s\equiv 0. \end{equation}
Assume that $(\mathcal{E}, \phi )$ admits an approximate Hermitian-Einstein structure and $\mathcal{F}$ is a saturated Higgs subsheaf of $(\mathcal{E}, \phi )$ with rank $p$. Let $\mathcal{G}=\wedge^{p}\mathcal{E}\otimes \det(\mathcal{F})^{-1}$, and $\theta $ be a Higgs field naturally induced on $\mathcal{G}$ by the Higgs field $\phi $. One can check that $(\mathcal{G}, \theta )$ is also a reflexive Higgs sheaf which admits an approximate Hermitian-Einstein structure with constant \begin{equation} \lambda(\mathcal{G})=\frac{2p\pi}{\mathrm{Vol}(M, \omega )}(\mu_{\omega}(\mathcal{E})-\mu_{\omega}(\mathcal{F})). \end{equation} The inclusion $\mathcal{F}\hookrightarrow \mathcal{E}$ induces a morphism $\det(\mathcal{F})\rightarrow \wedge^{p}\mathcal{E} $ which can be seen as a nontrivial $\theta$-invariant holomorphic section of $\mathcal{G} $. From the above, we have $\lambda(\mathcal{G})\geq 0$, so the reflexive sheaf $(\mathcal{E}, \phi )$ is $\omega$-semistable.
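For the reader's convenience, we indicate where the value of $\lambda(\mathcal{G})$ comes from, recalling that, with the conventions used here, the Hermitian-Einstein constant of a sheaf is $\frac{2\pi}{\mathrm{Vol}(M, \omega )}$ times its slope. On the locally free locus we have \begin{equation} \deg_{\omega}(\mathcal{G})=\deg_{\omega}(\wedge^{p}\mathcal{E})-\mathrm{rank}(\wedge^{p}\mathcal{E})\deg_{\omega}(\mathcal{F})=\binom{r}{p}\Big(\frac{p}{r}\deg_{\omega}(\mathcal{E})-\deg_{\omega}(\mathcal{F})\Big), \end{equation} so that $\mu_{\omega}(\mathcal{G})=p\,(\mu_{\omega}(\mathcal{E})-\mu_{\omega}(\mathcal{F}))$.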
This completes the proof of Theorem \ref{thm 1.1}.
$\Box$ \\
\hspace{0.4cm}
\section{Limit of $\omega_{\epsilon }$-Hermitian-Einstein metrics } \setcounter{equation}{0}
Assume that the reflexive Higgs sheaf $(\mathcal{E} , \phi )$ is $\omega $-stable. It is well known that the pulled-back Higgs bundle $(E, \phi )$ is $\omega_{\epsilon }$-stable for sufficiently small $\epsilon$. By Simpson's result (\cite{Si}), there exists an $\omega_{\epsilon }$-Hermitian-Einstein metric $H_{\epsilon}$ for every sufficiently small $\epsilon$.
In this section, we prove that, by choosing a subsequence and rescaling it, $H_{\epsilon}$ converges to an $\omega$-Hermitian-Einstein metric $H$ in local $C^{\infty}$-topology outside the exceptional divisor $\tilde{\Sigma }$.
As above, let $\hat{H}$ be a fixed smooth Hermitian metric on the bundle $E$ over $\tilde{M}$. By multiplying $H_{\epsilon}$ by a suitable constant, we can assume that
\begin{equation}\label{det2}
\int_{\tilde{M}}\mbox{\rm tr\,} \hat{S}_{\epsilon}\frac{\omega_{\epsilon}^{n}}{n!}=\int_{\tilde{M}}\log \det(\hat{h}_{\epsilon})\frac{\omega_{\epsilon}^{n}}{n!}=0,
\end{equation}
where $\exp (\hat{S}_{\epsilon})=\hat{h}_{\epsilon}=\hat{H}^{-1}H_{\epsilon }$.
Let $H_{\epsilon } (t)$ be the long time solutions of the heat flow (\ref{DDD1}) on the Higgs bundle $(E, \phi )$ with the fixed initial metric $\hat{H}$ and with respect to the K\"ahler metric $\omega_{\epsilon}$. We set:
\begin{equation}\exp (\tilde{S}_{\epsilon}(t))=\tilde{h}_{\epsilon}(t)=H_{\epsilon}(t)^{-1}H_{\epsilon }.\end{equation}
By (\ref{1}), (\ref{det2}) and noting that $\exp (\hat{S}_{\epsilon})=\exp (S_{\epsilon}(t))\exp (\tilde{S}_{\epsilon}(t))$, we have
\begin{equation}\label{det3}
\int_{\tilde{M}}\mbox{\rm tr\,} \tilde{S}_{\epsilon}(t)\frac{\omega_{\epsilon}^{n}}{n!}=\int_{\tilde{M}}\log \det(\tilde{h}_{\epsilon}(t))\frac{\omega_{\epsilon}^{n}}{n!}=0
\end{equation}
for all $ t\geq 0$. We first give a uniform $L^{1}$ estimate of $\hat{S}_{\epsilon }$.
\begin{lemma}\label{lem 5.1} There exists a constant $\hat{C}$ which is independent of $\epsilon$, such that \begin{equation}\label{L102}
\|\hat{S}_{\epsilon }\|_{L^{1}(\tilde{M}, \omega_{\epsilon}, \hat{H})}:= \int_{\tilde{M}}|\hat{S}_{\epsilon }|_{\hat{H}}\frac{\omega_{\epsilon}^{n}}{n!}\leq \hat{C} \end{equation} for all $0<\epsilon \leq 1$. \end{lemma}
{\bf Proof. } We prove (\ref{L102}) by contradiction. If not, there exists a subsequence $\epsilon_{i}\rightarrow 0$ such that \begin{equation}
\lim_{i\rightarrow \infty} \|\hat{S}_{\epsilon_{i} }\|_{L^{1}(\tilde{M}, \omega_{\epsilon_{i}}, \hat{H})}\rightarrow \infty . \end{equation} By (\ref{c02}), (\ref{C0a}) and (\ref{s01}), we also have \begin{equation}
\lim_{i\rightarrow \infty} \|\tilde{S}_{\epsilon_{i} }(t)\|_{L^{1}(\tilde{M}, \omega_{\epsilon_{i}}, H_{\epsilon_{i}}(t))}\rightarrow \infty , \end{equation} for all $t>0$.
By (\ref{la02}), the uniform lower bound of Green functions $G_{\epsilon}$ (\ref{H005}) and the inequalities (\ref{c02}),
we have \begin{equation}\label{L103}
\|\tilde{S}_{\epsilon }(1)\|_{L^{\infty}(\tilde{M}, H_{\epsilon}(1))}\leq \grave{C}_{1}\|\tilde{S}_{\epsilon }(1)\|_{L^{1}(\tilde{M}, \omega_{\epsilon}, H_{\epsilon}(1))}+\grave{C}_{2}, \end{equation} where $\grave{C}_{1}$ and $\grave{C}_{2}$ are uniform constants independent of $\epsilon$ and $t$. Using the inequality (\ref{s01}) again, we have \begin{equation}\label{CM011} \begin{split}
\|\tilde{S}_{\epsilon }(t )\|_{L^{\infty}(\tilde{M}, H_{\epsilon}(t))}\leq &\ r^{2}\grave{C}_{1}(\|\tilde{S}_{\epsilon }(t)\|_{L^{1}(\tilde{M}, \omega_{\epsilon}, H_{\epsilon}(t))}+\|S_{\epsilon }(t, 1)\|_{L^{1}(\tilde{M}, \omega_{\epsilon}, H_{\epsilon}(1))})\\ & +r\|S_{\epsilon }(t , 1)\|_{L^{\infty}(\tilde{M}, H_{\epsilon}(1))}+r\grave{C}_{2}\\ \end{split} \end{equation}
for all $ t >0$.
Set $\tilde{u}_{i}(t)=\|\tilde{S}_{\epsilon_{i} }(t)\|_{L^{1}(\tilde{M}, \omega_{\epsilon_{i}}, H_{\epsilon_{i}}(t))}^{-1}\tilde{S}_{\epsilon_{i} }(t)$, then $\|\tilde{u}_{i}(t)\|_{L^{1}(\tilde{M}, \omega_{\epsilon_{i}}, H_{\epsilon_{i}}(t))}=1$. By (\ref{det3}) and (\ref{CM011}), we have $\int_{\tilde{M}}\mbox{\rm tr\,} \tilde{u}_{i}(t) \frac{\omega_{\epsilon_{i}}^{n}}{n!}=0$ and $\|\tilde{u}_{i}(t)\|_{L^{\infty}(\tilde{M}, H_{\epsilon_{i}}(t))}\leq C(t)$. Since $H_{\epsilon} (t) \rightarrow H(t)$ locally in $C^{\infty}$-topology and $\omega_{\epsilon }$ are locally uniformly bounded outside $\tilde{\Sigma }$, by Lemma 5.4 in \cite{Si}, we can show that, by choosing a subsequence which we also denote by $\tilde{u}_{i}(t)$, we have $\tilde{u}_{i}(t)\rightarrow \tilde{u}(t)$ weakly in $L_{1, loc}^{2}(\tilde{M}\setminus \tilde{\Sigma }, \omega, H(t) )$, where the limit $\tilde{u}(t)$ satisfies: $\|\tilde{u}(t)\|_{L^{1}(\tilde{M}\setminus \tilde{\Sigma }, \omega, H(t))}=1$, $\int_{\tilde{M}\setminus \tilde{\Sigma }}\mbox{\rm tr\,}(\tilde{u}(t))\frac{\omega^{n} }{n!}=0$. By (\ref{CM011}), we have \begin{equation}\label{tt01}
\|\tilde{u}(t)\|_{L^{\infty}(\tilde{M}\setminus \tilde{\Sigma }, \omega, H(t))}\leq r^{2}\grave{C}_{1}.
\end{equation} Furthermore, if $\Upsilon : R\times R \rightarrow R$ is a positive smooth function such that $\Upsilon (\lambda_{1}, \lambda_{2})< (\lambda_{1}- \lambda_{2})^{-1}$ whenever $\lambda_{1}>\lambda_{2}$, then
\begin{equation}\label{tt02}
\begin{split}
&\int_{\tilde{M}\setminus \tilde{\Sigma }}\mbox{\rm tr\,} (\tilde{u}(t)\sqrt{-1}\Lambda_{\omega }(F_{H(t), \phi })) + \langle\Upsilon (\tilde{u}(t))(\overline{\partial }_{\phi }\tilde{u}(t)), \overline{\partial }_{\phi }\tilde{u}(t)\rangle_{H(t)}\frac{\omega^{n} }{n!}\\&\leq 0.\\
\end{split}
\end{equation}
Since $M\setminus \Sigma $ is biholomorphic to $\tilde{M}\setminus \tilde{\Sigma }$, and $\mathcal{E}$ is locally free on $M\setminus \Sigma $, $\tilde{u}(t)$ can be seen as an $L_{1}^{2}$ section of $\mathrm{End}(\mathcal{E})$. By the same argument as that in section 4 (the proof of (\ref{semi02})), we can show that, by choosing a subsequence $t\rightarrow 0$, we have $\tilde{u}(t) \rightarrow \tilde{u}_{0}$ weakly in local $L_{1}^{2}$, where $\tilde{u}_{0}$ satisfies
\begin{equation}\int_{M}\mbox{\rm tr\,}(\tilde{u}_{0})\frac{\omega^{n} }{n!}=0, \quad \|\tilde{u}_{0}\|_{L^{1}(M\setminus \Sigma , \omega , \hat{H})}=1, \quad \|\tilde{u}_{0}\|_{L^{\infty}(M\setminus \Sigma , \hat{H})}\leq r^{2}\grave{C}_{1},\end{equation}
and \begin{equation}\label{semi022}
\int_{M\setminus \Sigma }\mbox{\rm tr\,} (\tilde{u}_{0}\sqrt{-1}\Lambda_{\omega }F_{\hat{H}, \phi }) + \langle\Upsilon(\tilde{u}_{0})(\overline{\partial }_{\phi}\tilde{u}_{0}), \overline{\partial }_{\phi}\tilde{u}_{0}\rangle_{\hat{H}}\frac{\omega^{n} }{n!}\leq 0.
\end{equation}
Now, by Simpson's trick (p.~888 in \cite{Si}), we can construct a saturated Higgs subsheaf $\mathcal{F}$ of $(\mathcal{E}, \phi )$ with $\mu_{\omega }(\mathcal{F})\geq \mu_{\omega }(\mathcal{E})$, which contradicts the stability of $(\mathcal{E}, \phi )$.
$\Box$ \\
{\bf Proof of Theorem \ref{thm 1.2} } Since $\|\hat{S}_{\epsilon }\|_{L^{1}(\tilde{M}, \omega_{\epsilon}, \hat{H})}$ are uniformly bounded, by (\ref{c02}), (\ref{C0a}) and (\ref{s01}), there also exists a uniform constant $\grave{C}_{3}$ such that \begin{equation}
\|\tilde{S}_{\epsilon }(1)\|_{L^{1}(\tilde{M}, \omega_{\epsilon}, H_{\epsilon}(1))}\leq \grave{C}_{3}. \end{equation} By (\ref{L103}), we have \begin{equation}\label{L104}
\|\tilde{S}_{\epsilon }(1)\|_{L^{\infty}(\tilde{M}, H_{\epsilon}(1))}\leq \grave{C}_{1}\grave{C}_{3}+\grave{C}_{2} \end{equation}
for all $0< \epsilon \leq 1$. By the local estimate (\ref{C01}) in Lemma \ref{lem 2.3}, we see that there exists a constant $\tilde{C}_{0}(\delta^{-1})$ independent of $\epsilon$ such that
\begin{equation}
|\hat{S}_{\epsilon }|_{\hat{H}}(x)\leq \tilde{C}_{0}(\delta^{-1}) \end{equation} for all $x\in \tilde{M}\setminus B_{\omega_1}(\delta)$ and all $0<\epsilon \leq 1$. Since $H_{\epsilon}$ satisfies the $\omega_{\epsilon}$-Hermitian-Einstein equation (\ref{HE}), by the same argument as that in Lemmas \ref{lem 2.4} and \ref{lem 2.5} in section 2, we have uniform higher-order estimates for $\hat{h}_{\epsilon }$, i.e. there exist constants $\tilde{C}_{k}(\delta^{-1})$ independent of $\epsilon$, such that \begin{equation}
\|\hat{h}_{\epsilon}\|_{C^{k+1,\alpha}, \tilde{M}\setminus B_{\omega_1}(2\delta)}\leq \tilde{C}_{k+1}(\delta^{-1}) \end{equation} for all $k\geq 0$ and all $0<\epsilon \leq 1$. So, by choosing a subsequence, we see that $H_{\epsilon} $ converges to a Hermitian metric $H $ on $M\setminus \Sigma $ in local $C^{\infty}$-topology, and $H$ satisfies the Hermitian-Einstein equation, i.e. \begin{equation} \sqrt{-1}\Lambda_{\omega }(F_{H}+[\phi , \phi ^{\ast H}])=\lambda \mathrm{Id}_{\mathcal{E}}. \end{equation}
By (\ref{L104}), we see that the metrics $H(1)$ and $H$ are mutually bounded on $\mathcal{E}|_{M\setminus \Sigma}$. On the other hand, we have shown that $|\phi|_{H(1), \omega}\in L^{\infty}(M)$ in section 3, and hence $|\phi |_{H, \omega}$ also belongs to $L^{\infty}(M)$. This implies that $|\Lambda_{\omega }(F_{H})|_{H}$ is uniformly bounded on $M\setminus \Sigma $. By (\ref{CW22}), it is easy to see that $|F_{H}|_{H, \omega}$ is square integrable. So we know that the metric $H$ is an admissible Hermitian-Einstein metric on the Higgs sheaf $(\mathcal{E}, \phi )$. This completes the proof of Theorem \ref{thm 1.2}.
$\Box$ \\
\hspace{0.3cm}
\end{document} | arXiv |
\begin{document}
\title{Measurement dependent locality} \author{Gilles P\"utz} \affiliation{University of Geneva} \affiliation{ETH Zurich} \author{Nicolas Gisin} \affiliation{University of Geneva} \date{\today}
\begin{abstract} The demonstration and use of Bell-nonlocality, a concept that is fundamentally striking and is at the core of applications in device independent quantum information processing, rely heavily on the assumption of measurement independence, also called the assumption of free choice. The latter cannot be verified or guaranteed. In this paper, we consider a relaxation of the measurement independence assumption. We briefly review the results of \cite{Putz14}, which show that with our relaxation, the set of so-called measurement dependent local (MDL) correlations is a polytope, i.e. it can be fully described using a finite set of linear inequalities. Here we analyze this polytope, first in the simplest case of 2 parties with binary inputs and outputs, for which we give a full characterization. We show that partially entangled states are preferable to the maximally entangled state when dealing with measurement dependence in this scenario. We further present a method which transforms any Bell-inequality into an MDL inequality and give valid inequalities for the case of an arbitrary number of parties as well as one for an arbitrary number of inputs. We introduce the assumption of independent sources in the measurement dependence scenario and give a full analysis for the bipartite scenario with binary inputs and outputs. Finally, we establish a link between measurement dependence and another strong hindrance in certifying nonlocal correlations: nondetection events. \end{abstract}
\maketitle \section{Introduction}
For decades after the advent of quantum mechanics, the question of whether hidden parameters could explain the correlations between entangled particles remained open. John Bell~\cite{Bell1964} finally answered the question in the negative: no local explanation for the correlations can hold. Explaining this fact to physicists even nowadays leads to a surprised raising of the eyebrows as the concept of nonlocality at least at first seems very counterintuitive, and many will try to argue one way or the other that this cannot be true. And while this fact alone makes nonlocality a highly interesting subject to study for our understanding of fundamental physics, it becomes even more important after noticing that it can be used in modern day applications. The fact that nonlocal correlations have properties like inherent randomness~\cite{Colbeck2006} and monogamy of correlations gave rise to the field of device independent quantum information processing~\cite{Barrett05,Brunner:RMP}: certain tasks in information processing can be accomplished with minimal assumptions on the devices used. In fact the user of a device may not need to trust or understand the inner workings of the device itself and still guarantee that the task is completed successfully purely based on the observed correlations. This has found uses for example in cryptography, specifically quantum key distribution~\cite{Ekert91,BarrettKent05,Acin06}, randomness generation, specifically randomness expansion~\cite{Pironio2010} and amplification~\cite{Colbeck2012} and other quantum information processing tasks, like entanglement certification\cite{Bancal11,Barreiro13}.\\
Let us start by restating the precise formulation of locality. Consider the following scenario: two parties, Alice and Bob, separated in their respective labs, each receive a particle from a source situated between them. Then, each of them chooses a measurement from a list of possible measurements, performs this measurement on the particle and records the outcome. They will not use any information about the state of the particle or the precise measurements; instead, they just record the labels of the measurements, henceforth called the inputs, as well as the respective outcomes. We denote by $x$ and $y$ the inputs and by $a$ and $b$ the outcomes of Alice and Bob, respectively. Furthermore, we denote by $\lambda$ the possible common pasts that both particles share\footnote{The mathematically precise notation will be introduced in section \ref{technicalities}.}. Using nothing but the properties of probability distributions, see section \ref{secprobabilities}, we have that \begin{align}
p(abxy)=\int\mathrm{d}\lambda\rho(\lambda)p(xy|\lambda)p(ab|xy\lambda). \end{align}
We can now introduce the three assumptions that comprise the concept of locality: \begin{itemize} \item \textit{Outcome independence:} Any correlation between the outcomes comes from their inputs as well as their common past: \\
$p(ab|xy\lambda)=p(a|xy\lambda)p(b|xy\lambda)$ \item \textit{Parameter independence:} The outcome on one side cannot depend on the input of the other side:\\
$p(a|xy\lambda)=p(a|x\lambda)$ and $p(b|xy\lambda)=p(b|y\lambda)$ \item \textit{Measurement independence:} The inputs were chosen independently of the common past of the particles:\\
$p(xy|\lambda)=p(xy)$ \end{itemize}
Combining the three assumptions and dividing both sides by $p(xy)$, we find the usual definition of local correlations \begin{align} \label{local}
p(ab|xy)=\int\mathrm{d}\lambda\rho(\lambda)p(a|x\lambda)p(b|y\lambda). \end{align} While we do not go into the details of the proof here, it is important to note that for a fixed number of inputs and outputs on each side, these correlations form a convex polytope, a convex geometric structure with a finite set of vertices. A convex polytope has the useful property that it can, equivalently, be described by a set of linear inequalities instead of by its vertices. Within the context of local probability distributions, these inequalities are called Bell inequalities. The most well-known example of such an inequality is given in the scenario where Alice and Bob choose from 2 possible inputs ($0$ and $1$) and receive binary outcomes: the CHSH Bell inequality~\cite{Clauser1969}. We rewrite it here in the form, equivalent under no-signaling, first introduced by Eberhard~\cite{Eberhard93} \begin{align} \label{Eberhard}
p(00|00)-p(01|01)-p(10|10)-p(00|11)\leq 0. \end{align} Any local distribution, meaning any probability distribution that can be written in the form (\ref{local}) satisfies this inequality.\\
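One can verify this quickly on the extreme points of the local polytope (we include the short check for completeness): in this scenario the extreme points are the $16$ deterministic strategies $a=f(x)$, $b=g(y)$ with $f,g:\{0,1\}\rightarrow\{0,1\}$. For such a strategy, the left-hand side of (\ref{Eberhard}) can only be positive if $p(00|00)=1$, i.e. $f(0)=g(0)=0$, and if $p(01|01)=p(10|10)=0$, i.e. $g(1)=0$ and $f(1)=0$; but then $p(00|11)=1$, so the left-hand side is at most $0$. The general case follows by convexity.\\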
The interesting part is that there are correlations which can be realized by quantum mechanics that violate this inequality, which entails certain interesting properties independently of how exactly the correlations were realized. These properties are based on the fact that no common past can explain the correlations. It is therefore important to ensure that if such a common past were to exist, it would have to satisfy the outcome independence, parameter independence and measurement independence assumptions. When it comes to outcome and parameter independence, this is usually not seen as a problem: if we assume that information transfer needs a physical carrier, then we can in principle enforce outcome and parameter independence by performing the experiment in spacelike separation. Failing that, at least when it comes to applications it is usually still reasonable to assume that no signal is transmitted between the devices of Alice and Bob. The reason for this is that the tasks that nonlocal correlations are useful for are mostly linked to some form of privacy, like cryptography. A transmitter in the device would render such a task inherently impossible. Overall, it is fair to say that the parameter independence assumption is well-motivated . It is assumed to hold for the rest of this work.
The same can however not be said for the assumption of measurement independence. From a foundational perspective, a total independence of the input-choice $x$ and $y$ and the common past $\lambda$ of the particles may seem acceptable, but is still an uncomfortable assumption that cannot be enforced. When it comes to applications it gets even worse: if we consider the common past $\lambda$ to represent the influence of some adversary in a privacy related information theoretic task, then the measurement independence assumption means that for some reason this adversary has no way at all to influence the random number generators used to generate the inputs. We have to assume that these are fully safe and unpredictable. Weakening this assumption is the focus of this work.
Unfortunately, it is not possible to fully eliminate the measurement independence assumption. Without it, outcome independent and parameter independent distributions can reproduce all quantum correlations, and the framework becomes useless. We can however weaken the assumption as much as possible by replacing it with a less restrictive form, see figure \ref{figmdlscenario}. Such a weakening has been studied before in different works~\cite{Hall2011,Barrett2011,Thinh2013}, but here we will take a slightly different approach. Specifically, for fixed parameters $\ell$ and $h$ with $[\ell,h]\subsetneqq [0,1]$, we replace the measurement independence assumption by \begin{align} \label{md}
\ell\leq p(xy|\lambda)\leq h\quad\forall x,y,\lambda. \end{align} In other words, we do allow the common past $\lambda$ to influence the inputs, but only to the degree that each input pair $(x,y)$ still has a probability of at least $\ell$ and at most $h$ of appearing. This particular constraint measures the amount of randomness in the inputs by limiting their deviation from uniform even when conditioned on the common past $\lambda$. It corresponds to the single-shot version of the Santha-Vazirani condition \begin{align}
\epsilon\leq p(x_i|x_1\ldots x_{i-1}\lambda)\leq 1-\epsilon, \end{align} which is the constraint usually considered in the context of randomness amplification~\cite{Colbeck2012}. We thus study the correlations that are of the form \begin{align} \label{mdl}
p(abxy)=\int\mathrm{d}\lambda\rho(\lambda)p(xy|\lambda)p(a|x\lambda)p(b|y\lambda) \end{align} and fulfill assumption (\ref{md}). We call these correlations \textit{measurement dependent local}. We introduced them originally in \cite{Putz14} where we showed that they form, like the local correlations, a polytope. We also demonstrated that, maybe surprisingly, there are quantum correlations that cannot be reproduced by such measurement dependent local correlations as long as $\ell>0$. This means that, even though the measurement independence assumption cannot simply be removed, it can be made arbitrarily weak.
\begin{figure}
\caption{A bipartite scenario with measurement dependent local hidden variables. A source emits particles carrying the common information $\Lambda$ towards the experimenters. The latter then choose measurements labeled by $X$ and $Y$ respectively and get outcomes $A$ and $B$. The choice of measurements is correlated with the particles' common past $\Lambda$.}
\label{figmdlscenario}
\end{figure}
In the rest of the present work, we study these measurement dependent local correlations in different settings. We first clarify notations and definitions and state the technical results from which our results follow in section \ref{technicalities}. We then briefly recapitulate the results of \cite{Putz14}, before turning to new results, starting with a more extensive study of the scenario of 2 parties with binary inputs and outputs. In section \ref{secgeneral}, we show that any Bell inequality can be transformed into a measurement dependent local inequality. We use this result to study the scenario of $N$ parties with binary inputs and outputs. The generalized scenario of 2 parties with an arbitrary number of inputs is also considered, and we show that even $\ell=0$ does not allow measurement dependent local models to reproduce all quantum mechanical correlations. We introduce the additional reasonable assumption of independent sources and study the resulting correlations in section \ref{secindsources}. Finally, we introduce the concept of limited detection locality and show how detection inefficiencies can be linked to measurement dependence in section \ref{LDL}.
\section{Technicalities} \label{technicalities} In this section we introduce the mathematical concepts used in this paper, clarify notations and definitions and state the technical theorems from which several of our results follow.
\subsection{Polytopes} \label{polytopes} A polytope is a convex geometric structure, which we write as a set of vectors, that has a finite number of extremal points, called vertices. Since it is convex, it is fully described by the set of extremal points: all other points that are part of the structure are given by convex combinations of the vertices. Equivalently, a polytope can be described by a finite set of hyperplanes which are such that all the points inside the polytope lie on one side of each hyperplane. These are called the facets of the polytope. See figure \ref{figpolytope} for an example in 2 dimensions.\\
\begin{figure}
\caption{A polytope in 2 dimensions. The set of vertices is given by $\mathcal{V}=\{v_i\}_{i=1}^5$ and the facets are given by the solid black lines. In 2 dimensions, every facet is given by 2 vertices. This is however not the case in general dimensions. To determine whether a point lies within the polytope, we check that it lies on the right side of all the facets, as is the case for the point $p$. The point $q$ lies on the wrong side of the facet $(v_2v_3)$ and is thus not part of the polytope. It is often computationally difficult to find all the facets. Hyperplanes like the red dashed line can in such cases still be useful, as they still allow one to determine that the point $q$ is not a member of the polytope.}
\label{figpolytope}
\end{figure}
\textit{Vertices:} Given a polytope $\mathcal{P}$, we denote by $\mathcal{V_P}=\{v_\mathcal{P}^i\}_{i=1}^n$ the set of its vertices. Every point inside the polytope can be written as a convex combination of these vertices, $p\in\mathcal{P}\Leftrightarrow p=\sum_i\alpha_i v_\mathcal{P}^i$ with $\alpha_i\geq 0$, $\sum_i\alpha_i=1$. Besides being able to generate every point of the polytope, the set of vertices can be useful when optimizing convex quantities: any convex function from the polytope to $\mathbb{R}$ attains its global maximum at (at least) one of the vertices. For linear functions, the same holds for the global minimum.\\
\textit{Facets:} We denote by $\mathcal{F_P}=\{(f_i,B_i)\}_{i=1}^m$ the set of facets of the polytope. A facet is a hyperplane and thus described by a linear equality of the form $f\cdot p= B$, where $\cdot$ denotes the scalar product. All the points of the polytope lie on the same side of each facet, i.e. $p\in\mathcal{P}\Rightarrow f\cdot p\leq B$\footnote{Technically, we could also have $p\in\mathcal{P}\Rightarrow f\cdot p\geq B$. In this case we simply set $f\rightarrow -f$ and $B\rightarrow -B$ since this does not change the equation of the hyperplane $f\cdot p=B$. We can thus choose $\leq$ by convention.} Additionally, if a point lies on the correct side of all the facets, then it follows that it is a member of the polytope, such that we have $f\cdot p\leq B$ $\forall (f,B)\in\mathcal{F_P}\Rightarrow p\in\mathcal{P}$. The facets are useful to determine whether a point lies inside the polytope or not since one only needs to check whether or not it respects all the inequalities given by $\mathcal{F_P}$.\\
It is possible to find $\mathcal{F_P}$ given $\mathcal{V_P}$ and vice versa. We will not elaborate on how this is done here; it suffices to say that it can be done using programs like, for example, the porta-library~\cite{Porta}. Unfortunately, it is computationally expensive to perform this transformation. It is therefore useful to establish inequalities that, even though they are not facets, hold for all the elements of the polytope. A violation of such an inequality then implies that a point is not a member of the polytope. Within the context of nonlocality, randomness generation and similar tasks, we are usually interested in certifying precisely such non-memberships.\\
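For readers who want to experiment with such checks numerically, the following minimal Python sketch (illustrative code only, not part of any published package) tests a point against a list of hyperplane inequalities $f\cdot p\leq B$; a single violated inequality certifies non-membership of the polytope.

\begin{verbatim}
import numpy as np

def violated_facets(point, facets):
    """Return the indices of the inequalities f.p <= B violated by `point`.

    `facets` is a list of (f, B) pairs.  An empty result means the point
    satisfies all given inequalities (and hence lies inside the polytope
    whenever the list of facets is complete)."""
    return [i for i, (f, B) in enumerate(facets)
            if np.dot(f, point) > B + 1e-12]

# Toy example: the unit square [0,1]^2 described by its four facets.
square = [(np.array([1.0, 0.0]), 1.0),    #  x <= 1
          (np.array([-1.0, 0.0]), 0.0),   # -x <= 0
          (np.array([0.0, 1.0]), 1.0),    #  y <= 1
          (np.array([0.0, -1.0]), 0.0)]   # -y <= 0
print(violated_facets(np.array([0.3, 0.7]), square))  # [] -> member
print(violated_facets(np.array([1.2, 0.7]), square))  # [0] -> not a member
\end{verbatim}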
In \cite{Putz14}, we proved a general theorem on polytopes: Given two polytopes, they can be combined in a specific way such that the resulting structure forms again a polytope. If the vertices of the two composing polytopes are known, then so are the vertices of the resulting polytope. The measurement dependent local correlations we study in this paper form such a polytope.
\begin{thm} \label{polytopethm} Let $\mathcal{P}\subset\mathbb{R}^{m'n}$ and $\mathcal{Q}\subset\mathbb{R}^{mn}$ be two polytopes. Let the components of the vectors of $\mathcal{P}$ be labelled by $p(k,l)$ and the ones of $\mathcal{Q}$ by $q(k,l')$ where $k=1\ldots n$, $l=1\ldots m$ and $l'=1\ldots m'$. Let \begin{align} \mathcal{R}=\big\{r\in\mathbb{R}^{mm'n} : &\exists\Lambda\text{ measureable set},\\ &\exists\rho:\Lambda\rightarrow [0,1]\text{ with }\int_\Lambda\mathrm{d}\lambda\rho(\lambda)=1\nonumber\\ &\exists \{p_\lambda\}_{\lambda\in\Lambda}\subset\mathcal{P}, \quad\exists \{q_{\lambda}\}_{\lambda\in\Lambda}\subset\mathcal{Q} \text{ s.t.}\nonumber\\ &r(k,l,l')=\int\mathrm{d}\lambda\rho(\lambda)p_\lambda(k,l)q_{\lambda}(k,l')\quad \forall k\in\{1\ldots n\},l\in\{1\ldots m\},l'\in\{1\ldots m'\}\big\}.\nonumber \end{align} Let $\mathcal{V_P}$ and $\mathcal{V_Q}$ be the sets of vertices of $\mathcal{P}$ and $\mathcal{Q}$ respectively.\\ Then $\mathcal{R}$ is a polytope and its set of vertices $\mathcal{V_R}$ fulfils \begin{align}
\mathcal{V_R}\subseteq\big\{v_\mathcal{R}^{ij} : v_\mathcal{R}^{ij}(k,l,l')=v_\mathcal{P}^i(k,l)v_\mathcal{Q}^j(k,l')\quad i\in\{1\ldots |\mathcal{V_P}|\},j\in\{1\ldots |\mathcal{V_Q}|\}\big\}. \end{align} \end{thm}
The proof of this theorem can be found in the appendix. Note that it is unproblematic that the theorem only shows that the vertices of $\mathcal{R}$ are a subset of the set of combined vertices; all of the points in the set of combined vertices, of which there are finitely many, lie inside the polytope, they may just not be vertices themselves. These nonextremal points can easily be eliminated by checking, for each element of the set of combined vertices, whether it can be written as a convex combination of the others.\\
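A minimal sketch of this redundancy check, in Python and using a standard linear-programming routine (illustrative only; the helper name and the toy example are ours), could look as follows: a candidate point is discarded whenever the feasibility program that tries to write it as a convex combination of the remaining candidates has a solution.

\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def extremal_points(candidates):
    """Keep only those candidates that are not convex combinations of the
    remaining candidates (one feasibility linear program per point)."""
    candidates = [np.asarray(c, dtype=float) for c in candidates]
    kept = []
    for i, v in enumerate(candidates):
        others = [c for j, c in enumerate(candidates) if j != i]
        A_eq = np.vstack([np.column_stack(others),     # sum_j w_j v_j = v
                          np.ones((1, len(others)))])  # sum_j w_j     = 1
        b_eq = np.concatenate([v, [1.0]])
        res = linprog(np.zeros(len(others)), A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, None)] * len(others), method="highs")
        if not res.success:          # infeasible -> v is extremal, keep it
            kept.append(v)
    return kept

# Toy example: the midpoint of a segment is discarded, its endpoints are kept.
print(len(extremal_points([[0, 0], [1, 1], [0.5, 0.5]])))  # 2
\end{verbatim}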
\subsection{Random variables and probabilities} \label{secprobabilities} As already apparent in the introduction, this work relies heavily on probability distributions. We therefore define them here.
\textit{Probability distribution or density:} A probability distribution is a function $p$ from some discrete set $S$ into $[0,1]$ such that \begin{align} \label{positivity} p(s)&\geq 0\quad\forall s\in S\\ \label{normalization} \sum_{s\in S}p(s)&=1. \end{align} Analogously, a probability density is a function $\rho$ from a non-discrete measurable set $C$ into $[0,1]$ such that \begin{align} \rho(c)&\geq 0\quad\forall c\in C\\ \int_C\mathrm{d} c\rho(c) &=1. \end{align} Throughout the paper, we will denote distributions over a discrete set by $p$ and densities over a (potentially) non-discrete set by $\rho$.
\textit{Random variable:} A random variable $V$ is a variable taking values in a given measurable set, called the alphabet, with an associated probability distribution. In the following, we use capital letters to denote random variables and the corresponding lower case letters to denote the values of the variables. The probability distribution over the random variable $V$ is denoted by $p_V$ and the probability of $V$ taking the value $v$ by $p_V(v)$. To ease notation, we will often just write $p(v)$ if the random variable is clear. The same goes for the case of continuous measurable alphabets and probability densities.\\
\textit{Joint distribution:} Given two random variables $V$ and $W$, a joint probability distribution $p_{VW}$ may be defined. It satisfies \begin{align} p_V(v)=\sum_{w}p_{VW}(vw) \end{align} and similarly for $W$. $p_V$ and $p_W$ are called the marginals of $p_{VW}$.\\
\textit{Conditional distribution:} Given two random variables $V$ and $W$ with a joint distribution $p_{VW}$, we define the probability distribution over $V$ conditioned on $W$ by \begin{align}
p_{V|W}(v|w)=\frac{p_{VW}(vw)}{p_W(w)}. \end{align}
This is the standard definition of conditional probability (Bayes' rule follows directly from it). $p_{V|W}(v|w)$ represents the probability of $V$ taking value $v$ given the knowledge that $W$ took value $w$.
\subsection{Locality, nonsignaling and measurement dependence} \label{lnsmdl} Let us now get back to the topic of this paper. We consider a fixed number of distinct parties between which particles are distributed from a common source. The parties then perform different possible measurements, labelled by an input, on these particles and receive an outcome each.\\
\textit{Scenario:} A scenario is defined by a vector of integers $S=\big(N,\{n_i\}_{i=1}^N,\{\{m_i^j\}_{j=0}^{n_i-1}\}_{i=1}^N\big)$, which represent, in order, the number of parties in the scenario, the number of inputs for each party and the number of outcomes for each input of each party. Very often we consider simplified scenarios where, for example, each party has the same number of inputs or each input has the same number of outputs. We use a simplified notation in this case. For example the notation $(N,n,m)$ denotes the scenario of $N$ parties, each having $n$ inputs and $m$ outcomes for each input.\\
\textit{Random variables of the scenario:} We denote by $X_i$ the random variable modelling the input of the $i$-th party, with alphabet $\{0\cdots n_i-1\}$. For the outputs of the $i$-th party, we define the random variable $A_i$ with alphabet $\{0\cdots\max_j m_i^j-1\}$ with the property that the corresponding probability distribution satisfies $p_{A_iX_i}(ax)=0$ if $a\geq m_i^x$. This last condition is due to the fact that for input $x$ only $m_i^x$ different outcomes can occur. Additionally, we denote by $\Lambda$ the random variable describing the common past of the particles. We do not restrict its alphabet.\\
In the following, we define several sets of probability distributions that will be of interest. Since these sets all contain probability distributions only, we do not repeat this requirement in each definition.\\
\textit{Locality:} Local correlations have been presented in the introduction. They arise from the assumptions of outcome independence, parameter independence and measurement independence. The set of local correlations for a fixed scenario $S$ is defined by \begin{align} \label{localpolytope}
\mathcal{L}_{S}=\big\{p_{\{A_i\}_{i=1}^N|\{X_i\}_{i=1}^N} : &\exists \Lambda \text{ random variable and }\rho_{\{A_i\}_{i=1}^N\Lambda|\{X_i\}_{i=1}^N}\\
&p(\{a_i\}_{i=1}^N|\{x_i\}_{i=1}^N)=\int_\Lambda\mathrm{d}\lambda\rho(\lambda)\prod_{i=1}^Np(a_i|x_i\lambda)\big\}.\nonumber \end{align} Whenever the scenario $S$ is clear, we will omit it to ease notation. It is a well-known fact that the set of local correlations for a fixed scenario forms a polytope, the vertices of which are given by \begin{align}
\mathcal{V_L}=\big\{v_{\{A_i\}_{i=1}^N|\{X_i\}_{i=1}^N} : &\forall i\in\{1\ldots N\}\forall x_i\in\{0\ldots n_i-1\} \exists! a_i^{x_i}\text{ s.t. }\\
&v(\{a_i\}_{i=1}^N|\{x_i\}_{i=1}^N)=\prod_i\delta_{a_i=a_i^{x_i}}\big\}.\nonumber \end{align} These are called the deterministic points since the corresponding probability distributions are deterministic, i.e. only take values $0$ or $1$. It is worth noting that this could also have been proven using our polytope-theorem \ref{polytopethm} iteratively. \\
\textit{Nonsignaling:} Nonsignaling distributions arise from the assumptions of parameter independence and measurement independence, but do not necessarily satisfy outcome independence. The name comes from the fact that this is the maximal set of distributions that cannot be used to transmit a signal among the parties, i.e. the outcome of party $i$ does not reveal any information about the inputs of the other parties. For a given scenario $S$, which again will usually be omitted in the rest of the paper, we define the set of nonsignaling distributions by \begin{align} \label{nspolytope}
\mathcal{NS}_{S}=\big\{p_{\{A_i\}_{i=1}^N|\{X_i\}_{i=1}^N} : p(a_j|\{x_i\}_{i=1}^N)=p(a_j|x_j)\quad\forall j,a_j,\{x_i\}_{i=1}^N\big\}. \end{align}
This set again forms a polytope, since it is only subject to linear constraints: the positivity and normalization requirements of probability distributions as well as the nonsignaling condition $p(a_j|\{x_i\}_{i=1}^N)=p(a_j|x_j)$. The set of all quantum correlations is nonsignaling and therefore included in the nonsignaling polytope. One of the main reasons to study nonsignaling correlations is that the set of quantum correlations is difficult to characterize compared to the nonsignaling polytope.\\
\textit{Measurement dependent locality:} The idea behind measurement dependent local distributions has already been introduced in the introduction. Formally, we consider a fixed scenario $S=\big(N,\{n_i\}_{i=1}^N,\{\{m_i^j\}_{j=1}^{n_i}\}_{i=1}^N\big)$ as well as fixed parameters $\ell$ and $h$, which will usually be omitted from notation in the following, with \begin{align} \label{lhbounds} \max(1-\big((\prod_{i=1}^Nn_i)-1\big)h,0)\leq\ell\leq \frac{1}{\prod_{i=1}^Nn_i}\leq h\leq 1-\big((\prod_{i=1}^Nn_i)-1\big)\ell. \end{align} The set of measurement dependent local correlations for these $\ell$ and $h$ is then given by \begin{align} \label{mdlpolytope} \mathcal{MDL}_S(\ell,h)=\big\{p_{\{A_i\}_{i=1}^N\{X_i\}_{i=1}^N} : &\exists \Lambda \text{ random variable and }\rho_{\{A_i\}_{i=1}^N\Lambda\{X_i\}_{i=1}^N}\\
&p(\{a_i\}_{i=1}^N\{x_i\}_{i=1}^N)=\int_\Lambda\mathrm{d}\lambda\rho(\lambda)p(\{x_i\}_{i=1}^N|\lambda)\prod_{i=1}^Np(a_i|x_i\lambda)\nonumber\\
&\ell\leq p(\{x_i\}_{i=1}^N|\lambda)\leq h\quad\forall \{x_i\}_{i=1}^N,\lambda\big\}.\nonumber \end{align}
The bounds on $\ell,h$ given by (\ref{lhbounds}) follow from the fact that probability distributions are positive (\ref{positivity}) and normalised (\ref{normalization}): parameters $(\ell,h)$ outside these bounds lead to a constraint that is either trivially satisfied (e.g. $\ell< 0$) or impossible to satisfy (e.g. $\ell> \frac{1}{\prod_{i=1}^Nn_i}$). The set of input-distributions for a given $\lambda$, i.e. the set of the possible $p(\{x_i\}_{i=1}^N|\lambda)$, is given by \begin{align} \mathcal{I}(\ell,h)=\big\{ p_{\{X_i\}_{i=1}^N} : \ell\leq p(\{x_i\}_{i=1}^N)\leq h\quad \forall \{x_i\}_{i=1}^N\big\}.\nonumber \end{align} Since the constraints are all linear, this is a polytope. The vertices are the points that saturate as many of the inequalities as possible\footnote{Any point that does not can be written as a convex combination of two points which saturate all the inequalities that the point does as well as at least one more.}. Defining $t=\lfloor\frac{(\prod_in_i)h-1}{h-\ell}\rfloor$, the vertices are therefore given by \begin{align} \label{Vi} \mathcal{V_I}(\ell,h)=\big\{v_{\{X_i\}_{i=1}^N} :& \exists\pi\text{ permutation s.t. } \\ &v=\pi\big((\underbrace{\ell\ldots\ell}_{t},\underbrace{h\ldots h}_{(\prod_in_i) -t-1},1-t\ell-((\prod_in_i) -t-1)h)\big)\big\}.\nonumber \end{align}
Our analysis is made possible by the fact that the measurement dependent local distributions, just like the local and nonsignaling distributions, form a polytope. \begin{crl} \label{mdlthm} For fixed $S$, $\ell$ and $h$, $\mathcal{MDL}_S(\ell,h)$ forms a polytope. Its vertices are given by \begin{align} \label{Vmdl}
\mathcal{V_{MDL}}(\ell,h)\subseteq\big\{v^{ij}_{\{A_i\}_{i=1}^N,\{X_i\}_{i=1}^N} : &v^{ij}(\{a_i\}_{i=1}^N,\{x_i\}_{i=1}^N)=v^{i}_\mathcal{I}(\{x_i\}_{i=1}^N)v^j_\mathcal{L}(\{a_i\}_{i=1}^N|\{x_i\}_{i=1}^N)\nonumber\\ &v^{i}_\mathcal{I}(\{x_i\}_{i=1}^N)\in \mathcal{V_I}(\ell,h),\quad v^{j}_\mathcal{L}\in \mathcal{V_L}\big\}_{i,j}. \end{align} \end{crl} This is a direct corollary of the polytope-theorem \ref{polytopethm}, since the measurement dependent set is obtained by combining the input polytope $\mathcal{I}(\ell,h)$ with the local polytope $\mathcal{L}$.
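As an illustration of this construction in the $(2,2,2)$ case with $h=1-3\ell$ (where the input vertices (\ref{Vi}) are the four distributions assigning $h$ to one input pair and $\ell$ to the other three), the following Python sketch (ours, for illustration only) builds the candidate MDL vertices as products of an input vertex with a deterministic local vertex.

\begin{verbatim}
from itertools import product

ell = 0.1                    # example value; h = 1 - 3*ell, cf. (lhbounds)
h = 1 - 3 * ell

# Input-polytope vertices for h = 1-3*ell: one input pair gets probability h,
# the three others get probability ell (four vertices in total).
input_vertices = [{(x, y): (h if (x, y) == heavy else ell)
                   for x, y in product([0, 1], repeat=2)}
                  for heavy in product([0, 1], repeat=2)]

# Deterministic local vertices: output assignments a(x) for Alice, b(y) for Bob.
local_vertices = [{(a, b, x, y): int(a == (a0, a1)[x] and b == (b0, b1)[y])
                   for a, b, x, y in product([0, 1], repeat=4)}
                  for a0, a1, b0, b1 in product([0, 1], repeat=4)]

# Candidate MDL vertices: v(abxy) = v_I(xy) * v_L(ab|xy), as in the corollary.
mdl_vertices = [{key: vI[key[2:]] * vL[key] for key in vL}
                for vI in input_vertices for vL in local_vertices]

print(len(mdl_vertices))     # 4 * 16 = 64 candidate vertices
\end{verbatim}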
It is interesting to note that the MDL-polytope is not a subset of the nonsignaling polytope. The fact that the hidden variable is correlated with the input of one party and the outcome of the others makes it possible to establish correlations between them; in other words, it permits signaling at the level of the observed distribution.
\section{The (2,2,2) scenario} \label{222} We consider here the simplest case of $2$ parties with binary inputs and outputs on each side. To simplify notation, we call these parties Alice and Bob and label their inputs by $X$ and $Y$ and their outputs by $A$ and $B$ respectively. For fixed $\ell$ and $h$, we want to know whether a given correlation could be explained by measurement dependent local (MDL) resources, i.e. whether it is of the form \begin{align} \label{mdlcorr}
p(abxy)=\int\mathrm{d}\lambda\rho(\lambda)p(xy|\lambda)p(a|x\lambda)p(b|y\lambda)\\
\ell\leq p(xy|\lambda)\leq h\quad \forall x,y,\lambda\nonumber \end{align} for fixed $\ell$ and $h$ with $[\ell,h]\subsetneqq [0,1]$. In section \ref{technicalities}, we fully characterized the set of MDL correlations by showing that they can all be written as a convex combination of a finite set of vertices, i.e. they form a polytope. Since we are mostly interested in determining membership, finding inequalities respected by all MDL correlations is of interest.\\
Let us first recapitulate the results of our previous paper~\cite{Putz14}. Using the polytope structure, we showed that all MDL correlations satisfy the inequality \begin{align} \label{goldenineq} \ell p(0000)-h\big(p(0101)+p(1010)+p(0011)\big)\leq 0. \end{align} This inequality turns out to be very useful when looking at quantum correlations. In fact, perhaps surprisingly, there exist quantum correlations which violate this inequality $\forall\ell>0$ and $\forall h$. To this end, we need to find quantum correlations that satisfy \begin{align*} p(0000)>0,\quad p(0101)=p(1010)=p(0011)=0. \end{align*} This is a formulation of Hardy's paradox~\cite{Hardy93}, which was shown to be realizable using any partially entangled pure 2-qubit state. For example, it can be realized using the 2-qubit state \begin{align} \label{goldenstate} \ket{\Psi}=\frac{1}{\sqrt{3}}\big(\ket{01}+\ket{10}-\ket{11}\big) \end{align} if Alice and Bob both measure in the basis $\left\{\frac{\ket{0}+\ket{1}}{\sqrt{2}},\frac{\ket{0}-\ket{1}}{\sqrt{2}}\right\}$ for input $0$ and in the basis $\left\{\ket{0},\ket{1}\right\}$ for input $1$. This state was implemented experimentally by our collaborators~\cite{Aktas2015}\footnote{Note that in that paper the state (\ref{goldenstate}) is written in its Schmidt basis.}; due to noise limitations, the experiment violated inequality (\ref{goldenineq}) for all $\ell>0.090$.\\
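As a small numerical sanity check (illustrative code of ours, assuming for concreteness uniformly distributed inputs $p(xy)=1/4$), one can verify the Hardy-type probabilities of the state (\ref{goldenstate}) with the measurements quoted above and evaluate the left-hand side of (\ref{goldenineq}) directly:

\begin{verbatim}
import numpy as np

# |Psi> = (|01> + |10> - |11>)/sqrt(3), in the ordered basis |00>,|01>,|10>,|11>.
psi = np.array([0.0, 1.0, 1.0, -1.0]) / np.sqrt(3)

plus = np.array([1.0, 1.0]) / np.sqrt(2)    # outcome 0 for input 0
minus = np.array([1.0, -1.0]) / np.sqrt(2)  # outcome 1 for input 0
zero = np.array([1.0, 0.0])                 # outcome 0 for input 1
one = np.array([0.0, 1.0])                  # outcome 1 for input 1
basis = {0: [plus, minus], 1: [zero, one]}

def p(a, b, x, y):
    """Conditional probability p(ab|xy) for the state and measurements above."""
    amplitude = np.kron(basis[x][a], basis[y][b]) @ psi
    return abs(amplitude) ** 2

print(p(0, 0, 0, 0))                                  # 1/12 > 0
print(p(0, 1, 0, 1), p(1, 0, 1, 0), p(0, 0, 1, 1))    # all zero

# Left-hand side of the MDL inequality assuming uniform inputs p(xy) = 1/4:
ell = 0.01
h = 1 - 3 * ell
lhs = 0.25 * (ell * p(0, 0, 0, 0)
              - h * (p(0, 1, 0, 1) + p(1, 0, 1, 0) + p(0, 0, 1, 1)))
print(lhs > 0)                                        # True for any ell > 0
\end{verbatim}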
Let us stress this result: Measurement independence was a crucial assumption in deriving nonlocality, and abandoning it entirely would allow local hidden variables to explain quantum mechanics. We show here that if an arbitrarily small amount of measurement independence can be guaranteed, then every pure 2-qubit state, except for product states and the maximally entangled state, can produce correlations which cannot be reproduced by measurement dependent local resources. In other words, quantum mechanics resists an arbitrary lack of measurement independence, sometimes also called lack of free choice. Besides the immediate implication that the fundamental explanation of nature cannot be given by measurement dependent local models with limited measurement dependence, it is of interest that it is possible to make use of the properties of nonlocal correlations even when measurement independence is not guaranteed, which could be useful for tasks such as randomness generation or quantum cryptography.\\
In the following we perform an analysis of the MDL-polytope more complete than the one presented in \cite{Putz14}. In section \ref{polytopes}, we explained that when it comes to the question of membership, i.e. whether a certain point lies within a set, polytopes have the useful property that they can be fully described by a set of linear inequalities called facets. Any point satisfying all the inequalities lies within the set, any point violating any of the inequalities lies outside. The scenario of 2 parties with binary inputs and outputs is dimensionally small enough that we managed to solve the polytope (i.e. find all its facets, cf. section \ref{polytopes}) for fixed values of $\ell$ and $h$, from which we rederived the inequalities as a function of $\ell$ and $h$. While we can easily check that the inequalities are valid for the given range of $\ell$ and $h$ considered, we cannot guarantee that in fact there are no new facets appearing. We do conjecture, however, that this is not the case and that the lists of facets are actually complete. We write the inequalities in the form \begin{align} \sum_{abxy}\beta_{ab}^{xy}p(abxy)\leq 0. \end{align} Due to symmetries, we can classify the facets into families. Given one member of a family, the others can be found by exchanging the parties ($\beta_{ab}^{xy}\rightarrow\beta_{ba}^{yx}$), flipping Alice's or Bob's input ($\beta_{ab}^{xy}\rightarrow\beta_{ab}^{(x\oplus 1)y}$ or $\beta_{ab}^{xy}\rightarrow\beta_{ab}^{x(y\oplus 1)}$), flipping Alice's or Bob's outcome ($\beta_{ab}^{xy}\rightarrow\beta_{(a\oplus 1)b}^{xy}$ or $\beta_{ab}^{xy}\rightarrow\beta_{a(b\oplus 1)}^{xy}$) or any combination thereof.
We consider the following four cases: \begin{itemize}
\item $\ell=h=\frac{1}{4}$: In this case it follows that $p(xy|\lambda)=p(xy)=\frac{1}{4}$, and we are therefore back to full measurement independence. This case thus corresponds to the standard local polytope. \item $\ell>0$, $h=1-3\ell$: Only a lower bound is imposed; the bound on $h$ follows from (\ref{lhbounds}). For this case, we find $74$ facets, which can be found in table B.1. \item $\frac{1}{4}<h<\frac{1}{3}$, $\ell=1-3h$: Only an upper bound is imposed; the lower bound $\ell$ follows from (\ref{lhbounds}). We find 93 facets, which are given in table B.2. \item $h\geq\frac{1}{3}$, $\ell=0$: In this case, measurement dependent local correlations can reproduce all (2,2,2) nonsignaling correlations\footnote{This can be seen by explicitly writing the nonsignaling vertices as a convex combination of MDL vertices. Intuitively, $\ell=0$ means that in each run, there is only one input for which Alice does not know Bob's input and vice-versa. In such a case local correlations can reproduce all nonsignaling correlations.} (cf. section \ref{lnsmdl}), and thereby all (2,2,2) quantum correlations. It is therefore not interesting to analyze this case for our purposes. \end{itemize}
There are of course other possible ranges of values for $\ell$ and $h$. However, it follows from the definition of the MDL-correlations that for $\ell'\leq\ell$ and $h'\geq h$ we have $\mathcal{MDL}(\ell,h)\subset\mathcal{MDL}(\ell',h')$. As long as we only wish to prove non-membership for a given $(\ell,h)$, we can thus just check for non-membership of $\mathcal{MDL}(\ell,1-3\ell)$.\\
As already mentioned, all pure partially entangled bipartite states can violate inequality (\ref{goldenineq}) $\forall\ell>0$. They can thus produce correlations which cannot be reproduced by any MDL model with $\ell>0$. Part of the motivation for this complete analysis stems from the fact that the maximally entangled two-qubit state \begin{align} \ket{\Psi}=\frac{1}{\sqrt{2}}\big(\ket{00}+\ket{11}\big) \end{align} seems to be singled out. This could have been due to our choice of inequality. However, we can now conduct numerical searches for all the inequalities of the polytope for a fixed value of $\ell$ and $h=1-3\ell$\footnote{Note that while we can only conjecture that the families of facets we found are complete for the whole range of parameters considered, we can verify that for fixed values of the parameters they are indeed complete.} for correlations arising from projective measurements on the maximally entangled state.
We conducted this numerical search and found that, surprisingly, the maximally entangled state, which leads to the largest violation of the standard locality inequalities in the $(2,2,2)$ scenario, does not seem to allow for a violation of any MDL-inequality for $\ell\leq 0.14$ and $h=1-3\ell$. We conjecture therefore that there exists an MDL-model with $\ell=0.14$ that reproduces the correlations of the maximally entangled 2-qubit state in the case of binary inputs and outputs. For nonlocality experiments and applications taking measurement dependence into account, it is therefore advisable to use partially entangled states.\\
\section{More general scenarios} \label{secgeneral} In this section, we consider more general scenarios, meaning scenarios with more than 2 parties or with additional inputs and outputs. Due to the computational complexity, we were not able to find the full set of facets for such scenarios. However, we introduce a procedure to transform any Bell inequality\footnote{Meaning an inequality for the set of local correlations.} into a valid MDL inequality. This allows one to easily use the vast number of inequalities which have been derived for the local polytope to immediately derive valid inequalities for the MDL polytope. In addition, we present one inequality which is not of this form and which can be violated even for $\ell=0$, something which is impossible in the $(2,2,2)$ case.
\subsection{Transforming any Bell inequality into an MDL inequality} \label{belltomdl} Consider a general Bell inequality, holding for all local correlations, i.e. correlations of the form (\ref{local}), given by \begin{align} \label{generalbellineq}
\sum_{\{a_i\}_{i=1}^N,\{x_i\}_{i=1}^N}\beta_{\{a_i\}}^{\{x_i\}}p(\{a_i\}_{i=1}^N|\{x_i\}_{i=1}^N)\leq B, \end{align} with $\beta_{\{a_i\}}^{\{x_i\}}\in\mathbb{R}$. Then we can derive an MDL inequality in the following way. Let $p_{\{A_i\}_{i=1}^N,\{X_i\}_{i=1}^N}$ be an MDL correlation. Then
\begin{align*} &\ell\sum_{\beta_{\{a_i\}}^{\{x_i\}}\geq 0}\beta_{\{a_i\}}^{\{x_i\}}p(\{a_i\}_{i=1}^N,\{x_i\}_{i=1}^N)+h\sum_{\beta_{\{a_i\}}^{\{x_i\}}< 0}\beta_{\{a_i\}}^{\{x_i\}}p(\{a_i\}_{i=1}^N,\{x_i\}_{i=1}^N)\\
&=\int\mathrm{d}\lambda\rho(\lambda)\Big(\ell\sum_{\beta_{\{a_i\}}^{\{x_i\}}\geq 0}\beta_{\{a_i\}}^{\{x_i\}}p(\{x_i\}_{i=1}^N|\lambda) \prod_ip(a_i|x_i\lambda)+h\sum_{\beta_{\{a_i\}}^{\{x_i\}}<0}\beta_{\{a_i\}}^{\{x_i\}}p(\{x_i\}_{i=1}^N|\lambda) \prod_ip(a_i|x_i\lambda)\Big)\\
&\leq \ell h\sum_{\{a_i\}_{i=1}^N,\{x_i\}_{i=1}^N}\beta_{\{a_i\}}^{\{x_i\}}\int\mathrm{d}\lambda\rho(\lambda)\prod_ip(a_i|x_i\lambda)\\ &\leq \ell h B. \end{align*}
To get to the second line we simply used the definition of an MDL correlation. The third line follows from $\ell\leq p(\{x_i\}_{i=1}^N|\lambda)\leq h$: for the terms with positive $\beta_{\{a_i\}}^{\{x_i\}}$ (which carry the prefactor $\ell$) we bound $p(\{x_i\}_{i=1}^N|\lambda)$ from above by $h$, and for the terms with negative $\beta_{\{a_i\}}^{\{x_i\}}$ (which carry the prefactor $h$) we bound it from below by $\ell$, so that every term acquires the factor $\ell h$. Finally, the last line follows from the fact that $\int\mathrm{d}\lambda\rho(\lambda)\prod_ip(a_i|x_i\lambda)$ is by definition a local distribution (\ref{local}) and therefore respects by assumption the Bell inequality (\ref{generalbellineq}).\\
We can use this to generate MDL inequalities from all known Bell inequalities. In fact, inequality (\ref{goldenineq}) could have been derived this way by starting from the Eberhard inequality (\ref{Eberhard}). Since Bell inequalities have been a focus of research for several decades, inequalities have been found in many scenarios and for many purposes. All of these inequalities can, using the technique demonstrated here, directly be transformed into valid MDL inequalities. Even further, since local correlations are nonsignaling (\ref{nspolytope}) and thereby respect equalities of the form \begin{align} \label{NScond}
\sum_{a_j}p(\{a_i\}_{i=1}^N|x_1\ldots x_j\ldots x_N)=\sum_{a_j}p(\{a_i\}_{i=1}^N|x_1\ldots x_j'\ldots x_N)\quad \forall x_j,x_j',\{x_i\}_{i\neq j} \end{align} one can write Bell inequalities in many different forms equivalent under the nonsignaling constraints. For example, the Eberhard inequality (\ref{Eberhard}) is equivalent to the CHSH inequality \begin{align} \label{CHSH}
\sum_{abxy}(-1)^{(a\oplus b)\oplus xy}p(ab|xy)\leq 2 \end{align} given that the nonsignaling conditions are satisfied. However, since the MDL polytope contains signaling distributions and thereby does not satisfy the nonsignaling conditions (\ref{NScond}), all of these forms of an inequality, which are equivalent for local correlations, lead to different valid MDL inequalities via the procedure described above. Specifically, the CHSH inequality (\ref{CHSH}) becomes the MDL inequality \begin{align} \ell\sum_{(a\oplus b)\oplus xy = 0}p(abxy) - h\sum_{(a\oplus b)\oplus xy = 1}p(abxy)\leq 2\ell h. \end{align}
Of course, by construction, MDL inequalities derived in this manner can only be useful if $\ell>0$, otherwise they are trivially satisfied by all correlations. In the $(2,2,2)$ scenario this may be fine since the $\ell=0$ case is not of interest anyway. In more general scenarios however, there can be MDL inequalities which are of interest even for $\ell=0$. We mention such an example in the next subsection.\\
There is another way to use Bell inequalities in an MDL scenario: one can simply check what value MDL correlations can achieve for a given Bell inequality and fixed $\ell$ and $h$. Since Bell inequalities are linear, the largest value of a Bell inequality is realized by one of the vertices and it is therefore sufficient to check the values of the Bell inequality for all the MDL vertices. For the case of $h=1-3\ell$ and the CHSH inequality (\ref{CHSH}), we find\footnote{Note that the bound is not actually smaller than the local bound of 2. The switch from conditional probabilities $p(ab|xy)$ to full probabilities $p(abxy)$ involves multiplying by $p(xy)$.} \begin{align} \sum_{abxy}(-1)^{(a\oplus b)\oplus xy}p_{MDL}(abxy)\leq 1-2\ell. \end{align} This value can even be achieved by a mix of the vertices such that $p(xy)=\frac{1}{4}$, meaning that the inputs appear to be fully random. For this case, the best quantum value is given by~\cite{Tsirelson} \begin{align} \sum_{abxy}(-1)^{(a\oplus b)\oplus xy}p_{Q}(abxy)\leq \frac{1}{\sqrt{2}}. \end{align} This implies that for $\ell\leq \frac{2-\sqrt{2}}{4}$, quantum mechanical correlations can no longer violate this inequality. Adjusting the bound of a standard Bell inequality in this way is therefore clearly suboptimal compared to a genuine MDL inequality such as (\ref{goldenineq}), which can be violated for any $\ell>0$. This shows the usefulness of working with the MDL polytope directly instead of using Bell inequalities for the standard local polytope with an adjusted bound.
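The vertex optimization just described is easily reproduced numerically; the following illustrative Python sketch (ours) enumerates the MDL vertices for $h=1-3\ell$ in the $(2,2,2)$ scenario and recovers the bound $1-2\ell$ on the CHSH expression in terms of full probabilities.

\begin{verbatim}
from itertools import product

def max_chsh_over_mdl_vertices(ell):
    """Maximize sum_{abxy} (-1)^((a XOR b) XOR x*y) p(abxy) over the MDL
    vertices with h = 1 - 3*ell; the expected maximum is 1 - 2*ell."""
    h = 1 - 3 * ell
    best = -float("inf")
    # Input vertex: one input pair gets weight h, the other three get ell.
    for heavy in product([0, 1], repeat=2):
        w = {(x, y): (h if (x, y) == heavy else ell)
             for x, y in product([0, 1], repeat=2)}
        # Deterministic local strategy a(x), b(y).
        for a0, a1, b0, b1 in product([0, 1], repeat=4):
            val = sum((-1) ** (((a0, a1)[x] ^ (b0, b1)[y]) ^ (x * y)) * w[(x, y)]
                      for x, y in product([0, 1], repeat=2))
            best = max(best, val)
    return best

print(round(max_chsh_over_mdl_vertices(0.1), 12))  # 0.8 = 1 - 2*0.1
\end{verbatim}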
\subsection{The (2,n,2) scenario} One of the major points for the (2,2,2) scenario is that MDL correlations with $\ell=0$ can reproduce all quantum and nonsignaling correlations. The question remained whether $\ell>0$ is a necessary assumption to be able to show quantum measurement dependent nonlocality in general. Adding additional inputs to each party allows us to answer this question in the negative: it is possible to violate MDL inequalities even for $\ell=0$. Consider the (2,n,2) scenario. As in the (2,2,2) scenario, we call the two parties Alice and Bob and refer to their inputs and outputs by $x$ and $y$ and $a$ and $b$ respectively. We can then construct the following inequality, which holds for all MDL correlations $p_{ABXY}$: \begin{align} \label{nmineq} \big(1-(n^2-n+1)h\big)p(0000)-h\sum_{i=1}^{n-1}\big(p(10i0)+p(010i)+p(00ii)\big)\leq 0. \end{align}
We are using inequality (\ref{goldenineq}) several times, between input $0$ and input $i$ for all $i\geq 1$. The factor $\big(1-(n^2-n+1)h\big)$ makes the inequality trivially satisfied if $h\geq\frac{1}{n^2-n+1}$. If $h<\frac{1}{n^2-n+1}$, then for every $\lambda$ at most $n-2$ input pairs can be excluded, i.e. have 0 probability, due to the normalization of probabilities (\ref{normalization}). We are therefore guaranteed that $\exists i\in\{1\ldots n-1\}$ such that $p(xy|\lambda)\geq 1-(n^2-n+1)h>0$ for $xy=(0i),(i0),(ii)$. We can then restrict ourselves to these two inputs ($0$ and $i$) and the analysis reduces to the case of $\ell= 1-(n^2-n+1)h>0$ for binary inputs, in which case the inequality corresponds to (\ref{goldenineq}). Every $\lambda$ may have a different $i$ for which this analysis can be done, but the inequality will still hold.\\
This generalized inequality can be violated by quantum mechanics, using the same state and measurements as for inequality (\ref{goldenineq}) and simply performing, for every input $i\in\{1\ldots n-1\}$, the same measurement as for input $1$. Of course, labelling the inputs as different even though the same measurement is performed may be questionable, but it nonetheless shows that $\ell>0$ is not a necessary condition for quantum mechanics to be measurement dependent nonlocal.\\
\subsection{The (N,2,2) scenario} Let us turn our attention towards the case of $N$ parties, with binary inputs and outputs for each party. Here we will employ the procedure presented in section \ref{belltomdl}. We start with the Bell inequality \begin{align} \label{Nbellineq}
p(0\ldots 0|0\ldots 0)-\sum_{j=1}^N p(0\ldots 0\underbrace{1}_j0\ldots 0|0\ldots 0\underbrace{1}_j0\ldots 0) -p(0\ldots 0|1\ldots 1)\leq 0. \end{align}
Let us briefly show that this inequality holds for all local correlations. Since it is a linear inequality and the local set forms a polytope, its maximal value will be realized by a vertex. The vertices of the local polytope have the properties that they factorize, i.e. $p_{\{A_i\}|\{X_i\}}=\prod_i p_{A_i|X_i}$ and that they are deterministic, i.e. $p(\{a_i\}|\{x_i\})\in \{0,1\}$. To violate the inequality, they therefore have to fulfil \begin{align*}
\prod_{i=1}^Np_{A_i|X_i}(0|0)&=1\\
p_{A_j|X_j}(1|1)\prod_{i\neq j}p_{A_i|X_i}(0|0)&=0\quad\forall j\in\{1\ldots N\}\\
\prod_{i=1}^Np_{A_i|X_i}(0|1)&=0. \end{align*}
From the first line it follows that $p_{A_i|X_i}(0|0)=1$ $\forall i$. With this, the second line implies that $p_{A_i|X_i}(1|1)=0$ $\forall i$, which due to the normalization of probabilities implies that $p_{A_i|X_i}(0|1)=1$ $\forall i$. Given this however it follows that $\prod p_{A_i|X_i}(0|1)=1$, which contradicts the third condition. Therefore none of the vertices of the local polytope can violate the linear inequality (\ref{Nbellineq}), implying that no local distribution can.\\
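The same argument can be checked by brute force for small $N$: the following illustrative Python sketch (ours) evaluates the left-hand side of (\ref{Nbellineq}) on all $4^N$ deterministic local strategies and confirms that it never exceeds $0$.

\begin{verbatim}
from itertools import product

def max_over_deterministic(N):
    """Maximum of the left-hand side of the N-party Bell inequality over all
    4^N deterministic local strategies a_i(x_i); it should never exceed 0."""
    best = -float("inf")
    # A strategy assigns to each party an output for input 0 and for input 1.
    for strategy in product(product([0, 1], repeat=2), repeat=N):
        def p(outputs, inputs):   # deterministic p(a_1...a_N | x_1...x_N)
            return int(all(strategy[i][inputs[i]] == outputs[i]
                           for i in range(N)))
        lhs = p((0,) * N, (0,) * N)
        for j in range(N):        # terms with input and output 1 at party j
            flipped = tuple(1 if i == j else 0 for i in range(N))
            lhs -= p(flipped, flipped)
        lhs -= p((0,) * N, (1,) * N)
        best = max(best, lhs)
    return best

for N in range(2, 6):
    print(N, max_over_deterministic(N))   # prints 0 for every N
\end{verbatim}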
We can now use the procedure shown in section \ref{belltomdl}. We arrive at the MDL inequality \begin{align}
\ell p(0\ldots 0,0\ldots 0)-h\Big(\sum_{j=1}^N p(0\ldots 0\underbrace{1}_j0\ldots 0,0\ldots 0\underbrace{1}_j0\ldots 0) +p(0\ldots 0,1\ldots 1)\Big)\leq 0. \end{align}
This inequality can be violated $\forall N$, $\forall \ell>0$ and $\forall h$ by the state \begin{align} \ket{\Psi_N}=\frac{1}{\sqrt{N+1}}\big(\ket{0\ldots 01}+\ldots+\ket{10\ldots 0}-\ket{1\ldots 1}\big) \end{align} if all $N$ parties perform measurements in the basis $\big\{\frac{\ket{0}+\ket{1}}{\sqrt{2}},\frac{\ket{0}-\ket{1}}{\sqrt{2}}\big\}$ for input 0 and in the basis $\{\ket{0},\ket{1}\}$ for input 1. The left-hand side of inequality (\ref{Nbellineq}) evaluates, for these correlations, to $\frac{(N-1)^2}{(N+1)2^N}$.
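For completeness, a small numerical sketch (ours, for illustration; it simply implements the state and measurement bases quoted above) that reproduces this value of the left-hand side of (\ref{Nbellineq}) for a few $N$:

\begin{verbatim}
import numpy as np
from functools import reduce

def lhs_quantum(N):
    """Left-hand side of the N-party Bell inequality for |Psi_N>, with the
    +/- basis for input 0 and the computational basis for input 1."""
    dim = 2 ** N
    psi = np.zeros(dim)
    for j in range(N):                 # |0...010...0> with the 1 at party j
        psi[2 ** (N - 1 - j)] = 1.0
    psi[dim - 1] = -1.0                # the -|1...1> term
    psi /= np.sqrt(N + 1)

    plus = np.array([1.0, 1.0]) / np.sqrt(2)
    minus = np.array([1.0, -1.0]) / np.sqrt(2)
    zero, one = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    vec = {(0, 0): plus, (0, 1): minus, (1, 0): zero, (1, 1): one}

    def prob(outputs, inputs):         # p(a_1...a_N | x_1...x_N)
        phi = reduce(np.kron, [vec[(inputs[i], outputs[i])] for i in range(N)])
        return abs(phi @ psi) ** 2

    lhs = prob((0,) * N, (0,) * N)
    for j in range(N):
        flipped = tuple(1 if i == j else 0 for i in range(N))
        lhs -= prob(flipped, flipped)
    lhs -= prob((0,) * N, (1,) * N)
    return lhs

for N in range(2, 6):
    print(N, lhs_quantum(N), (N - 1) ** 2 / ((N + 1) * 2 ** N))  # both agree
\end{verbatim}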
\section{Independent sources} \label{secindsources} Let us reconsider the MDL-scenario as it is depicted in figure \ref{figmdlscenario}. There is an intuitive assumption which we have not imposed so far: any dependence between the inputs is due to the common past $\Lambda$. Mathematically, this means that conditioned on $\Lambda$, the input distribution factorizes: \begin{align} \label{indsource}
p_{XY|\Lambda}=p_{X|\Lambda}p_{Y|\Lambda}. \end{align} This is a very natural assumption, especially when we consider the common past to be in the hands of an adversary. We call eq. (\ref{indsource}) the \textit{independent source} assumption. For the more general scenario of $N$ parties, it is given by \begin{align}
p_{\{X_i\}|\Lambda}=\prod_{i=1}^N p_{X_i|\Lambda}. \end{align} The measurement dependence assumption is then given by \begin{align} \label{mdlis}
\ell_i\leq p(x_i|\lambda)\leq h_i\quad\forall i,x_i,\lambda. \end{align}
Notice that we allow for different bounds being imposed for the different parties. The set of possible $p_{X_i|\Lambda}$ forms a polytope since only linear inequalities are imposed, and we can therefore use the polytope theorem \ref{polytopethm} to conclude that the resulting set of possible input distributions forms again a polytope. As a consequence, the set of independent source measurement dependent local correlations is also a polytope and can be described by a finite set of linear inequalities. For the case of 2 parties with binary inputs and outputs, we give a list of facets in the appendix, which, as in section \ref{222}, we conjecture to be complete. Note that if the input of party $i$ is binary, it follows that $\ell_i=1-h_i$ due to normalization.\\
It is worth noting that for given $\{\ell_i\}_i$ and $\{h_i\}_i$, the set of independent sources MDL distributions is included in the set of MDL correlations with $\ell=\prod_i\ell_i$ and $h=\prod_i h_i$. In the case of 2 parties with binary inputs and outputs, all pure partially entangled quantum states thus allow for correlations which cannot be reproduced by independent sources MDL distributions if $\ell_i>0$ $\forall i$. We investigated whether with this additional assumption 2-qubit maximally entangled states could produce such correlations as well. However, our numerical optimizations yielded no such results, and we conjecture that for some critical $\ell_i>0$ independent sources MDL resources can reproduce all binary input and outcome correlations of the maximally entangled 2-qubit state.
\section{Measurement dependence and signaling} We noted at several points that measurement dependent correlations are not nonsignaling. By that we mean that a measurement dependent correlation does not necessarily satisfy the nonsignaling conditions\footnote{For simplicity, we only talk about the bipartite scenario here.} \begin{align}
p(a|xy)=p(a|xy')\quad\forall a,x,y,y', \end{align} and similarly for Bob. If these conditions are not satisfied, then it is possible for one party to gain information about the input of the other by only looking at their own input and output. To see that measurement dependent local correlations do not necessarily satisfy the nonsignaling conditions, it suffices to consider the definition of the marginals: \begin{align*}
p(a|xy)&=\frac{\sum_bp(abxy)}{p(xy)}\\
&=\int\mathrm{d}\lambda\rho(\lambda)\frac{p(xy|\lambda)}{p(xy)}p(a|x\lambda), \end{align*} which in general is not independent of $y$. Usually, one would say that this allows for communication that is faster than light or without physical support. We would like to stress that this conclusion does not hold here. In fact, reconsidering the definition of measurement dependent local correlations, it becomes clear that by definition, if $\lambda$ is known, then the nonsignaling conditions are satisfied, i.e. \begin{align}
p(a|xy\lambda)=p(a|xy'\lambda)\quad\forall a,x,y,y',\lambda. \end{align} It is thus evident that no faster than light communication is happening here. Instead it can simply be seen as the common source $\Lambda$ influencing and potentially correlating the four random variables $A, B, X$ and $Y$. This is an example of a case where the illusion of faster than light communication arises due to ignorance, in this case of the local hidden variable $\Lambda$. The same conclusion holds for the case of independent sources described in section \ref{secindsources}.\\
If one is certain however that the nonsignaling conditions should be fulfilled for some observed correlations, then our characterization of the MDL polytope can still easily be used for the analysis. To do so, note that adding linear equality constraints to the constraints of a polytope results in a new polytope. Geometrically, it corresponds to cutting a polytope with the hyperplane defined by the additional linear equality constraints. Given that the facets, i.e. the hyperplanes describing the original polytope, are known, finding the description of the new polytope corresponds to computing the intersection of the facets with the constraint-hyperplane: a case of variable elimination. In this way, one gets the facets of the new polytope with additional superfluous hyperplanes, which can be eliminated by a linear program.
We performed this operation to get the nonsignaling measurement dependent local polytope for the case of bipartite correlations with binary inputs and outputs and $\frac{1}{4}< h<\frac{1}{3}$, $\ell=1-3h$ in our previous work~\cite{Putz14}. The total number of facets reduced drastically from 93 to 8.
\section{Limited detection locality} \label{LDL} Let us mention another way in which local resources can reproduce correlations that are seemingly nonlocal. Consider an experiment in which the particles used to establish the correlations can be lost. One may, in such a case, want to simply ignore the cases in which such a loss occurred, i.e. postselect on having a detection for all the involved parties. This however opens up the detection loophole. What could happen in fact is that the particles have the hidden common strategy to simply not give a detection if they do not like the input they see once they arrive in the respective measurement apparatus. If these cases are simply ignored, then this corresponds to the experimenters letting the particles choose which input they reply to, which leads us back to the problem of measurement dependence.\\
The usual way to deal with this issue is to either have a sufficiently large overall efficiency, such that this cheating strategy can be excluded, or to simply ignore it. Given that the necessary detection efficiencies are usually very high, this seems very unsatisfactory. We recently introduced the assumption of \textit{limited detection efficiency}\cite{Putz15}. It corresponds to assuming that in each run the detection probability cannot be fully influenced by the common hidden strategy. In other words, if we denote by $\varnothing$ a nondetection event, then we assume that \begin{align}
\eta_{min}\leq p(A_i\neq\varnothing|x_i\lambda)\leq\eta_{max} \end{align} for a fixed $[\eta_{min},\eta_{max}]\subsetneqq [0,1]$. Amongst the interesting results is the fact that the set of correlations realizable by postselected local distributions with this assumption forms a polytope. We do not discuss these results here and instead refer to \cite{Putz15}.\\
For the purposes of this paper, it is however interesting that the connection between limited detection and measurement dependence, as intuitively described above, can be made mathematically explicit. In fact, it turns out that any set of postselected limited detection local distributions can also be realized by measurement dependent local distributions and even further, we can analyze them both together. This is captured by the following theorem: \begin{thm} Let
\begin{align*}\mathcal{P}(\ell,h,\eta_{min},\eta_{max})=\Big\{p :& p(\{a_i\}_{i=1}^N,\{x_i\}_{i=1}^N)=\int\mathrm{d}\lambda\rho(\lambda)p(\{x_i\}_{i=1}^N|\lambda)\prod_{i=1}^N p(a_i|x_i\lambda)\\
&\ell\leq p(\{x_i\}_{i=1}^N|\lambda)\leq h\quad\forall \{x_i\}_{i=1}^N,\lambda\\
&\eta_{min}\leq p(a_i\neq\varnothing|x_i\lambda)\leq\eta_{max}\quad\forall i,a_i,x_i,\lambda\Big\} \end{align*} and \begin{align*}
\mathcal{PS}(\ell,h,\eta_{min},\eta_{max})=\Big\{q : \exists p\in\mathcal{P}(\ell,h,\eta_{min},\eta_{max})\text{ s.t. }q_{\{A_i\},\{X_i\}}=p_{\{A_i\},\{X_i\}|A_i\in\{1\ldots m_i\}\forall i}\Big\} \end{align*} Then $\mathcal{PS}(\ell,h,\eta_{min},\eta_{max})\subset \mathcal{MDL}\Big(\ell\big(\frac{\eta_{min}}{\eta_{max}}\big)^2,h\big(\frac{\eta_{max}}{\eta_{min}}\big)^2\Big)$. \end{thm}
The proof of this theorem can be found in \cite{Putz15}. If we want to guarantee that a certain correlation cannot be replicated by measurement dependent and limited detection local resources, we can instead focus only on measurement dependent resources with adjusted parameters $\ell'=\ell\big(\frac{\eta_{min}}{\eta_{max}}\big)^2$ and $h'=h\big(\frac{\eta_{max}}{\eta_{min}}\big)^2$. Thereby all the results introduced in this work can be applied.
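As a trivial usage sketch (illustrative code of ours), the adjusted parameters can be computed directly; for instance, with fully measurement independent inputs ($\ell=h=1/4$ in the bipartite binary-input case), a limited detection efficiency alone already induces an effective amount of measurement dependence:

\begin{verbatim}
def adjusted_mdl_parameters(ell, h, eta_min, eta_max):
    """Adjusted parameters l' = l*(eta_min/eta_max)^2 and
    h' = h*(eta_max/eta_min)^2 under which the MDL analysis covers the
    postselected limited-detection local set."""
    return ell * (eta_min / eta_max) ** 2, h * (eta_max / eta_min) ** 2

# Example values: a (2,2,2) scenario with slightly limited detection.
print(adjusted_mdl_parameters(0.25, 0.25, 0.9, 1.0))  # ~(0.2025, 0.3086)
\end{verbatim}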
\section{Conclusion} Bell nonlocality, one of the most fascinating features of quantum mechanics and the core concept behind the framework of device independent quantum information processing, relies on the untestable assumption of measurement independence. We relaxed this assumption and found that the correlations we get with our parameterized relaxation, which we refer to as measurement dependent local correlations, form a convex polytope, a geometric structure that can be fully characterized by a finite number of linear inequalities. We found a set of inequalities, which we conjecture to be complete, for the case of bipartite correlations with binary inputs and outputs. Using one of these inequalities, we found that, surprisingly, all bipartite pure states except for the separable states and the maximally entangled state can exhibit nonlocality for arbitrarily large measurement dependence, i.e. an arbitrarily small amount of free choice. This is again an example of nonlocality and entanglement not being in one-to-one correspondence~\cite{Brunner05}.
Furthermore, we presented a method to transform any Bell inequality into a valid MDL inequality, which allows us to use all of the already established Bell inequalities for our purposes. Using this, we gave a family of MDL inequalities for $N$ parties with binary inputs and outputs. We also presented a family of bipartite inequalities for an arbitrary number of inputs, with which we could show that our measurement dependence assumption can be relaxed even further while still allowing for quantum violations.
We presented the additional assumption of independent sources, which is very natural in the given framework. We showed that this case can be fully described using linear inequalities as well and gave a list of inequalities, conjectured to be complete, for the bipartite case with binary inputs and outputs.
Finally, we considered the more realistic scenario of a lossy experiment or implementation. We introduced the concept of limited detection locality. Interestingly, there exists a direct link between this limited detection scenario and the measurement dependence scenario.
Several open questions remain, which we exhibit here for potential future research. First, the definition of measurement dependence leads to considerations of randomness amplification or extraction. It would be of interest to see if the measurement dependent local inequalities presented here can be used to certify randomness, potentially in the bipartite case with binary inputs and outputs.
Second, it would be of interest to see whether the set of measurement dependent local distributions is closed under so-called wirings, i.e. when the outputs of one setup are used as the inputs for another.
Finally, the current framework assumes that the probability distribution describing a run of the experiment is known. In a real experiment, this is accomplished by checking frequencies over multiple runs. Our results apply straightforwardly if we assume that we do not need to take memory effects into account, i.e. that the runs are independent and identically distributed. Taking such memory effects into account is a topic we are currently working on and will present in future publications.
\section{Acknowledgments} We thank Tomer Barnea and Denis Rosset for discussions and acknowledge financial support from the European projects CHIST-ERA DIQIP and SIQS.
\appendix
\section{Proof of theorem 1} \begin{thm*} Let $\mathcal{P}\subset\mathbb{R}^{m'n}$ and $\mathcal{Q}\subset\mathbb{R}^{mn}$ be two polytopes. Let the components of the vectors of $\mathcal{P}$ be labelled by $p(k,l)$ and the ones of $\mathcal{Q}$ by $q(k,l')$ where $k=1\ldots n$, $l=1\ldots m$ and $l'=1\ldots m'$. Let \begin{align*} \mathcal{R}=\big\{r\in\mathbb{R}^{mm'n} : &\exists\Lambda\text{ measureable set},\\ &\exists\rho:\Lambda\rightarrow [0,1]\text{ with }\rho(\lambda)\geq 0\forall\lambda\in\Lambda,\int_\Lambda\mathrm{d}\lambda\rho(\lambda)=1\\ &\exists \{p_\lambda\}_{\lambda\in\Lambda}\subset\mathcal{P}, \quad\exists \{q_{\lambda}\}_{\lambda\in\Lambda}\subset\mathcal{Q} \text{ s.t.}\\ &r(k,l,l')=\int\mathrm{d}\lambda\rho(\lambda)p_\lambda(k,l)q_{\lambda}(k,l')\quad \forall k\in\{1\ldots n\},l\in\{1\ldots m\},l'\in\{1\ldots m'\}\big\}. \end{align*} Let $\mathcal{V_P}$ and $\mathcal{V_Q}$ be the sets of vertices of $\mathcal{P}$ and $\mathcal{Q}$ respectively.\\ Then $\mathcal{R}$ is a polytope and its set of vertices $\mathcal{V_R}$ fulfils \begin{align*}
\mathcal{V_R}\subset\big\{v_\mathcal{R}^{ij} : v_\mathcal{R}^{ij}(k,l,l')=v_\mathcal{P}^i(k,l)v_\mathcal{Q}^j(k,l')\quad i\in\{1\ldots |\mathcal{V_P}|\},j\in\{1\ldots |\mathcal{V_Q}|\}\big\}. \end{align*} \end{thm*}
\textbf{Proof:} To prove the theorem, we show the following 3 steps: \begin{enumerate} \item $\mathcal{R}$ is convex. \item $\mathcal{V_R}\subset\mathcal{R}$ \item $\mathcal{R}\subset\text{Conv}(\mathcal{V_R})$, where $\text{Conv}$ denotes the convex hull. \end{enumerate}
\textit{Step 1: }Let $r_1,r_2\in\mathcal{R}$, $\alpha\in[0,1]$ and $r=\alpha r_1+(1-\alpha)r_2$. Then \begin{align*} r(k,l,l')&=\alpha r_1(k,l,l')+(1-\alpha)r_2(k,l,l')\\ &=\int_{\Lambda_1}\mathrm{d}\lambda_1\alpha\rho_1(\lambda_1)p^1_{\lambda_1}(k,l)q^1_{\lambda_1}(k,l')+\int_{\Lambda_2}\mathrm{d}\lambda_2(1-\alpha)\rho_2(\lambda_2)p^2_{\lambda_2}(k,l)q^2_{\lambda_2}(k,l')\\ &=\int_\Lambda\mathrm{d}\lambda\rho(\lambda)p_\lambda(k,l)q_\lambda(k,l'), \end{align*} where we define $\Lambda=\Lambda_1\times\Lambda_2\times\{1,2\}$ and $\rho : \Lambda=\Lambda_1\times\Lambda_2\times\{1,2\}\rightarrow \mathbb{R}^+$ with \begin{align*} \int_{\Lambda_2}\mathrm{d}\lambda_2\rho(\lambda_1,\lambda_2,1)&=\alpha\rho_1(\lambda_1)\\ \int_{\Lambda_1}\mathrm{d}\lambda_1\rho(\lambda_1,\lambda_2,2)&=(1-\alpha)\rho_2(\lambda_2)\\ p_{(\lambda_1,\lambda_2,n)}&=p^n_{\lambda_n}\\ q_{(\lambda_1,\lambda_2,n)}&=q^n_{\lambda_n}\\ \end{align*} Note that $\int_\Lambda\mathrm{d}\lambda\rho(\lambda)=1$, $p_\lambda\in\mathcal{P}$, $q_\lambda\in\mathcal{Q}$ $\forall\lambda\in\Lambda$. We therefore conclude that $r\in\mathcal{R}$ and thus that $\mathcal{R}$ is convex.\\
\textit{Step 2: }Since $\mathcal{V_P}\subset\mathcal{P}$ and $\mathcal{V_Q}\subset\mathcal{Q}$, we conclude that by definition $\mathcal{V_R}\subset\mathcal{R}$.\\
\textit{Step 3: }Let $r\in\mathcal{R}$. Then \begin{align*} r(k,l,l')&=\int_\Lambda\mathrm{d}\lambda\rho(\lambda)p_\lambda(k,l)q_\lambda(k,l')\\ &=\int_\Lambda\mathrm{d}\lambda\rho(\lambda)\sum_i\alpha^i_\lambda v_\mathcal{P}^i\sum_j\beta^j_\lambda v_\mathcal{Q}^j\\ &=\sum_{i,j}\Big(\int_\Lambda\mathrm{d}\lambda\rho(\lambda)\alpha^i_\lambda\beta^j_\lambda\Big)v_\mathcal{P}^iv_\mathcal{Q}^j\\ &=\sum_{i,j}\gamma^{ij}v_\mathcal{R}^{ij}. \end{align*} In the first line we use the definition of $\mathcal{R}$, in the second line we use that $\mathcal{P}$ and $\mathcal{Q}$ are polytopes and thus that we can express their elements as convex combinations of their vertices. Note that therefore $\sum_i\alpha_\lambda^i=1$ $\forall\lambda$, $\sum_j\beta_\lambda^j=1$ $\forall\lambda$ and $\alpha_\lambda^i,\beta_\lambda^j\geq 0$ $\forall i,j,\lambda$. In the last line we defined $\gamma^{ij}=\int_\Lambda\mathrm{d}\lambda\rho(\lambda)\alpha^i_\lambda\beta^j_\lambda$. Note that $\gamma^{ij}\geq 0$ and $\sum_{i,j}\gamma^{ij}=1$.
We can thus write any element of $\mathcal{R}$ as a convex combination of elements of $\mathcal{V_R}$ and therefore $\mathcal{R}\subset\text{Conv}(\mathcal{V_R})$.\\
This concludes the proof.
\section{Full list of facets for the (2,2,2) scenario and some ranges of $\ell$ and $h$} In this section we give the full list of facets for the cases of $0<\ell<\frac{1}{4}$, $h=1-3\ell$, corresponding to the case where only a nontrivial lower bound is imposed, as well as $\frac{1}{4}<h<\frac{1}{3}$, $\ell=1-3h$, where only an upper bound is imposed. The tables contain the $\beta_{ab}^{xy}$ such that $\sum_{abxy}\beta_{ab}^{xy}p(abxy)\leq 0$ for all MDL($\ell,h$)-correlations $p_{ABXY}$.
\begin{sidewaystable}
\resizebox*{\textwidth}{!}{\begin{tabular}{c||c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c}
&$\beta_{00}^{00}$&$\beta_{10}^{00}$&$\beta_{01}^{00}$&$\beta_{11}^{00}$&$\beta_{00}^{10}$&$\beta_{10}^{10}$&$\beta_{01}^{10}$&$\beta_{11}^{10}$&$\beta_{00}^{01}$&$\beta_{10}^{01}$&$\beta_{01}^{01}$&$\beta_{11}^{01}$&$\beta_{00}^{11}$&$\beta_{10}^{11}$&$\beta_{01}^{11}$&$\beta_{11}^{11}$\\ \hline
1&$\frac{\ell (9 \ell-7)+2}{9 (\ell-1) \ell+2}$&$1+\frac{1}{3\ell-2}$&$1$&$-\frac{1}{\ell}+1+\frac{2}{2-3 \ell}$&$\frac{(\ell-1) (3 \ell-1)}{\ell (3\ell-2)}$&$1+\frac{1}{3 \ell-2}$&$1$&$\frac{\ell (9 \ell-8)+2}{9 (\ell-1)\ell+2}$&$\frac{\ell-1}{\ell}$&$\frac{(\ell-1) (3 \ell-1)}{\ell (3\ell-2)}$&$-\frac{1}{\ell}+1+\frac{1}{2-3 \ell}$&$1+\frac{1}{3 \ell-2}$&$1$&$1+\frac{1}{3\ell-2}$&$1+\frac{1}{3 \ell-2}$&$\frac{(\ell-1) (3 \ell-1)}{\ell (3 \ell-2)} $\\
2&$\frac{\ell (9 \ell-8)+2}{9 (\ell-1) \ell+2}$&$1+\frac{1}{3 \ell-2}$&$1$&$\frac{(\ell-1) (3\ell-1)}{\ell (3 \ell-2)}$&$\frac{(\ell-1) (3 \ell-1)}{\ell (3 \ell-2)}$&$1+\frac{1}{3\ell-2}$&$1+\frac{1}{3 \ell-2}$&$1$&$\frac{\ell-1}{\ell}$&$-\frac{1}{\ell}+1+\frac{2}{2-3\ell}$&$-\frac{1}{\ell}+1+\frac{2}{2-3 \ell}$&$1+\frac{1}{3 \ell-2}$&$\frac{\ell (9 \ell-8)+2}{9(\ell-1) \ell+2}$&$1$&$1+\frac{1}{3 \ell-2}$&$\frac{(\ell-1) (3 \ell-1)}{\ell (3\ell-2)} $\\
3&$-\frac{\ell (\ell (9 \ell-8)+2)}{(1-3 \ell)^2(\ell-1)}$&$-\frac{\ell}{\ell-1}$&$-\frac{\ell (3 \ell-2)}{(\ell-1) (3\ell-1)}$&$-1$&$-1$&$-\frac{\ell}{\ell-1}$&$-\frac{\ell}{\ell-1}$&$-\frac{\ell (3 \ell-2)}{(\ell-1)(3 \ell-1)}$&$-\frac{3 (\ell-2) \ell+2}{(\ell-1) (3\ell-1)}$&$-1$&$-\frac{\ell-2}{\ell-1}$&$-\frac{\ell}{\ell-1}$&$-\frac{\ell (3 \ell-2)}{(\ell-1)(3 \ell-1)}$&$-\frac{\ell}{\ell-1}$&$-\frac{\ell}{\ell-1}$&$-1$\\
4&$\frac{\ell (9 \ell-8)+2}{(1-3 \ell)^2}$&$1$&$1+\frac{1}{1-3\ell}$&$\frac{\ell-1}{\ell}$&$\frac{\ell-1}{\ell}$&$1$&$1$&$1+\frac{1}{1-3\ell}$&$\frac{\ell-2}{\ell}$&$1$&$\frac{1}{3\ell-1}+1-\frac{1}{\ell}$&$1$&$1$&$\frac{\ell-1}{\ell}$&$-\frac{1}{\ell}+1+\frac{1}{1-3\ell}$&$\frac{\ell-1}{\ell} $\\
5&$1+\frac{1}{1-3 \ell}$&$1$&$1$&$\frac{\ell-1}{\ell}$&$\frac{1}{3\ell-1}+1-\frac{1}{\ell}$&$\frac{\ell-1}{\ell}$&$\frac{\ell-1}{\ell}$&$1$&$\frac{\ell-1}{\ell}$&$-\frac{1}{\ell}+1+\frac{1}{1-3 \ell}$&$\frac{\ell-1}{\ell}$&$1$&$1+\frac{1}{1-3\ell}$&$1$&$\frac{\ell (9 \ell-5)+1}{(1-3 \ell)^2}$&$1 $\\
6&$\frac{\ell}{(\ell-3) \ell+1}+1$&$\frac{\ell^2}{(\ell-3) \ell+1}$&$1$&$\frac{\ell((\ell-9) \ell+6)-1}{\ell ((\ell-3) \ell+1)}$&$\frac{(\ell-1) \ell}{(\ell-3)\ell+1}$&$\frac{\ell^2}{(\ell-3) \ell+1}$&$\frac{\ell^2}{(\ell-3)\ell+1}$&$1$&$\frac{\ell-1}{\ell}$&$\frac{(\ell-5) (\ell-1) \ell-1}{\ell ((\ell-3)\ell+1)}$&$\frac{(\ell-5) (\ell-1) \ell-1}{\ell ((\ell-3)\ell+1)}$&$\frac{\ell^2}{(\ell-3) \ell+1}$&$\frac{\ell}{(\ell-3)\ell+1}+1$&$1$&$\frac{\ell^2}{(\ell-3) \ell+1}$&$\frac{(\ell-1) \ell}{(\ell-3)\ell+1} $\\
7&$\frac{\ell-1}{3 \ell-1}$&$0$&$1$&$-\frac{1-3 \ell}{\ell-2\ell^2}$&$3-\frac{1}{\ell}$&$0$&$1$&$\frac{1-2 \ell}{1-3\ell}$&$\frac{\ell-1}{\ell}$&$0$&$\frac{(\ell-3) \ell+1}{\ell (2 \ell-1)}$&$\frac{1-3\ell}{1-2 \ell}$&$1$&$0$&$\frac{1-3 \ell}{1-2 \ell}$&$\frac{(1-3 \ell)^2}{\ell (2\ell-1)} $\\
8&$\frac{(\ell-1) (2 \ell-1)}{(1-3 \ell)^2}$&$0$&$\frac{\ell-1}{3\ell-1}$&$\frac{\ell-1}{\ell}$&$2-\frac{1}{\ell}$&$0$&$1$&$\frac{\ell-1}{3\ell-1}$&$-\frac{\ell+1}{\ell}$&$1$&$\frac{(\ell-1) (2 \ell-1)}{\ell (3\ell-1)}$&$0$&$1$&$3-\frac{1}{\ell}$&$\frac{2 \ell}{1-3 \ell}$&$0 $\\
9&$\frac{(\ell-3) \ell+1}{(1-3 \ell)^2}$&$0$&$\frac{1-2 \ell}{1-3\ell}$&$\frac{\ell-1}{\ell}$&$2-\frac{1}{\ell}$&$0$&$1$&$\frac{\ell-1}{3 \ell-1}$&$\frac{(\ell-3)\ell+1}{\ell (3 \ell-1)}$&$0$&$-\frac{1}{\ell}$&$1$&$\frac{1-2 \ell}{1-3\ell}$&$0$&$1$&$3-\frac{1}{\ell} $\\
10&$\frac{1-2 \ell}{1-3\ell}$&$0$&$1$&$3-\frac{1}{\ell}$&$3-\frac{1}{\ell}$&$0$&$0$&$1$&$2-\frac{1}{\ell}$&$0$&$\frac{(\ell-1) (3\ell-1)}{\ell (2 \ell-1)}$&$\frac{1-3 \ell}{1-2 \ell}$&$1$&$0$&$\frac{1-3 \ell}{1-2\ell}$&$\frac{(1-3 \ell)^2}{\ell (2 \ell-1)} $\\
11&$\frac{1-2 \ell}{1-3 \ell}$&$0$&$1$&$3-\frac{1}{\ell}$&$\frac{(1-3 \ell)^2}{\ell (2\ell-1)}$&$0$&$\frac{1-3 \ell}{1-2 \ell}$&$1$&$\frac{(\ell-3) \ell+1}{\ell (2 \ell-1)}$&$\frac{(1-3\ell)^2}{\ell (2 \ell-1)}$&$-\frac{1-3 \ell}{\ell-2 \ell^2}$&$0$&$1$&$\frac{1-3 \ell}{1-2\ell}$&$\frac{1-3 \ell}{1-2 \ell}$&$0 $\\
12&$\frac{1-2 \ell}{1-3 \ell}$&$0$&$1$&$3-\frac{1}{\ell}$&$\frac{(1-3 \ell)^2}{\ell (2\ell-1)}$&$0$&$\frac{1-3 \ell}{1-2 \ell}$&$1$&$-\frac{1-3 \ell}{\ell-2\ell^2}$&$0$&$2-\frac{1}{\ell}$&$0$&$\frac{1-3 \ell}{1-2 \ell}$&$0$&$\frac{\ell}{1-2 \ell}$&$0 $\\
13&$\frac{1-2 \ell}{1-3 \ell}$&$0$&$1$&$3-\frac{1}{\ell}$&$\frac{(1-3 \ell)^2}{\ell (2\ell-1)}$&$0$&$\frac{1-3 \ell}{1-2 \ell}$&$1$&$\frac{\ell-1}{\ell}$&$3-\frac{1}{\ell}$&$-\frac{1-3\ell}{\ell-2 \ell^2}$&$0$&$\frac{\ell (7 \ell-5)+1}{\ell (6 \ell-5)+1}$&$1$&$\frac{1-3\ell}{1-2 \ell}$&$0 $\\
14&$\frac{\ell-1}{3 \ell-1}$&$0$&$1$&$2-\frac{1}{\ell}$&$3-\frac{1}{\ell}$&$0$&$1$&$\frac{2 \ell}{1-3\ell}$&$\frac{(\ell-1) (2 \ell-1)}{\ell (3\ell-1)}$&$\frac{\ell-1}{\ell}$&$\frac{\ell-1}{\ell}$&$0$&$\frac{\ell (8 \ell-5)+1}{(1-3\ell)^2}$&$\frac{\ell-1}{3 \ell-1}$&$1$&$0 $\\
15&$\frac{\ell-1}{3 \ell-1}$&$0$&$1$&$2-\frac{1}{\ell}$&$3-\frac{1}{\ell}$&$0$&$1$&$\frac{1-2 \ell}{1-3\ell}$&$\frac{(\ell-3) \ell+1}{\ell (3\ell-1)}$&$2-\frac{1}{\ell}$&$\frac{\ell-1}{\ell}$&$0$&$\frac{\ell (7 \ell-5)+1}{(1-3\ell)^2}$&$\frac{1-2 \ell}{1-3 \ell}$&$1$&$0 $\\
16&$\frac{\ell-1}{2 \ell-1}$&$0$&$1$&$\frac{5 (\ell-1) \ell+1}{\ell (2\ell-1)}$&$0$&$\frac{\ell^2}{\ell (6 \ell-5)+1}$&$\frac{\ell}{1-2\ell}$&$1$&$\frac{\ell-1}{\ell}$&$\frac{(\ell-1) (3 \ell-1)}{\ell (2\ell-1)}$&$\frac{(\ell-1) (3 \ell-1)}{\ell (2 \ell-1)}$&$0$&$\frac{\ell-1}{2\ell-1}$&$1$&$0$&$\frac{\ell}{2 \ell-1} $\\
17&$\frac{\ell (8 \ell-5)+1}{\ell (6 \ell-5)+1}$&$0$&$1$&$3-\frac{1}{\ell}$&$0$&$\frac{1-3 \ell}{1-2\ell}$&$1$&$\frac{\ell (7 \ell-5)+1}{\ell (6\ell-5)+1}$&$\frac{\ell-1}{\ell}$&$3-\frac{1}{\ell}$&$2-\frac{1}{\ell}$&$0$&$1$&$\frac{1-3 \ell}{1-2\ell}$&$0$&$\frac{(1-3 \ell)^2}{\ell (2 \ell-1)} $\\
18&$\frac{\ell (7 \ell-5)+1}{(1-3 \ell)^2}$&$1$&$\frac{1-2 \ell}{1-3\ell}$&$0$&$3-\frac{1}{\ell}$&$1$&$0$&$\frac{1-2 \ell}{1-3 \ell}$&$2-\frac{1}{\ell}$&$0$&$\frac{\ell}{3\ell-1}$&$1$&$0$&$2-\frac{1}{\ell}$&$\frac{5 (\ell-1) \ell+1}{\ell (3\ell-1)}$&$\frac{\ell-1}{\ell} $\\
19&$\frac{\ell (7 \ell-5)+1}{\ell (6 \ell-5)+1}$&$0$&$1$&$\frac{(1-3 \ell)^2}{\ell (2\ell-1)}$&$0$&$\frac{1-3 \ell}{1-2 \ell}$&$\frac{1-3 \ell}{1-2 \ell}$&$1$&$\frac{(\ell-3)\ell+1}{\ell (2 \ell-1)}$&$3-\frac{1}{\ell}$&$\frac{(\ell-1) (3 \ell-1)}{\ell (2\ell-1)}$&$0$&$1$&$\frac{1-3 \ell}{1-2 \ell}$&$0$&$\frac{(1-3 \ell)^2}{\ell (2 \ell-1)} $\\
20&$\frac{\ell (7 \ell-5)+1}{(1-3 \ell)^2}$&$1$&$\frac{1-2 \ell}{1-3\ell}$&$0$&$3-\frac{1}{\ell}$&$1$&$0$&$\frac{1-2 \ell}{1-3 \ell}$&$\frac{(1-2 \ell)^2}{\ell (3\ell-1)}$&$2-\frac{1}{\ell}$&$2-\frac{1}{\ell}$&$0$&$\frac{1-2 \ell}{1-3\ell}$&$0$&$0$&$2-\frac{1}{\ell} $\\
21&$1$&$0$&$\frac{1-3 \ell}{1-2 \ell}$&$\frac{(1-3 \ell)^2}{\ell (2 \ell-1)}$&$0$&$0$&$\frac{\ell}{1-2\ell}$&$\frac{1-3 \ell}{1-2 \ell}$&$2-\frac{1}{\ell}$&$3-\frac{1}{\ell}$&$3-\frac{1}{\ell}$&$0$&$1$&$\frac{2\ell (5 \ell-3)+1}{\ell (6 \ell-5)+1}$&$0$&$0 $\\
22&$\frac{1-2 \ell}{1-3 \ell}$&$0$&$1$&$3-\frac{1}{\ell}$&$\frac{\ell}{3\ell-1}$&$0$&$0$&$1$&$\frac{\ell-1}{\ell}$&$\frac{5 (\ell-1) \ell+1}{\ell (3\ell-1)}$&$2-\frac{1}{\ell}$&$0$&$\frac{1-2 \ell}{1-3 \ell}$&$1$&$\frac{\ell^2}{(1-3 \ell)^2}$&$0$\\
23&$-\frac{\ell (3 \ell-2)}{(\ell-1) (3\ell-1)}$&$-\frac{\ell}{\ell-1}$&$-\frac{\ell}{\ell-1}$&$-1$&$-\frac{\ell-2}{\ell-1}$&$-1$&$-1$&$-\frac{\ell}{\ell-1}$&$-1$&$-\frac{\ell}{\ell-1}$&$-\frac{\ell}{\ell-1}$&$-\frac{\ell (3\ell-2)}{(\ell-1) (3 \ell-1)}$&$-\frac{\ell (3 \ell-2)}{(\ell-1) (3\ell-1)}$&$-\frac{\ell}{\ell-1}$&$-\frac{\ell}{\ell-1}$&$-1$\\
24&$-\frac{\ell (3 \ell-2)}{(\ell-1) (3\ell-1)}$&$-\frac{\ell}{\ell-1}$&$-\frac{\ell}{\ell-1}$&$-1$&$-1$&$-1$&$-\frac{\ell}{\ell-1}$&$-\frac{\ell}{\ell-1}$&$-1$&$-\frac{\ell}{\ell-1}$&$-1$&$-\frac{\ell}{\ell-1}$&$-\frac{\ell}{\ell-1}$&$-\frac{\ell}{\ell-1}$&$-\frac{\ell}{\ell-1}$&$-\frac{\ell}{\ell-1} $\\
25&$-\frac{\ell (3 \ell-2)}{(\ell-1) (3\ell-1)}$&$-\frac{\ell}{\ell-1}$&$-\frac{\ell}{\ell-1}$&$-1$&$-1$&$-1$&$-\frac{\ell}{\ell-1}$&$-\frac{\ell}{\ell-1}$&$-1$&$-\frac{\ell}{\ell-1}$&$-\frac{\ell}{\ell-1}$&$-\frac{\ell (3 \ell-2)}{(\ell-1)(3 \ell-1)}$&$-\frac{\ell}{\ell-1}$&$-\frac{\ell}{\ell-1}$&$-1$&$-1$\\
26&$\frac{2 (2 \ell-1)}{3 \ell-1}$&$0$&$1$&$5-\frac{2}{\ell}$&$2-\frac{1}{\ell}$&$0$&$1$&$\frac{5 \ell-2}{3\ell-1}$&$4-\frac{2}{\ell}$&$0$&$\frac{\ell-1}{\ell}$&$1$&$\frac{1-2 \ell}{1-3\ell}$&$0$&$1$&$3-\frac{1}{\ell} $\\
27&$\frac{2 (2 \ell-1)}{3\ell-1}$&$0$&$1$&$\frac{\ell-1}{\ell}$&$2-\frac{1}{\ell}$&$0$&$1$&$\frac{\ell-1}{3\ell-1}$&$4-\frac{2}{\ell}$&$0$&$\frac{\ell-1}{\ell}$&$1$&$\frac{1-2 \ell}{1-3\ell}$&$0$&$1$&$3-\frac{1}{\ell} $\\
28&$\frac{5 \ell-2}{3 \ell-1}$&$0$&$1$&$2-\frac{1}{\ell}$&$3-\frac{1}{\ell}$&$0$&$1$&$\frac{1-2 \ell}{1-3\ell}$&$5-\frac{2}{\ell}$&$0$&$\frac{\ell-1}{\ell}$&$0$&$1$&$0$&$1$&$4-\frac{1}{\ell} $\\
29&$\frac{5 \ell-2}{3 \ell-1}$&$0$&$1$&$2-\frac{1}{\ell}$&$3-\frac{1}{\ell}$&$0$&$1$&$\frac{1-2 \ell}{1-3\ell}$&$5-\frac{2}{\ell}$&$0$&$2-\frac{1}{\ell}$&$1$&$1$&$0$&$0$&$3-\frac{1}{\ell} $\\
30&$\frac{1}{1-3 \ell}$&$0$&$1$&$\frac{\ell-1}{\ell}$&$2-\frac{1}{\ell}$&$0$&$1$&$\frac{\ell-1}{3\ell-1}$&$-\frac{1}{\ell}$&$0$&$\frac{\ell-1}{\ell}$&$1$&$\frac{1-2 \ell}{1-3\ell}$&$0$&$1$&$3-\frac{1}{\ell} $\\
31&$\frac{\ell-1}{2 \ell-1}$&$0$&$1$&$3-\frac{1}{\ell}$&$0$&$0$&$\frac{\ell}{1-2\ell}$&$1$&$\frac{\ell-1}{\ell}$&$\frac{(\ell-1) (3 \ell-1)}{\ell (2\ell-1)}$&$2-\frac{1}{\ell}$&$0$&$\frac{\ell-1}{2 \ell-1}$&$1$&$0$&$0 $\\
32&$1$&$0$&$2+\frac{1}{\ell-1}$&$\frac{(1-3 \ell)^2}{(\ell-1)\ell}$&$0$&$-\frac{\ell}{\ell-1}$&$-\frac{\ell}{\ell-1}$&$2+\frac{1}{\ell-1}$&$2-\frac{1}{\ell}$&$3-\frac{1}{\ell}$&$3-\frac{1}{\ell}$&$0$&$1$&$2+\frac{1}{\ell-1}$&$0$&$\frac{1-3 \ell}{\ell-1} $\\
33&$\frac{\ell-1}{2 \ell-1}$&$0$&$1$&$2-\frac{1}{\ell}$&$3-\frac{1}{\ell}$&$0$&$1$&$\frac{\ell}{1-2\ell}$&$\frac{\ell-1}{\ell}$&$\frac{(\ell-1) (3 \ell-1)}{\ell (2\ell-1)}$&$\frac{\ell-1}{\ell}$&$0$&$1$&$\frac{\ell-1}{2 \ell-1}$&$1$&$0 $\\
34&$\frac{\ell-1}{3 \ell-1}$&$0$&$1$&$2-\frac{1}{\ell}$&$3-\frac{1}{\ell}$&$0$&$1$&$\frac{1-2 \ell}{1-3\ell}$&$\frac{\ell-1}{\ell}$&$0$&$\frac{\ell-1}{\ell}$&$0$&$1$&$0$&$1$&$0 $\\
35&$\frac{\ell-1}{3 \ell-1}$&$0$&$1$&$2-\frac{1}{\ell}$&$3-\frac{1}{\ell}$&$0$&$1$&$\frac{1-2 \ell}{1-3\ell}$&$\frac{\ell-1}{\ell}$&$0$&$2-\frac{1}{\ell}$&$1$&$1$&$0$&$0$&$3-\frac{1}{\ell} $\\
36&$\frac{\ell-1}{3 \ell-1}$&$0$&$1$&$\frac{\ell-1}{\ell}$&$2-\frac{1}{\ell}$&$0$&$1$&$\frac{1-2 \ell}{1-3\ell}$&$\frac{\ell-1}{\ell}$&$0$&$\frac{\ell-1}{\ell}$&$1$&$1$&$0$&$1$&$3-\frac{1}{\ell} $\\
37&$1+\frac{1}{1-2 \ell}$&$1$&$1$&$-\frac{1}{\ell}+1+\frac{1}{1-2\ell}$&$\frac{\ell-1}{\ell}$&$-\frac{1}{\ell}+1+\frac{1}{1-2 \ell}$&$-\frac{1}{\ell}+1+\frac{1}{1-2\ell}$&$1$&$\frac{\ell-1}{\ell}$&$-\frac{1}{\ell}+1+\frac{1}{1-2\ell}$&$-\frac{1}{\ell}+1+\frac{1}{1-2 \ell}$&$1$&$1+\frac{1}{1-2 \ell}$&$1$&$1$&$\frac{2 \ell}{2\ell-1} $\\
38&$1+\frac{1}{1-2\ell}$&$1$&$1$&$\frac{\ell-1}{\ell}$&$\frac{\ell-1}{\ell}$&$\frac{\ell-1}{\ell}$&$-\frac{1}{\ell}+1+\frac{1}{1-2 \ell}$&$1$&$\frac{\ell-1}{\ell}$&$-\frac{1}{\ell}+1+\frac{1}{1-2\ell}$&$\frac{\ell-1}{\ell}$&$1$&$1+\frac{1}{1-2 \ell}$&$1$&$1$&$1 $\\
39&$\frac{1-2 \ell}{1-3\ell}$&$0$&$1$&$3-\frac{1}{\ell}$&$0$&$0$&$1$&$1$&$2-\frac{1}{\ell}$&$0$&$2-\frac{1}{\ell}$&$0$&$0$&$0$&$0$&$0 $\\
40&$\frac{1-2 \ell}{1-3\ell}$&$0$&$1$&$2-\frac{1}{\ell}$&$3-\frac{1}{\ell}$&$0$&$1$&$1$&$\frac{\ell-1}{\ell}$&$0$&$2-\frac{1}{\ell}$&$0$&$1$&$0$&$0$&$0 $\\
41&$\frac{1-2 \ell}{1-3\ell}$&$0$&$1$&$3-\frac{1}{\ell}$&$2-\frac{1}{\ell}$&$0$&$3-\frac{1}{\ell}$&$1$&$2-\frac{1}{\ell}$&$0$&$-1$&$1$&$\frac{1-2\ell}{1-3 \ell}$&$0$&$1$&$3-\frac{1}{\ell} $\\
42&$\frac{1-2 \ell}{1-3\ell}$&$0$&$1$&$3-\frac{1}{\ell}$&$3-\frac{1}{\ell}$&$0$&$0$&$1$&$\frac{\ell-1}{\ell}$&$3-\frac{1}{\ell}$&$2-\frac{1}{\ell}$&$0$&$\frac{1-2 \ell}{1-3 \ell}$&$1$&$1$&$0 $\\
\end{tabular}} \caption{Facets of the MDL($\ell,h$) polytope for $0<\ell<\frac{1}{4}$ and $h=1-3\ell$ (first part; continued in the next table). The facets are given by $\sum_{abxy}\beta_{ab}^{xy}p(abxy)\leq 0$.} \end{sidewaystable}
\begin{sidewaystable}
\resizebox*{\textwidth}{0.48\textheight}{\begin{tabular}{c||c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c}
&$\beta_{00}^{00}$&$\beta_{10}^{00}$&$\beta_{01}^{00}$&$\beta_{11}^{00}$&$\beta_{00}^{10}$&$\beta_{10}^{10}$&$\beta_{01}^{10}$&$\beta_{11}^{10}$&$\beta_{00}^{01}$&$\beta_{10}^{01}$&$\beta_{01}^{01}$&$\beta_{11}^{01}$&$\beta_{00}^{11}$&$\beta_{10}^{11}$&$\beta_{01}^{11}$&$\beta_{11}^{11}$\\ \hline
43&$\frac{1-2 \ell}{1-3\ell}$&$0$&$1$&$3-\frac{1}{\ell}$&$3-\frac{1}{\ell}$&$0$&$0$&$1$&$2-\frac{1}{\ell}$&$0$&$2-\frac{1}{\ell}$&$0$&$1$&$0$&$1$&$0$\\
44&$\frac{1-2 \ell}{1-3\ell}$&$0$&$1$&$3-\frac{1}{\ell}$&$4-\frac{1}{\ell}$&$0$&$1$&$1$&$2-\frac{1}{\ell}$&$0$&$3-\frac{1}{\ell}$&$1$&$0$&$0$&$-1$&$3-\frac{1}{\ell} $\\
45&$\frac{1-2 \ell}{1-3 \ell}$&$0$&$1$&$3-\frac{1}{\ell}$&$0$&$1$&$1$&$\frac{1-2 \ell}{1-3\ell}$&$-\frac{1}{\ell}$&$2-\frac{1}{\ell}$&$2-\frac{1}{\ell}$&$0$&$\frac{1-2 \ell}{1-3\ell}$&$1$&$0$&$3-\frac{1}{\ell} $\\
46&$\frac{1-2 \ell}{1-3 \ell}$&$0$&$1$&$3-\frac{1}{\ell}$&$0$&$1$&$1$&$\frac{1-2 \ell}{1-3\ell}$&$\frac{\ell-1}{\ell}$&$3-\frac{1}{\ell}$&$2-\frac{1}{\ell}$&$0$&$1$&$0$&$0$&$3-\frac{1}{\ell}$\\
47&$\frac{1-2 \ell}{1-3\ell}$&$1$&$1$&$0$&$2-\frac{1}{\ell}$&$3-\frac{1}{\ell}$&$3-\frac{1}{\ell}$&$0$&$3-\frac{1}{\ell}$&$0$&$-1$&$0$&$1$&$0$&$1$&$4-\frac{1}{\ell} $\\
48&$\frac{1-2 \ell}{1-3\ell}$&$1$&$1$&$0$&$3-\frac{1}{\ell}$&$0$&$0$&$1$&$3-\frac{1}{\ell}$&$0$&$0$&$1$&$0$&$3-\frac{1}{\ell}$&$3-\frac{1}{\ell}$&$2-\frac{1}{\ell} $\\
49&$\frac{1-2 \ell}{1-3\ell}$&$0$&$1$&$\frac{\ell-1}{\ell}$&$2-\frac{1}{\ell}$&$0$&$1$&$1$&$\frac{\ell-1}{\ell}$&$1$&$2-\frac{1}{\ell}$&$0$&$1$&$3-\frac{1}{\ell}$&$0$&$0 $\\
50&$\frac{1-2 \ell}{1-3\ell}$&$0$&$1$&$2-\frac{1}{\ell}$&$2-\frac{1}{\ell}$&$0$&$0$&$1$&$2-\frac{1}{\ell}$&$0$&$2-\frac{1}{\ell}$&$1$&$1$&$0$&$1$&$3-\frac{1}{\ell} $\\
51&$\frac{1-2 \ell}{1-3\ell}$&$0$&$1$&$3-\frac{1}{\ell}$&$0$&$0$&$1$&$1$&$\frac{\ell-1}{\ell}$&$3-\frac{1}{\ell}$&$2-\frac{1}{\ell}$&$0$&$1$&$1$&$0$&$0 $\\
52&$\frac{1-2 \ell}{1-3\ell}$&$0$&$1$&$2-\frac{1}{\ell}$&$3-\frac{1}{\ell}$&$0$&$1$&$1$&$\frac{\ell-1}{\ell}$&$3-\frac{1}{\ell}$&$\frac{\ell-1}{\ell}$&$0$&$1$&$1$&$1$&$0 $\\
53&$\frac{1-2 \ell}{1-3\ell}$&$0$&$1$&$3-\frac{1}{\ell}$&$4-\frac{1}{\ell}$&$0$&$1$&$1$&$\frac{\ell-1}{\ell}$&$-1$&$2-\frac{1}{\ell}$&$0$&$1$&$\frac{\ell}{1-3 \ell}$&$0$&$0 $\\
54&$\frac{1-2 \ell}{1-3 \ell}$&$0$&$1$&$-1$&$0$&$0$&$\frac{\ell}{1-3\ell}$&$1$&$\frac{\ell-1}{\ell}$&$3-\frac{1}{\ell}$&$2-\frac{1}{\ell}$&$0$&$1$&$1$&$0$&$4-\frac{1}{\ell}$\\
55&$1$&$4-\frac{1}{\ell}$&$1$&$0$&$0$&$1$&$0$&$\frac{\ell}{1-3\ell}$&$2-\frac{1}{\ell}$&$3-\frac{1}{\ell}$&$3-\frac{1}{\ell}$&$0$&$1$&$0$&$0$&$-1 $\\
56&$-\frac{\ell}{\ell-1}$&$-1$&$-1$&$-1$&$-1$&$-\frac{\ell}{\ell-1}$&$-\frac{\ell}{\ell-1}$&$-\frac{\ell}{\ell-1}$&$-1$&$-\frac{\ell}{\ell-1}$&$-\frac{\ell}{\ell-1}$&$-\frac{\ell}{\ell-1}$&$-\frac{\ell}{\ell-1}$&$-\frac{\ell}{\ell-1}$&$-\frac{\ell}{\ell-1}$&$-1$\\
57&$-\frac{\ell}{\ell-1}$&$-1$&$-1$&$-\frac{\ell}{\ell-1}$&$-1$&$-\frac{\ell}{\ell-1}$&$-\frac{\ell}{\ell-1}$&$-1$&$-1$&$-\frac{\ell}{\ell-1}$&$-\frac{\ell}{\ell-1}$&$-1$&$-1$&$-\frac{\ell}{\ell-1}$&$-\frac{\ell}{\ell-1}$&$-1$\\
58&$-\frac{\ell}{\ell-1}$&$-1$&$-\frac{\ell}{\ell-1}$&$-1$&$-\frac{\ell}{\ell-1}$&$-\frac{\ell}{\ell-1}$&$-\frac{\ell}{\ell-1}$&$-\frac{\ell}{\ell-1}$&$-1$&$-\frac{\ell}{\ell-1}$&$-1$&$-\frac{\ell}{\ell-1}$&$-\frac{\ell}{\ell-1}$&$-\frac{\ell}{\ell-1}$&$-\frac{\ell}{\ell-1}$&$-\frac{\ell}{\ell-1} $\\
59&$1$&$1$&$1$&$1$&$\frac{\ell-1}{\ell}$&$\frac{\ell-1}{\ell}$&$\frac{\ell-1}{\ell}$&$\frac{\ell-1}{\ell}$&$1$&$1$&$1$&$1$&$1$&$1$&$1$&$1 $\\
60&$1$&$\frac{\ell-1}{\ell}$&$3-\frac{1}{\ell}$&$0$&$2-\frac{1}{\ell}$&$1$&$0$&$2-\frac{1}{\ell}$&$\frac{\ell-1}{\ell}$&$1$&$1$&$0$&$3-\frac{1}{\ell}$&$1$&$0$&$\frac{\ell-1}{\ell} $\\
61&$1$&$3-\frac{1}{\ell}$&$1$&$0$&$1$&$1$&$0$&$1$&$\frac{\ell-1}{\ell}$&$2-\frac{1}{\ell}$&$2-\frac{1}{\ell}$&$0$&$1$&$1$&$0$&$3-\frac{1}{\ell} $\\
62&$1$&$3-\frac{1}{\ell}$&$1$&$0$&$1$&$1$&$0$&$1$&$2-\frac{1}{\ell}$&$0$&$0$&$1$&$0$&$3-\frac{1}{\ell}$&$2-\frac{1}{\ell}$&$2-\frac{1}{\ell} $\\
63&$1$&$\frac{\ell-1}{\ell}$&$0$&$0$&$3-\frac{1}{\ell}$&$1$&$0$&$2-\frac{1}{\ell}$&$\frac{\ell-1}{\ell}$&$1$&$0$&$0$&$3-\frac{1}{\ell}$&$1$&$0$&$2-\frac{1}{\ell} $\\
64&$1$&$2-\frac{1}{\ell}$&$1$&$0$&$0$&$1$&$0$&$3-\frac{1}{\ell}$&$\frac{\ell-1}{\ell}$&$0$&$2-\frac{1}{\ell}$&$3-\frac{1}{\ell}$&$0$&$1$&$1$&$0 $\\
65&$1$&$3-\frac{1}{\ell}$&$1$&$0$&$0$&$0$&$3-\frac{1}{\ell}$&$0$&$2-\frac{1}{\ell}$&$3-\frac{1}{\ell}$&$2-\frac{1}{\ell}$&$0$&$1$&$1$&$1$&$0 $\\
66&$1$&$0$&$0$&$3-\frac{1}{\ell}$&$2-\frac{1}{\ell}$&$3-\frac{1}{\ell}$&$3-\frac{1}{\ell}$&$0$&$0$&$1$&$0$&$1$&$1$&$0$&$1$&$0$\\
67&$1$&$2-\frac{1}{\ell}$&$0$&$0$&$3-\frac{1}{\ell}$&$1$&$0$&$3-\frac{1}{\ell}$&$\frac{\ell-1}{\ell}$&$0$&$0$&$0$&$0$&$1$&$0$&$2-\frac{1}{\ell} $\\
68&$1$&$3-\frac{1}{\ell}$&$1$&$0$&$0$&$1$&$0$&$0$&$2-\frac{1}{\ell}$&$3-\frac{1}{\ell}$&$2-\frac{1}{\ell}$&$0$&$1$&$0$&$0$&$0$\\
69&$1$&$0$&$0$&$3-\frac{1}{\ell}$&$3-\frac{1}{\ell}$&$0$&$0$&$1$&$3-\frac{1}{\ell}$&$0$&$0$&$1$&$1$&$0$&$0$&$3-\frac{1}{\ell}$\\
70&$1$&$0$&$1$&$0$&$0$&$0$&$0$&$0$&$2-\frac{1}{\ell}$&$3-\frac{1}{\ell}$&$3-\frac{1}{\ell}$&$0$&$1$&$1$&$0$&$0 $\\
71&$1$&$3-\frac{1}{\ell}$&$0$&$0$&$3-\frac{1}{\ell}$&$1$&$0$&$0$&$2-\frac{1}{\ell}$&$0$&$0$&$0$&$0$&$0$&$0$&$2-\frac{1}{\ell}$\\
72&$1$&$0$&$0$&$0$&$3-\frac{1}{\ell}$&$0$&$0$&$0$&$3-\frac{1}{\ell}$&$0$&$0$&$0$&$0$&$0$&$0$&$3-\frac{1}{\ell} $\\
73&$1$&$0$&$1$&$0$&$0$&$0$&$0$&$0$&$3-\frac{1}{\ell}$&$0$&$3-\frac{1}{\ell}$&$0$&$0$&$0$&$0$&$0 $\\
74&$0$&$1$&$0$&$0$&$0$&$0$&$0$&$0$&$0$&$0$&$0$&$0$&$0$&$0$&$0$&$0 $\\
\end{tabular}} \caption{Facets of the MDL($\ell,h$) polytope for $0<\ell<\frac{1}{4}$ and $h=1-3\ell$ (continued). The facets are given by $\sum_{abxy}\beta_{ab}^{xy}p(abxy)\leq 0$.}\label{h1m3l} \end{sidewaystable}
\begin{sidewaystable}
\resizebox*{\textwidth}{!}{\begin{tabular}{c||c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c} &$\beta_{00}^{00}$&$\beta_{10}^{00}$&$\beta_{01}^{00}$&$\beta_{11}^{00}$&$\beta_{00}^{10}$&$\beta_{10}^{10}$&$\beta_{01}^{10}$&$\beta_{11}^{10}$&$\beta_{00}^{01}$&$\beta_{10}^{01}$&$\beta_{01}^{01}$&$\beta_{11}^{01}$&$\beta_{00}^{11}$&$\beta_{10}^{11}$&$\beta_{01}^{11}$&$\beta_{11}^{11}$\\ \hline
1& $1$&$\frac{(h-5) (h-1) h-1}{(h-1) ((h-3) h+1)}$&$\frac{(h-5)(h-1) h-1}{(h-1) ((h-3) h+1)}$&$\frac{h^3}{(h-1) ((h-3)h+1)}$&$\frac{(h-1) h}{(h-3)h+1}$&$\frac{h}{h-1}$&$\frac{h^3}{(h-1) ((h-3) h+1)}$&$\frac{h((h-9) h+6)-1}{(h-1) ((h-3) h+1)}$&$\frac{(h-1) h}{(h-3)h+1}$&$\frac{h^3}{(h-1) ((h-3)h+1)}$&$\frac{h}{h-1}$&$\frac{h^2}{(h-3) h+1}$&$\frac{h^2}{(h-3)h+1}$&$\frac{h^3}{(h-1) ((h-3) h+1)}$&$\frac{h^3}{(h-1) ((h-3)h+1)}$&$\frac{h}{h-1} $ \\
2& $1$&$\frac{1-3 h}{2-3 h}$&$1-\frac{h}{3 h^2-5 h+2}$&$\frac{h (3h-1)}{(h-1) (3 h-2)}$&$\frac{h}{h-1}$&$\frac{h (3 h-1)}{(h-1) (3h-2)}$&$\frac{h (3 h-1)}{(h-1) (3 h-2)}$&$\frac{1-3 h}{2-3h}$&$\frac{h (h (9 h-7)+2)}{(h-1) (3 h-2) (3 h-1)}$&$\frac{h (3h-1)}{(h-1) (3 h-2)}$&$\frac{h}{h-1}$&$\frac{(h-2) (3 h-1)}{(h-1)(3 h-2)}$&$\frac{1-3 h}{2-3 h}$&$\frac{h (3 h-1)}{(h-1) (3h-2)}$&$\frac{h}{h-1}$&$\frac{h (h (9 h-8)+2)}{(h-1) (3 h-2) (3h-1)} $ \\
3& $1$&$\frac{(h-2) (3 h-1)}{(h-1) (3 h-2)}$&$\frac{(h-2) (3 h-1)}{(h-1)(3 h-2)}$&$\frac{h (3 h-1)}{(h-1) (3 h-2)}$&$\frac{h (h (9h-8)+2)}{(h-1) (3 h-2) (3 h-1)}$&$\frac{h}{h-1}$&$\frac{h (3h-1)}{(h-1) (3 h-2)}$&$\frac{1-3 h}{2-3 h}$&$\frac{h (h (9h-8)+2)}{(h-1) (3 h-2) (3 h-1)}$&$\frac{h (3 h-1)}{(h-1) (3h-2)}$&$\frac{h}{h-1}$&$\frac{1-3 h}{2-3 h}$&$\frac{1-3 h}{2-3h}$&$\frac{h (3 h-1)}{(h-1) (3 h-2)}$&$\frac{h (3 h-1)}{(h-1) (3h-2)}$&$\frac{h}{h-1} $ \\
4& $\frac{3 (h-2) h+2}{(h-1) (3
h-1)}$&$1$&$\frac{h-2}{h-1}$&$\frac{h}{h-1}$&$\frac{h (3 h-2)}{(h-1)(3 h-1)}$&$\frac{h}{h-1}$&$\frac{h}{h-1}$&$1$&$\frac{h (h (9h-8)+2)}{(1-3 h)^2 (h-1)}$&$\frac{h}{h-1}$&$\frac{h (3h-2)}{(h-1) (3 h-1)}$&$1$&$1$&$\frac{h}{h-1}$&$\frac{h}{h-1}$&$\frac{h(3 h-2)}{(h-1) (3 h-1)} $ \\
5& $\frac{3 (h-1) h+1}{h (3 h-1)}$&$1$&$\frac{h-2}{h}$&$1$&$\frac{h (3h-5)+1}{h (3 h-1)}$&$\frac{h-1}{h}$&$1$&$\frac{h-1}{h}$&$\frac{h (9h-8)+2}{(1-3 h)^2}$&$1$&$\frac{3 h-2}{3h-1}$&$\frac{h-1}{h}$&$\frac{h-1}{h}$&$1$&$1$&$\frac{3 h-2}{3 h-1} $ \\
6& $\frac{3 (h-1) h+1}{h (3h-1)}$&$\frac{h-1}{h}$&$\frac{h-1}{h}$&$1$&$\frac{h (9 h-5)+1}{(1-3h)^2}$&$\frac{3 h-2}{3 h-1}$&$1$&$1$&$\frac{3 h-2}{3h-1}$&$1$&$1$&$\frac{h-1}{h}$&$\frac{h-1}{h}$&$\frac{h-1}{h}$&$1$&$\frac{h(3 h-5)+1}{h (3 h-1)} $ \\
7& $\frac{h-1}{h}$&$1$&$0$&$\frac{h-1}{2 h-1}$&$\frac{h-1}{3h-1}$&$1$&$0$&$\frac{h}{1-2 h}$&$1$&$\frac{h-1}{h}$&$0$&$\frac{(h-1) (3h-1)}{h (2 h-1)}$&$\frac{h-1}{2 h-1}$&$1$&$0$&$\frac{1-2 h}{1-3 h} $ \\
8& $\frac{h-1}{h}$&$0$&$\frac{(h-1) (3 h-1)}{h (2 h-1)}$&$0$&$\frac{h-1}{2h-1}$&$1$&$0$&$1$&$\frac{h-1}{2 h-1}$&$0$&$1$&$2-\frac{1}{h}$&$0$&$\frac{h}{1-2 h}$&$\frac{1-2h}{1-3 h}$&$1 $ \\
9&$ \frac{(h-1) (2 h-1)}{h (3 h-1)}$&$0$&$-\frac{h+1}{h}$&$1$&$\frac{2h}{1-3 h}$&$0$&$1$&$3-\frac{1}{h}$&$\frac{(h-1) (2 h-1)}{(1-3h)^2}$&$0$&$\frac{h-1}{3h-1}$&$\frac{h-1}{h}$&$2-\frac{1}{h}$&$0$&$1$&$\frac{h-1}{3 h-1} $ \\
10&$\frac{h-1}{h}$&$0$&$\frac{(h-3) h+1}{h (2 h-1)}$&$\frac{1-3 h}{1-2h}$&$1$&$0$&$\frac{1-3 h}{1-2 h}$&$\frac{(1-3 h)^2}{h (2 h-1)}$&$\frac{h-1}{3h-1}$&$0$&$1$&$-\frac{1-3 h}{h-2 h^2}$&$3-\frac{1}{h}$&$0$&$1$&$\frac{1-2 h}{1-3h} $ \\
11&$ \frac{h-1}{h}$&$3-\frac{1}{h}$&$-\frac{1-3 h}{h-2 h^2}$&$0$&$\frac{h (7h-5)+1}{h (6 h-5)+1}$&$1$&$\frac{1-3 h}{1-2 h}$&$0$&$\frac{1-2 h}{1-3h}$&$0$&$1$&$3-\frac{1}{h}$&$\frac{(1-3 h)^2}{h (2 h-1)}$&$0$&$\frac{1-3 h}{1-2h}$&$1 $ \\
12&$\frac{h-1}{h}$&$3-\frac{1}{h}$&$2-\frac{1}{h}$&$0$&$1$&$\frac{1-3 h}{1-2h}$&$0$&$\frac{(1-3 h)^2}{h (2 h-1)}$&$\frac{h (8 h-5)+1}{h (6h-5)+1}$&$0$&$1$&$3-\frac{1}{h}$&$0$&$\frac{1-3 h}{1-2 h}$&$1$&$\frac{h (7h-5)+1}{h (6 h-5)+1} $ \\
13&$\frac{(h-1) (2 h-1)}{h (3h-1)}$&$\frac{h-1}{h}$&$\frac{h-1}{h}$&$0$&$\frac{h (8 h-5)+1}{(1-3h)^2}$&$\frac{h-1}{3 h-1}$&$1$&$0$&$\frac{h-1}{3h-1}$&$0$&$1$&$2-\frac{1}{h}$&$3-\frac{1}{h}$&$0$&$1$&$\frac{2 h}{1-3 h} $ \\
14&$ \frac{h-1}{h}$&$\frac{(h-1) (3 h-1)}{h (2 h-1)}$&$\frac{(h-1) (3h-1)}{h (2 h-1)}$&$0$&$\frac{h-1}{2 h-1}$&$1$&$0$&$\frac{5 (h-1) h+1}{h(2 h-1)}$&$\frac{h-1}{2 h-1}$&$0$&$1$&$\frac{h}{2 h-1}$&$0$&$\frac{h}{1-2h}$&$\frac{h^2}{h (6 h-5)+1}$&$1 $ \\
15&$2-\frac{1}{h}$&$3-\frac{1}{h}$&$3-\frac{1}{h}$&$0$&$1$&$\frac{2 h-1}{h-1}$&$0$&$\frac{(1-3h)^2}{(h-1) h}$&$1$&$0$&$\frac{2 h-1}{h-1}$&$\frac{1-3h}{h-1}$&$0$&$-\frac{h}{h-1}$&$-\frac{h}{h-1}$&$\frac{2h-1}{h-1} $ \\
16&$ \frac{(h-3) h+1}{h (3 h-1)}$&$0$&$-\frac{1}{h}$&$1$&$\frac{1-2 h}{1-3h}$&$0$&$1$&$3-\frac{1}{h}$&$\frac{(h-3) h+1}{(1-3 h)^2}$&$0$&$\frac{1-2 h}{1-3h}$&$\frac{h-1}{h}$&$2-\frac{1}{h}$&$0$&$1$&$\frac{h-1}{3 h-1} $ \\
17& $\frac{(h-3) h+1}{h (2 h-1)}$&$\frac{(1-3 h)^2}{h (2h-1)}$&$-\frac{1-3 h}{h-2 h^2}$&$0$&$1$&$\frac{1-3 h}{1-2 h}$&$\frac{1-3 h}{1-2h}$&$0$&$\frac{1-2 h}{1-3 h}$&$0$&$1$&$3-\frac{1}{h}$&$\frac{(1-3 h)^2}{h (2h-1)}$&$0$&$\frac{1-3 h}{1-2 h}$&$1 $ \\
18&$\frac{(h-3) h+1}{h (3h-1)}$&$2-\frac{1}{h}$&$\frac{h-1}{h}$&$0$&$\frac{h (7 h-5)+1}{(1-3h)^2}$&$\frac{1-2 h}{1-3 h}$&$1$&$0$&$\frac{h-1}{3h-1}$&$0$&$1$&$2-\frac{1}{h}$&$3-\frac{1}{h}$&$0$&$1$&$\frac{1-2 h}{1-3 h} $ \\
19&$\frac{(h-3) h+1}{h (2 h-1)}$&$3-\frac{1}{h}$&$\frac{(h-1) (3h-1)}{h (2 h-1)}$&$0$&$1$&$\frac{1-3 h}{1-2 h}$&$0$&$\frac{(1-3 h)^2}{h (2h-1)}$&$\frac{h (7 h-5)+1}{h (6 h-5)+1}$&$0$&$1$&$\frac{(1-3 h)^2}{h (2h-1)}$&$0$&$\frac{1-3 h}{1-2 h}$&$\frac{1-3 h}{1-2 h}$&$1 $ \\
20&$ \frac{(1-2 h)^2}{h (3 h-1)}$&$0$&$\frac{h-1}{h}$&$1$&$\frac{1-2 h}{1-3h}$&$0$&$1$&$3-\frac{1}{h}$&$\frac{(1-2 h)^2}{(1-3 h)^2}$&$0$&$\frac{1-2 h}{1-3h}$&$2-\frac{1}{h}$&$2-\frac{1}{h}$&$0$&$0$&$\frac{1-2 h}{1-3 h} $ \\
21&$2-\frac{1}{h}$&$0$&$-\frac{1-3 h}{h-2 h^2}$&$0$&$\frac{h}{1-2 h}$&$0$&$\frac{1-3h}{1-2 h}$&$0$&$\frac{1-2 h}{1-3 h}$&$0$&$1$&$3-\frac{1}{h}$&$\frac{(1-3 h)^2}{h(2 h-1)}$&$0$&$\frac{1-3 h}{1-2 h}$&$1 $ \\
22&$2-\frac{1}{h}$&$3-\frac{1}{h}$&$3-\frac{1}{h}$&$0$&$\frac{h (7 h-5)+1}{h (6h-5)+1}$&$1$&$\frac{1-3 h}{1-2 h}$&$0$&$1$&$0$&$0$&$3-\frac{1}{h}$&$\frac{(1-3 h)^2}{h (2h-1)}$&$0$&$\frac{1-3 h}{1-2 h}$&$1 $ \\
23&$2-\frac{1}{h}$&$3-\frac{1}{h}$&$3-\frac{1}{h}$&$0$&$\frac{2 h (5 h-3)+1}{h (6h-5)+1}$&$1$&$0$&$0$&$1$&$0$&$\frac{1-3 h}{1-2 h}$&$\frac{(1-3 h)^2}{h (2h-1)}$&$0$&$0$&$\frac{1-3 h}{1-2 h}$&$\frac{h}{1-2 h} $ \\
24&$\frac{h-1}{h}$&$\frac{2 (h-1)}{2 h-1}$&$1$&$\frac{2 (h-1)}{2 h-1}$&$\frac{2(h-1)}{2 h-1}$&$1$&$\frac{2 (h-1)}{2 h-1}$&$\frac{h-1}{h}$&$\frac{2 (h-1)}{2h-1}$&$\frac{h-1}{h}$&$1$&$\frac{2 (h-2) h+1}{h (2h-1)}$&$\frac{h-1}{h}$&$\frac{2 (h-1)}{2 h-1}$&$\frac{2 (h-2)h+1}{h (2 h-1)}$&$1 $ \\
25&$\frac{h-1}{h}$&$\frac{2 (h-1)}{2 h-1}$&$1$&$\frac{2 (h-1)}{2h-1}$&$1$&$\frac{h-1}{h}$&$\frac{h-1}{h}$&$\frac{h-1}{h}$&$\frac{2(h-1)}{2 h-1}$&$\frac{h-1}{h}$&$1$&$\frac{2 (h-2) h+1}{h (2h-1)}$&$\frac{2 (h-1)}{2 h-1}$&$\frac{2 (h-1)}{2 h-1}$&$1$&$\frac{3 h-2}{3h-1} $ \\
26&$\frac{h-1}{h}$&$\frac{2 (h-1)}{2 h-1}$&$1$&$1$&$1+\frac{1}{1-3 h}$&$\frac{2(h-1)}{2 h-1}$&$1$&$1+\frac{1}{1-3 h}$&$\frac{2 (h-1)}{2h-1}$&$\frac{h-1}{h}$&$\frac{h-1}{h}$&$\frac{h-1}{h}$&$1$&$\frac{h-1}{h}$&$1$&$\frac{2 (h-1)}{2 h-1} $ \\
27&$\frac{h-1}{h}$&$1$&$1$&$\frac{2 (h-1)}{2 h-1}$&$1$&$1$&$\frac{2 (h-2) h+1}{h(2 h-1)}$&$\frac{h-1}{h}$&$1$&$\frac{2 (h-2) h+1}{h (2h-1)}$&$1$&$\frac{h-1}{h}$&$1$&$1$&$1$&$\frac{2 (h-1)}{2 h-1} $ \\
28&$ \frac{h-1}{h}$&$\frac{2 (h-2) h+1}{h (2 h-1)}$&$\frac{2 (h-2)h+1}{h (2 h-1)}$&$1$&$\frac{2 (h-1)}{2 h-1}$&$1$&$1$&$\frac{2 (h-2)h+1}{h (2 h-1)}$&$\frac{2 (h-1)}{2 h-1}$&$1$&$1$&$\frac{2 h}{2h-1}$&$\frac{h-1}{h}$&$\frac{2 (h-2) h+1}{h (2 h-1)}$&$\frac{2(h-2) h+1}{h (2 h-1)}$&$1 $ \\
29&$ \frac{h-2}{h-1}$&$1$&$1$&$\frac{h}{h-1}$&$\frac{h (3 h-2)}{(h-1) (3h-1)}$&$\frac{h}{h-1}$&$\frac{h}{h-1}$&$1$&$\frac{h (3 h-2)}{(h-1) (3h-1)}$&$\frac{h}{h-1}$&$\frac{h}{h-1}$&$1$&$1$&$\frac{h}{h-1}$&$\frac{h}{h-1}$&$\frac{h (3 h-2)}{(h-1) (3 h-1)} $ \\
30&$\frac{h}{3 h-1}$&$1$&$2-\frac{1}{h}$&$0$&$\frac{5 (h-1) h+1}{h (3h-1)}$&$\frac{h-1}{h}$&$0$&$2-\frac{1}{h}$&$\frac{h (7 h-5)+1}{(1-3h)^2}$&$1$&$\frac{1-2 h}{1-3 h}$&$0$&$3-\frac{1}{h}$&$1$&$0$&$\frac{1-2 h}{1-3 h}$ \\
31&$\frac{h}{3 h-1}$&$0$&$0$&$1$&$\frac{h^2}{(1-3 h)^2}$&$\frac{1-2 h}{1-3h}$&$0$&$1$&$\frac{1-2 h}{1-3h}$&$1$&$0$&$3-\frac{1}{h}$&$2-\frac{1}{h}$&$\frac{h-1}{h}$&$0$&$\frac{5 (h-1)h+1}{h (3 h-1)} $ \\
32&$-\frac{1}{h}$&$2-\frac{1}{h}$&$2-\frac{1}{h}$&$0$&$\frac{1-2 h}{1-3h}$&$1$&$0$&$3-\frac{1}{h}$&$\frac{1-2 h}{1-3 h}$&$0$&$1$&$3-\frac{1}{h}$&$0$&$1$&$1$&$\frac{1-2h}{1-3 h} $ \\
33&$-\frac{1}{h}$&$0$&$\frac{h-1}{h}$&$1$&$\frac{1-2 h}{1-3h}$&$0$&$1$&$3-\frac{1}{h}$&$\frac{1}{1-3h}$&$0$&$1$&$\frac{h-1}{h}$&$2-\frac{1}{h}$&$0$&$1$&$\frac{h-1}{3 h-1} $ \\
34&$4-\frac{2}{h}$&$0$&$\frac{h-1}{h}$&$1$&$\frac{1-2 h}{1-3h}$&$0$&$1$&$3-\frac{1}{h}$&$\frac{2 (2 h-1)}{3h-1}$&$0$&$1$&$\frac{h-1}{h}$&$2-\frac{1}{h}$&$0$&$1$&$\frac{h-1}{3 h-1} $ \\
35&$4-\frac{2}{h}$&$0$&$\frac{h-1}{h}$&$1$&$\frac{1-2 h}{1-3h}$&$0$&$1$&$3-\frac{1}{h}$&$\frac{2 (2 h-1)}{3h-1}$&$0$&$1$&$5-\frac{2}{h}$&$2-\frac{1}{h}$&$0$&$1$&$\frac{5 h-2}{3 h-1} $ \\
36&$1$&$\frac{h}{h-1}$&$\frac{h}{h-1}$&$\frac{h (3 h-2)}{(h-1) (3h-1)}$&$\frac{h (3 h-2)}{(h-1) (3h-1)}$&$\frac{h}{h-1}$&$\frac{h}{h-1}$&$1$&$\frac{h}{h-1}$&$1$&$\frac{h}{h-1}$&$1$&$1$&$\frac{h}{h-1}$&$1$&$\frac{h}{h-1} $ \\
37&$1$&$\frac{h}{h-1}$&$\frac{h}{h-1}$&$\frac{h (3 h-2)}{(h-1) (3h-1)}$&$\frac{h}{h-1}$&$\frac{h}{h-1}$&$1$&$1$&$\frac{h}{h-1}$&$1$&$\frac{h}{h-1}$&$1$&$\frac{h}{h-1}$&$\frac{h}{h-1}$&$\frac{h}{h-1}$&$\frac{h}{h-1} $ \\
38&$\frac{h-1}{h}$&$1$&$1$&$1$&$\frac{3 h-2}{3 h-1}$&$1$&$1$&$1$&$\frac{3 h-2}{3h-1}$&$1$&$1$&$1$&$\frac{h-1}{h}$&$\frac{h-1}{h}$&$\frac{h-1}{h}$&$1 $ \\
39&$\frac{h-1}{h}$&$1$&$1$&$1$&$\frac{3 h-2}{3h-1}$&$1$&$1$&$1$&$1$&$1$&$1$&$\frac{h-1}{h}$&$\frac{h-1}{h}$&$1$&$1$&$1 $ \\
40&$ \frac{h-1}{h}$&$1$&$1$&$1$&$\frac{3 h-2}{3h-1}$&$1$&$1$&$1$&$1$&$\frac{h-1}{h}$&$\frac{h-1}{h}$&$\frac{h-1}{h}$&$1$&$1$&$1$&$1+\frac{1}{1-3 h} $ \\
41&$1$&$\frac{h}{h-1}$&$\frac{h}{h-1}$&$1$&$\frac{h (3 h-2)}{(h-1) (3h-1)}$&$\frac{h}{h-1}$&$\frac{h}{h-1}$&$\frac{h (3 h-2)}{(h-1) (3h-1)}$&$\frac{h (3 h-2)}{(h-1) (3h-1)}$&$\frac{h}{h-1}$&$\frac{h}{h-1}$&$\frac{h (3 h-2)}{(h-1) (3h-1)}$&$\frac{h}{h-1}$&$1$&$1$&$\frac{h}{h-1} $ \\
42&$\frac{h-1}{h}$&$1$&$0$&$1$&$\frac{h-1}{3h-1}$&$1$&$0$&$1$&$1$&$\frac{h-1}{h}$&$0$&$3-\frac{1}{h}$&$\frac{1-2 h}{1-3 h}$&$1$&$0$&$\frac{1-2h}{1-3 h} $ \\
43&$\frac{h-1}{h}$&$1$&$0$&$0$&$\frac{h-1}{3 h-1}$&$1$&$0$&$\frac{1-2 h}{1-3h}$&$1$&$\frac{h-1}{h}$&$0$&$0$&$\frac{h-1}{3 h-1}$&$1$&$0$&$\frac{1-2 h}{1-3h} $ \\
44&$\frac{h-1}{h}$&$1$&$3-\frac{1}{h}$&$0$&$\frac{1-2 h}{1-3 h}$&$1$&$0$&$\frac{1-2 h}{1-3h}$&$\frac{3 h-2}{3 h-1}$&$1$&$1$&$0$&$3-\frac{1}{h}$&$\frac{h-1}{h}$&$0$&$1 $ \\
46&$1$&$\frac{h}{h-1}$&$1$&$\frac{h}{h-1}$&$\frac{h}{h-1}$&$\frac{h}{h-1}$&$\frac{h}{h-1}$&$\frac{h}{h-1}$&$\frac{h}{h-1}$&$1$&$\frac{h}{h-1}$&$1$&$\frac{h}{h-1}$&$\frac{h}{h-1}$&$\frac{h}{h-1}$&$\frac{h}{h-1}$ \\
47&$\frac{h-1}{h}$&$0$&$\frac{h-1}{h}$&$\frac{h-1}{h}$&$1$&$0$&$1$&$\frac{h-1}{3h-1}$&$\frac{h-1}{3 h-1}$&$0$&$1$&$1$&$\frac{1-2 h}{1-3 h}$&$0$&$1$&$\frac{1-2 h}{1-3h} $ \\
48&$\frac{h-1}{h}$&$3-\frac{1}{h}$&$3-\frac{1}{h}$&$0$&$\frac{1-2 h}{1-3h}$&$1$&$0$&$0$&$\frac{1-2 h}{1-3 h}$&$0$&$1$&$0$&$0$&$0$&$0$&$1 $ \\
49&$1$&$1$&$1$&$1$&$\frac{h}{h-1}$&$\frac{h}{h-1}$&$\frac{h}{h-1}$&$\frac{h}{h-1}$&$\frac{h}{h-1}$&$\frac{h}{h-1}$&$\frac{h}{h-1}$&$\frac{h}{h-1}$&$\frac{h}{h-1}$&$\frac{h}{h-1}$&$\frac{h}{h-1}$&$\frac{h}{h-1}$ \\
50&$\frac{h-1}{h}$&$1$&$3-\frac{1}{h}$&$0$&$\frac{1-2 h}{1-3 h}$&$1$&$0$&$\frac{1-2 h}{1-3h}$&$\frac{2 (2 h-1)}{3 h-1}$&$0$&$1$&$0$&$0$&$2-\frac{1}{h}$&$0$&$1 $ \\
51&$\frac{h-1}{h}$&$0$&$0$&$1$&$\frac{h-1}{3h-1}$&$1$&$0$&$0$&$1$&$2-\frac{1}{h}$&$0$&$3-\frac{1}{h}$&$1$&$1$&$0$&$\frac{1-2 h}{1-3 h} $ \\
\end{tabular}} \caption{Facets of the MDL($\ell,h$) polytope for $\frac{1}{4}<h<\frac{1}{3}$ and $\ell=1-3h$ (first part; continued in the next table). The facets are given by $\sum_{abxy}\beta_{ab}^{xy}p(abxy)\leq 0$.} \end{sidewaystable}
\begin{sidewaystable}
\resizebox*{\textwidth}{0.48\textheight}{\begin{tabular}{c||c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c}
&$\beta_{00}^{00}$&$\beta_{10}^{00}$&$\beta_{01}^{00}$&$\beta_{11}^{00}$&$\beta_{00}^{10}$&$\beta_{10}^{10}$&$\beta_{01}^{10}$&$\beta_{11}^{10}$&$\beta_{00}^{01}$&$\beta_{10}^{01}$&$\beta_{01}^{01}$&$\beta_{11}^{01}$&$\beta_{00}^{11}$&$\beta_{10}^{11}$&$\beta_{01}^{11}$&$\beta_{11}^{11}$\\ \hline
52&$\frac{h-1}{h}$&$0$&$0$&$0$&$\frac{h-1}{3 h-1}$&$1$&$0$&$1$&$1$&$2-\frac{1}{h}$&$0$&$0$&$\frac{1-2h}{1-3 h}$&$1$&$0$&$\frac{1-2 h}{1-3 h} $ \\
53&$\frac{h-1}{h}$&$0$&$3-\frac{1}{h}$&$0$&$\frac{1-2 h}{1-3 h}$&$1$&$0$&$1$&$\frac{2 (2h-1)}{3 h-1}$&$1$&$1$&$0$&$3-\frac{1}{h}$&$2-\frac{1}{h}$&$0$&$1 $ \\
54&$\frac{h-1}{h}$&$0$&$3-\frac{1}{h}$&$0$&$\frac{1-2 h}{1-3h}$&$1$&$0$&$1$&$1$&$0$&$1$&$2-\frac{1}{h}$&$0$&$1$&$\frac{1-2 h}{1-3 h}$&$1 $ \\
55&$ \frac{h-1}{h}$&$0$&$2-\frac{1}{h}$&$0$&$1$&$0$&$0$&$0$&$\frac{h-1}{3h-1}$&$0$&$1$&$3-\frac{1}{h}$&$0$&$0$&$1$&$\frac{1-2 h}{1-3 h} $ \\
56&$\frac{h-1}{h}$&$0$&$\frac{h-1}{h}$&$0$&$1$&$0$&$1$&$0$&$\frac{h-1}{3h-1}$&$0$&$1$&$2-\frac{1}{h}$&$3-\frac{1}{h}$&$0$&$1$&$\frac{1-2 h}{1-3 h} $ \\
57&$\frac{h-1}{h}$&$0$&$\frac{h-1}{h}$&$2-\frac{1}{h}$&$1$&$0$&$1$&$\frac{1-2 h}{1-3h}$&$\frac{h-1}{3 h-1}$&$0$&$1$&$0$&$1$&$0$&$1$&$\frac{1-2 h}{1-3 h} $ \\
58&$\frac{h-1}{h}$&$3-\frac{1}{h}$&$2-\frac{1}{h}$&$0$&$\frac{1-2 h}{1-3h}$&$1$&$1$&$0$&$\frac{1-2 h}{1-3 h}$&$0$&$1$&$3-\frac{1}{h}$&$3-\frac{1}{h}$&$0$&$0$&$1 $ \\
59&$\frac{h-1}{h}$&$3-\frac{1}{h}$&$2-\frac{1}{h}$&$0$&$1$&$1$&$4-\frac{1}{h}$&$0$&$\frac{1-2h}{1-3 h}$&$0$&$1$&$-1$&$0$&$0$&$1$&$\frac{h}{1-3 h} $ \\
60&$\frac{h-1}{h}$&$3-\frac{1}{h}$&$2-\frac{1}{h}$&$0$&$1$&$1$&$0$&$0$&$\frac{1-2 h}{1-3h}$&$0$&$1$&$3-\frac{1}{h}$&$0$&$0$&$1$&$1 $ \\
61&$\frac{h-1}{h}$&$3-\frac{1}{h}$&$2-\frac{1}{h}$&$0$&$1$&$0$&$0$&$3-\frac{1}{h}$&$\frac{1-2h}{1-3 h}$&$0$&$1$&$3-\frac{1}{h}$&$0$&$1$&$1$&$\frac{1-2 h}{1-3 h} $ \\
62&$\frac{h-1}{h}$&$-1$&$2-\frac{1}{h}$&$0$&$\frac{h}{1-3 h}$&$1$&$0$&$0$&$\frac{1-2 h}{1-3h}$&$0$&$1$&$3-\frac{1}{h}$&$0$&$4-\frac{1}{h}$&$1$&$1 $ \\
63&$\frac{h-1}{h}$&$0$&$5-\frac{2}{h}$&$0$&$1$&$4-\frac{1}{h}$&$1$&$0$&$\frac{5 h-2}{3h-1}$&$0$&$1$&$2-\frac{1}{h}$&$3-\frac{1}{h}$&$0$&$1$&$\frac{1-2 h}{1-3 h} $ \\
64&$\frac{h-1}{h}$&$1$&$1$&$0$&$\frac{2 (2 h-1)}{3 h-1}$&$1$&$0$&$\frac{1-2 h}{1-3h}$&$\frac{2 (2 h-1)}{3 h-1}$&$0$&$1$&$\frac{1-2 h}{1-3h}$&$0$&$2-\frac{1}{h}$&$2-\frac{1}{h}$&$1 $ \\
65&$\frac{h-1}{h}$&$1$&$2-\frac{1}{h}$&$0$&$1$&$1$&$\frac{1-2 h}{1-3 h}$&$0$&$\frac{2 (2h-1)}{3 h-1}$&$0$&$1$&$3-\frac{1}{h}$&$2-\frac{1}{h}$&$0$&$1$&$1 $ \\
66&$\frac{h-1}{h}$&$0$&$2-\frac{1}{h}$&$1$&$1$&$0$&$0$&$3-\frac{1}{h}$&$\frac{h-1}{3h-1}$&$0$&$1$&$2-\frac{1}{h}$&$3-\frac{1}{h}$&$0$&$1$&$\frac{1-2 h}{1-3 h} $ \\
67&$\frac{h-1}{h}$&$2-\frac{1}{h}$&$2-\frac{1}{h}$&$0$&$\frac{1-2 h}{1-3h}$&$1$&$0$&$0$&$\frac{1-2 h}{1-3 h}$&$0$&$1$&$0$&$0$&$1$&$1$&$1 $ \\
68&$5-\frac{2}{h}$&$0$&$2-\frac{1}{h}$&$1$&$1$&$0$&$0$&$3-\frac{1}{h}$&$\frac{5 h-2}{3h-1}$&$0$&$1$&$2-\frac{1}{h}$&$3-\frac{1}{h}$&$0$&$1$&$\frac{1-2 h}{1-3 h} $ \\
69&$2-\frac{1}{h}$&$1$&$0$&$0$&$\frac{1-2 h}{1-3 h}$&$1$&$0$&$\frac{1-2 h}{1-3 h}$&$\frac{5h-2}{3 h-1}$&$0$&$0$&$0$&$0$&$2-\frac{1}{h}$&$0$&$1 $ \\
70&$2-\frac{1}{h}$&$1$&$0$&$0$&$\frac{1-2 h}{1-3 h}$&$1$&$0$&$\frac{1-2 h}{1-3h}$&$1$&$2-\frac{1}{h}$&$0$&$0$&$\frac{1-2 h}{1-3 h}$&$0$&$0$&$1 $ \\
71&$2-\frac{1}{h}$&$0$&$0$&$0$&$\frac{1-2 h}{1-3 h}$&$1$&$0$&$1$&$0$&$0$&$0$&$2-\frac{1}{h}$&$0$&$1$&$\frac{1-2h}{1-3 h}$&$1 $ \\
72&$2-\frac{1}{h}$&$0$&$2-\frac{1}{h}$&$2-\frac{1}{h}$&$1$&$0$&$1$&$\frac{1-2 h}{1-3 h}$&$\frac{1-2h}{1-3 h}$&$0$&$1$&$1$&$1$&$0$&$0$&$1 $ \\
73&$2-\frac{1}{h}$&$1$&$0$&$1$&$\frac{1-2 h}{1-3h}$&$1$&$0$&$1$&$1$&$2-\frac{1}{h}$&$0$&$3-\frac{1}{h}$&$1$&$0$&$0$&$1 $ \\
74&$2-\frac{1}{h}$&$1$&$0$&$1$&$\frac{1-2 h}{1-3h}$&$0$&$0$&$0$&$1$&$2-\frac{1}{h}$&$0$&$3-\frac{1}{h}$&$1$&$1$&$0$&$\frac{1-2 h}{1-3 h} $ \\
75&$2-\frac{1}{h}$&$1$&$3-\frac{1}{h}$&$0$&$1$&$1$&$\frac{1-2 h}{1-3 h}$&$0$&$\frac{5 h-2}{3h-1}$&$0$&$1$&$0$&$2-\frac{1}{h}$&$0$&$0$&$0 $ \\
76&$2-\frac{1}{h}$&$0$&$0$&$1$&$\frac{1-2 h}{1-3h}$&$1$&$0$&$0$&$0$&$3-\frac{1}{h}$&$0$&$2-\frac{1}{h}$&$0$&$1$&$1$&$1 $ \\
77&$2-\frac{1}{h}$&$0$&$3-\frac{1}{h}$&$0$&$1$&$0$&$0$&$0$&$\frac{1-2 h}{1-3 h}$&$0$&$1$&$0$&$0$&$0$&$0$&$1 $ \\
78&$2-\frac{1}{h}$&$0$&$2-\frac{1}{h}$&$0$&$0$&$0$&$0$&$0$&$\frac{1-2 h}{1-3h}$&$0$&$1$&$3-\frac{1}{h}$&$0$&$0$&$1$&$1 $ \\
79&$2-\frac{1}{h}$&$0$&$3-\frac{1}{h}$&$0$&$1$&$1$&$1$&$0$&$\frac{5 h-2}{3h-1}$&$1$&$1$&$0$&$2-\frac{1}{h}$&$3-\frac{1}{h}$&$0$&$0 $ \\
80&$2-\frac{1}{h}$&$0$&$2-\frac{1}{h}$&$0$&$1$&$0$&$1$&$0$&$\frac{1-2 h}{1-3h}$&$0$&$1$&$3-\frac{1}{h}$&$3-\frac{1}{h}$&$0$&$0$&$1 $ \\
81&$2-\frac{1}{h}$&$3-\frac{1}{h}$&$3-\frac{1}{h}$&$0$&$1$&$1$&$0$&$0$&$1$&$0$&$1$&$0$&$0$&$0$&$0$&$0 $ \\
82&$2-\frac{1}{h}$&$3-\frac{1}{h}$&$3-\frac{1}{h}$&$0$&$1$&$1$&$4-\frac{1}{h}$&$0$&$1$&$0$&$0$&$-1$&$0$&$0$&$1$&$\frac{h}{1-3 h} $ \\
83&$2-\frac{1}{h}$&$3-\frac{1}{h}$&$3-\frac{1}{h}$&$0$&$1$&$1$&$0$&$0$&$1$&$0$&$0$&$3-\frac{1}{h}$&$0$&$0$&$1$&$1$ \\
84&$2-\frac{1}{h}$&$0$&$3-\frac{1}{h}$&$1$&$\frac{1-2 h}{1-3 h}$&$0$&$1$&$3-\frac{1}{h}$&$\frac{1-2h}{1-3 h}$&$0$&$1$&$3-\frac{1}{h}$&$2-\frac{1}{h}$&$0$&$-1$&$1 $ \\
85&$2-\frac{1}{h}$&$0$&$3-\frac{1}{h}$&$1$&$0$&$0$&$3-\frac{1}{h}$&$-1$&$\frac{1-2 h}{1-3h}$&$0$&$1$&$3-\frac{1}{h}$&$0$&$4-\frac{1}{h}$&$1$&$1 $ \\
86&$2-\frac{1}{h}$&$3-\frac{1}{h}$&$3-\frac{1}{h}$&$0$&$\frac{1-2 h}{1-3h}$&$1$&$1$&$0$&$1$&$4-\frac{1}{h}$&$1$&$0$&$-1$&$0$&$3-\frac{1}{h}$&$0 $ \\
87&$2-\frac{1}{h}$&$3-\frac{1}{h}$&$3-\frac{1}{h}$&$0$&$1$&$0$&$0$&$3-\frac{1}{h}$&$1$&$0$&$0$&$3-\frac{1}{h}$&$0$&$1$&$1$&$\frac{1-2 h}{1-3 h} $ \\
88&$1-\frac{1}{2 h}$&$1$&$1$&$1$&$1$&$1$&$1$&$1-\frac{1}{2 h}$&$1$&$1$&$1$&$1-\frac{1}{2 h}$&$1-\frac{1}{2h}$&$1-\frac{1}{2 h}$&$1-\frac{1}{2 h}$&$1 $ \\
89&$1-\frac{1}{2 h}$&$1$&$1$&$1$&$1$&$1-\frac{1}{2 h}$&$1-\frac{1}{2 h}$&$1-\frac{1}{2 h}$&$1$&$1-\frac{1}{2h}$&$1-\frac{1}{2 h}$&$1-\frac{1}{2 h}$&$1$&$1$&$1$&$\frac{3 (2 h-1)}{6 h-2} $ \\
90&$3-\frac{1}{h}$&$0$&$0$&$1$&$1$&$0$&$0$&$3-\frac{1}{h}$&$1$&$0$&$0$&$3-\frac{1}{h}$&$3-\frac{1}{h}$&$0$&$0$&$1$ \\
91&$3-\frac{1}{h}$&$0$&$0$&$0$&$1$&$0$&$0$&$0$&$1$&$0$&$0$&$0$&$0$&$0$&$0$&$1 $ \\
92&$3-\frac{1}{h}$&$0$&$3-\frac{1}{h}$&$0$&$0$&$0$&$0$&$0$&$1$&$0$&$1$&$0$&$0$&$0$&$0$&$0 $ \\
93&$0$&$1$&$0$&$0$&$0$&$0$&$0$&$0$&$0$&$0$&$0$&$0$&$0$&$0$&$0$&$0 $ \\
\end{tabular}}
\caption{Facets of the MDL($\ell,h$) polytope for $\frac{1}{4}<h<\frac{1}{3}$ and $\ell=1-3h$ (continued). The facets are given by $\sum_{abxy}\beta_{ab}^{xy}p(abxy)\leq 0$.}\label{l1m3h}
\end{sidewaystable}
\section{Full list of facets for measurement dependent local distributions with independent sources} Here we give the list of hyperplanes which we conjecture to be the complete set of facets for the case of independent sources.
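The entries of the following tables are functions of the two parameters $h_x$ and $h_y$. A minimal sketch of how such two-parameter entries can be handled programmatically is given below (assuming Python with sympy; the coefficients shown are illustrative placeholders, not an actual row of the tables):
\begin{verbatim}
import sympy as sp

hx, hy = sp.symbols('h_x h_y', positive=True)

# Illustrative placeholder entries; in practice one would paste the 16
# tabulated expressions of a row here, in the column order P(0000)...P(1111).
coeffs = [hx/(hx - 1), hy/(hy - 1), (hx - 1)*hy/(hx*(hy - 1)), 1]

# Compile the symbolic entries into fast numerical functions of (h_x, h_y).
coeff_fns = [sp.lambdify((hx, hy), c) for c in coeffs]

def hyperplane_lhs(p, hx_val, hy_val):
    """Evaluate sum_i c_i(h_x, h_y) * p_i for a behaviour p in column order."""
    return sum(f(hx_val, hy_val) * pi for f, pi in zip(coeff_fns, p))

print(hyperplane_lhs([0.25] * len(coeffs), 0.30, 0.35))
\end{verbatim}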
\begin{sidewaystable}
\resizebox*{\textwidth}{!}{\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c} $P(0000)$&$P(1000)$&$P(0100)$&$P(1100)$&$P(0010)$&$P(1010)$&$P(0110)$&$P(1110)$&$P(0001)$&$P(1001)$&$P(0101)$&$P(1101)$&$P(0011)$&$P(1011)$&$P(0111)$&$P(1111)$\\ \hline
$\frac{(2 h_y-1) h_x^2+2 h_y^2 h_x-h_y^2}{(h_x+h_y-1) (-h_y+h_x (2h_y-1)+1)}$&$1$&$1$&$1$&$\frac{h_y}{h_y-1}$&$\frac{(2 h_x-1) h_y^4+2 (h_x-1)^2 (h_y^3-5 h_y^2+4 h_y-1)}{(h_y-1) h_y (h_x+h_y-1) (-h_y+h_x (2h_y-1)+1)}$&$\frac{h_y}{h_y-1}$&$\frac{h_y-1}{h_y}$&$\frac{h_x}{h_x-1}$&$\frac{h_x}{h_x-1}$&$\frac{2 h_y+(h_x-1) \left((h_x-1) (2 h_x-1) h_y^2+2 h_x ((h_x-1) h_x+4) h_y+h_x\left(-h_x^2+h_x-4\right)\right)-1}{(h_x-1) h_x (h_x+h_y-1) (-h_y+h_x (2h_y-1)+1)}$&$\frac{h_x-1}{h_x}$&$\frac{h_x h_y \left((2 h_y-1) h_x^2+2 ((h_y-4) h_y+2)h_x-(h_y-4) h_y-2\right)}{(h_x-1) (h_y-1) (h_x+h_y-1) (-h_y+h_x (2h_y-1)+1)}$&$\frac{h_x (h_y-1)}{(h_x-1) h_y}$&$\frac{(h_x-1) h_y}{h_x(h_y-1)}$&$\frac{h_x^2 (h_y-1)^2-2 h_x h_y^2+h_y^2}{(h_x-1) h_x (h_y-1)h_y} $ \\
$\frac{(h_x-1)^2 h_y^2}{h_x^2 (h_y-1)^2}$&$1-\frac{(2 h_x-1) h_y^2}{h_x^2(h_y-1)^2}$&$\frac{(h_x-1)^2 h_y^2}{h_x^2 (h_y-1)^2}$&$1$&$\frac{h_y}{h_y-1}$&$\frac{(h_x-1)^2h_y}{h_x^2 (h_y-1)}$&$\frac{h_y}{h_y-1}$&$\frac{h_y}{h_y-1}$&$\frac{(h_x-1)h_y^2}{h_x (h_y-1)^2}$&$\frac{h_x-1}{h_x}$&$-\frac{h_y^2 \left((1-2 h_x)^2 h_y^2+((h_x-1) h_x ((h_x-1) h_x+4)+1) (1-2h_y)\right)}{(h_x-1) h_x^3 (h_y-1)^2 (2 h_y-1)}$&$\frac{(h_y-1)^2 (2 h_y-1) h_x^4+2 (1-2h_y)^2 h_x^3-\left(4 h_y \left(h_y^3+h_y-1\right)+1\right) h_x^2+4 h_y^4h_x-h_y^4}{(h_x-1) h_x^3 (h_y-1)^2 (2 h_y-1)}$&$\frac{(h_x-1) h_y}{h_x
(h_y-1)}$&$\frac{(h_x-1) h_y}{h_x (h_y-1)}$&$-\frac{h_y \left(-2 h_y h_x^4+h_x^4+(1-2h_x)^2 h_y^2\right)}{(h_x-1) h_x^3 (h_y-1) (2 h_y-1)}$&$-\frac{h_y \left((h_x-1)^2h_x^2-2 (h_x-1)^2 h_y h_x^2+(1-2 h_x)^2 h_y^2\right)}{(h_x-1) h_x^3 (h_y-1) (2h_y-1)} $ \\
$\frac{h_y^2}{(h_y-1)^2}$&$1$&$\frac{(h_x-1)^2}{h_x^2}$&$\frac{(h_x-1)^2}{h_x^2}$&$\frac{h_y}{h_y-1}$&$\frac{h_y}{h_y-1}$&$\frac{h_y}{h_y-1}-\frac{(2 h_x-1) (h_y-1)}{h_x^2h_y}$&$\frac{(h_x-1)^2 (h_y-1)}{h_x^2 h_y}$&$\frac{h_x h_y^2}{(h_x-1)(h_y-1)^2}$&$\frac{h_x h_y^2}{(h_x-1) (h_y-1)^2}$&$\frac{-2 h_x(h_y-1)^2+(h_y-1)^2+h_x^2 h_y^2}{(h_x-1) h_x(h_y-1)^2}$&$\frac{h_x-1}{h_x}$&$\frac{h_x h_y}{(h_x-1) (h_y-1)}$&$\frac{h_xh_y}{(h_x-1) (h_y-1)}$&$\frac{(h_x-1) h_y}{h_x (h_y-1)}$&$\frac{(h_x-1)(h_y-1)}{h_x h_y} $ \\
$\frac{h_x-1}{h_x}$&$-\frac{(h_x-1) (2 h_x-1) (h_y-1)^4}{h_x h_y^2 (h_x+h_y-1) (-2h_y h_x+h_x+h_y-1)}$&$-\frac{(h_x-1) h_x (2 h_y-1)}{(h_x+h_y-1) (-2 h_yh_x+h_x+h_y-1)}$&$0$&$\frac{(h_x-1) (h_y-1)}{h_x h_y}$&$-\frac{(h_x-1) (2 h_x-1)(h_y-1) h_y}{h_x (h_x+h_y-1) (-2 h_y h_x+h_x+h_y-1)}$&$-\frac{(h_x-1) h_x(h_y-1) (2 h_y-1)}{h_y (h_x+h_y-1) (-2 h_y h_x+h_x+h_y-1)}$&$0$&$1$&$-\frac{(2h_x-1) h_y^2}{(h_x+h_y-1) (-2 h_y h_x+h_x+h_y-1)}$&$-\frac{h_x^2 (2h_y-1)}{(h_x+h_y-1) (-2 h_y h_x+h_x+h_y-1)}$&$0$&$\frac{h_y-1}{h_y}$&$-\frac{(2h_x-1) (h_y-1)^3}{h_y (h_x+h_y-1) (-2 h_y h_x+h_x+h_y-1)}$&$-\frac{(h_x-1)^2(h_y-1) (2 h_y-1)}{h_y (h_x+h_y-1) (-2 h_y h_x+h_x+h_y-1)}$&$0 $ \\
$\frac{h_x-1}{h_x}$&$-\frac{(h_x-1) (2 h_x-1) (h_y-1)^2}{h_x (h_x+h_y-1) (-2 h_yh_x+h_x+h_y-1)}$&$-\frac{(h_x-1)^3 (2 h_y-1)}{h_x (h_x+h_y-1) (-2 h_yh_x+h_x+h_y-1)}$&$0$&$\frac{(h_x-1) (h_y-1)}{h_x h_y}$&$-\frac{(h_x-1) (2 h_x-1)(h_y-1) h_y}{h_x (h_x+h_y-1) (-2 h_y h_x+h_x+h_y-1)}$&$-\frac{(h_x-1) h_x(h_y-1) (2 h_y-1)}{h_y (h_x+h_y-1) (-2 h_y h_x+h_x+h_y-1)}$&$0$&$1$&$-\frac{(2h_x-1) h_y^2}{(h_x+h_y-1) (-2 h_y h_x+h_x+h_y-1)}$&$-\frac{h_x^2 (2h_y-1)}{(h_x+h_y-1) (-2 h_y h_x+h_x+h_y-1)}$&$0$&$\frac{h_y-1}{h_y}$&$-\frac{(2h_x-1) (h_y-1) h_y}{(h_x+h_y-1) (-2 h_y h_x+h_x+h_y-1)}$&$-\frac{(h_x-1)^4(h_y-1) (2 h_y-1)}{h_x^2 h_y (h_x+h_y-1) (-2 h_y h_x+h_x+h_y-1)}$&$0 $ \\
$\frac{(h_x-1) h_y (h_x+h_y-1) (-h_y+h_x (2 h_y-1)+1)}{h_x^3 (h_y-1) (2h_y-1)}$&$\frac{(h_x-1) (2 h_x-1) h_y^3}{h_x^3 (h_y-1) (2 h_y-1)}$&$\frac{(h_x-1)h_y}{h_x (h_y-1)}$&$0$&$\frac{(h_x-1) (2 h_x-1) h_y^4}{h_x^3 (h_y-1)^2 (2h_y-1)}$&$\frac{(h_x-1) \left((2 h_y-1) h_x^2+2 h_y^2 h_x-h_y^2\right)}{h_x^3 (2h_y-1)}$&$0$&$\frac{h_x-1}{h_x}$&$\frac{h_y}{h_y-1}$&$\frac{(2 h_x-1) h_y}{h_x^2(h_y-1)}$&$\frac{h_y}{h_y-1}$&$0$&$0$&$1$&$0$&$\frac{h_x^2 (h_y-1)^2-2 h_x h_y^2+h_y^2}{h_x^2(h_y-1)^2} $ \\
$\frac{h_x^2 h_y^4-2 h_x (h_y-1)^2 (2 h_y-1)+(h_y-1)^2 (2 h_y-1)}{(h_x-1)^2(h_y-1)^2 h_y^2}$&$\frac{h_x^2}{(h_x-1)^2}$&$\frac{-2 h_x (h_y-1)^2+(h_y-1)^2+h_x^2h_y^2}{(h_x-1)^2 (h_y-1)^2}$&$1$&$\frac{h_x^2 h_y}{(h_x-1)^2 (h_y-1)}$&$\frac{-2 h_y+h_x\left(h_x (h_y-1)^2+4 h_y-2\right)+1}{(h_x-1)^2 (h_y-1) h_y}$&$\frac{-2 h_x(h_y-1)^2+(h_y-1)^2+h_x^2 h_y^2}{(h_x-1)^2 (h_y-1)h_y}$&$\frac{h_y-1}{h_y}$&$\frac{h_x \left(-2 h_x (h_y-1)^2+(h_y-1)^2+h_x^2h_y^2\right)}{(h_x-1)^3 (h_y-1)^2}$&$\frac{h_x \left(h_x^2 (h_y-1)^2+(h_y-4) h_y-2 h_x((h_y-4) h_y+2)+2\right)}{(h_x-1)^3 (h_y-1)^2}$&$\frac{h_x \left(-2 h_x(h_y-1)^2+(h_y-1)^2+h_x^2 h_y^2\right)}{(h_x-1)^3(h_y-1)^2}$&$\frac{h_x}{h_x-1}$&$\frac{h_x \left(-2 h_x (h_y-1)^2+(h_y-1)^2+h_x^2h_y^2\right)}{(h_x-1)^3 (h_y-1) h_y}$&$\frac{h_x (h_y-1)}{(h_x-1) h_y}$&$\frac{h_xh_y}{(h_x-1) (h_y-1)}$&$\frac{h_x (h_y-1)}{(h_x-1) h_y} $ \\
$\frac{-2 h_x (h_y-1)^2+(h_y-1)^2+h_x^2 h_y^2}{h_x^2(h_y-1)^2}$&$\frac{(h_x-1)^2}{h_x^2}$&$\frac{h_y^2}{(h_y-1)^2}$&$1$&$\frac{-2 h_x(h_y-1)^2+(h_y-1)^2+h_x^2 h_y^2}{h_x^2 (h_y-1) h_y}$&$\frac{(h_x-1)^2(h_y-1)}{h_x^2 h_y}$&$\frac{h_y}{h_y-1}$&$\frac{-2 h_y+h_x \left(h_x (h_y-1)^2+4h_y-2\right)+1}{h_x^2 (h_y-1) h_y}$&$\frac{h_x h_y^2}{(h_x-1) (h_y-1)^2}$&$\frac{-2h_y+h_x \left(h_x (h_y-1)^2+4 h_y-2\right)+1}{(h_x-1) h_x (h_y-1)^2}$&$\frac{-2 h_x(h_y-1)^2+(h_y-1)^2+h_x^2 h_y^2}{(h_x-1) h_x(h_y-1)^2}$&$\frac{h_x-1}{h_x}$&$\frac{h_x h_y}{(h_x-1) (h_y-1)}$&$\frac{h_x(h_y-1)}{(h_x-1) h_y}$&$\frac{(h_x-1) h_y}{h_x (h_y-1)}$&$\frac{(h_x-1)(h_y-1)}{h_x h_y} $ \\
$1$&$\frac{h_x^2 (h_y-1)^2}{-2 h_y+h_x \left(h_x (h_y-1)^2+4h_y-2\right)+1}$&$\frac{(h_x-1)^2 (h_y-1)^2}{-2 h_y+h_x \left(h_x (h_y-1)^2+4h_y-2\right)+1}$&$\frac{(h_x-1)^2 (h_y-1)^2}{-2 h_y+h_x \left(h_x (h_y-1)^2+4h_y-2\right)+1}$&$\frac{h_x^2 (h_y-1) h_y}{-2 h_y+h_x \left(h_x (h_y-1)^2+4h_y-2\right)+1}$&$\frac{h_y-1}{h_y}$&$\frac{(h_y-1) \left(-2 h_x (h_y-1)^2+(h_y-1)^2+h_x^2h_y^2\right)}{h_y \left(-2 h_y+h_x \left(h_x (h_y-1)^2+4h_y-2\right)+1\right)}$&$\frac{(h_x-1)^2 (h_y-1)^3}{h_y \left(-2 h_y+h_x \left(h_x(h_y-1)^2+4 h_y-2\right)+1\right)}$&$\frac{h_x}{h_x-1}$&$\frac{h_x}{h_x-1}$&$\frac{h_x\left(h_x^2 (h_y-1)^2+(h_y-4) h_y-2 h_x ((h_y-4) h_y+2)+2\right)}{-2 h_y(h_x-1)^3+(h_x-1)^3+h_x^2 h_y^2 (h_x-1)}$&$\frac{(h_x-1) h_x (h_y-1)^2}{-2h_y+h_x \left(h_x (h_y-1)^2+4 h_y-2\right)+1}$&$\frac{(h_y-1) (2 h_y+h_x (-8h_y+h_x (10 h_y+h_x (h_y (h_x h_y-4)+2)-5)+4)-1)}{(h_x-1) h_x h_y \left(-2h_y+h_x \left(h_x (h_y-1)^2+4 h_y-2\right)+1\right)}$&$\frac{h_x (h_y-1)}{(h_x-1)h_y}$&$\frac{(h_x-1) h_x (h_y-1) h_y}{-2 h_y+h_x \left(h_x (h_y-1)^2+4h_y-2\right)+1}$&$\frac{(h_x-1) h_x (h_y-1)^3}{h_y \left(-2 h_y+h_x \left(h_x(h_y-1)^2+4 h_y-2\right)+1\right)} $ \\
$\frac{(h_x-1)^2 h_y^2}{h_x^2 (h_y-1)^2}$&$\frac{h_x^2 (h_y-1)^2-2 h_xh_y^2+h_y^2}{h_x^2 (h_y-1)^2}$&$1-\frac{h_x^2 (2 h_y-1)}{(h_x-1)^2h_y^2}$&$1$&$\frac{(h_x-1)^2 h_y^3}{h_x^2 (h_y-1)^3}$&$\frac{(h_x-1)^2 h_y}{h_x^2(h_y-1)}$&$\frac{h_y-1}{h_y}$&$\frac{h_y}{h_y-1}$&$\frac{(h_x-1) h_y^2}{h_x(h_y-1)^2}$&$\frac{h_x-1}{h_x}$&$\frac{h_x (h_y-1)^2}{(h_x-1)h_y^2}$&$\frac{h_x}{h_x-1}$&$\frac{h_x^2 (h_y-1)^2-2 h_x h_y^2+h_y^2}{(h_x-1) h_x(h_y-1) h_y}$&$\frac{(h_x-1) h_y}{h_x (h_y-1)}$&$\frac{h_x (h_y-1)}{(h_x-1)h_y}$&$\frac{-2 h_y^3+2 h_x (2 h_y-1) h_y^2+h_y^2+h_x^2 (h_y (h_y ((h_y-6)h_y+7)-4)+1)}{(h_x-1) h_x (h_y-1)^3 h_y} $ \\
$1$&$\frac{h_x^2 (h_y-1)^2-2 h_x h_y^2+h_y^2}{(h_x-1)^2 h_y^2}$&$0$&$0$&$\frac{(h_y-1)\left((2 h_y-1) h_x^2+2 h_y^2 h_x-h_y^2\right)}{(2 h_x-1)h_y^3}$&$\frac{h_y-1}{h_y}$&$\frac{h_x^4 (h_y-1) (2 h_y-1)}{(h_x-1)^2 (2 h_x-1)h_y^3}$&$0$&$\frac{h_x}{h_x-1}$&$\frac{h_x}{h_x-1}$&$\frac{h_x (2 h_y-1)}{(h_x-1)h_y^2}$&$0$&$\frac{h_x (h_y-1) (h_x+h_y-1) (-h_y+h_x (2 h_y-1)+1)}{(h_x-1) (2h_x-1) h_y^3}$&$\frac{h_x (h_y-1)}{(h_x-1) h_y}$&$\frac{h_x^3 (h_y-1) (2h_y-1)}{(h_x-1) (2 h_x-1) h_y^3}$&$0 $ \\
$\frac{(h_x-1) ((h_y-1) h_y ((h_y-1) h_y+4)+1)}{h_x h_y^2 (2h_y-1)}$&$\frac{(h_x-1) (h_y-1)^2}{h_x (2 h_y-1)}$&$\frac{h_x-1}{h_x}$&$0$&$\frac{(h_x-1)(h_y-1) h_y}{h_x (2 h_y-1)}$&$\frac{(h_x-1) (h_y-1) h_y}{h_x (2h_y-1)}$&$\frac{(h_x-1) (h_y-1)}{h_x h_y}$&$0$&$1$&$\frac{2h_x-1}{h_x^2}$&$1$&$0$&$\frac{h_y-1}{h_y}$&$0$&$0$&$0 $ \\
$\frac{h_x^2}{(h_x-1)^2}$&$\frac{(h_y-1)^2}{h_y^2}$&$1$&$\frac{(h_y-1)^2}{h_y^2}$&$\frac{h_x^2h_y}{(h_x-1)^2 (h_y-1)}$&$\frac{-2 h_y+h_x \left(h_x (h_y-1)^2+4h_y-2\right)+1}{(h_x-1)^2 (h_y-1) h_y}$&$\frac{h_x^2 h_y}{(h_x-1)^2(h_y-1)}$&$\frac{h_y-1}{h_y}$&$\frac{h_x}{h_x-1}$&$\frac{-2 h_y+h_x \left(h_x (h_y-1)^2+4h_y-2\right)+1}{(h_x-1) h_x h_y^2}$&$\frac{h_x}{h_x-1}$&$\frac{(h_x-1) (h_y-1)^2}{h_xh_y^2}$&$\frac{h_x h_y}{(h_x-1) (h_y-1)}$&$\frac{h_x (h_y-1)}{(h_x-1)h_y}$&$\frac{h_x h_y}{(h_x-1) (h_y-1)}$&$\frac{(h_x-1) (h_y-1)}{h_x h_y} $ \\
$1$&$\frac{(h_x-1)^2 (h_y-1)^2}{h_x^2 (h_y-1)^2-2 h_x h_y^2+h_y^2}$&$\frac{h_x^2(h_y-1)^2}{h_x^2 (h_y-1)^2-2 h_x h_y^2+h_y^2}$&$\frac{h_x^2 (h_y-1)^2}{h_x^2(h_y-1)^2-2 h_x h_y^2+h_y^2}$&$\frac{h_y-1}{h_y}$&$\frac{(h_x-1)^2 (h_y-1)h_y}{h_x^2 (h_y-1)^2-2 h_x h_y^2+h_y^2}$&$\frac{h_x^2 (h_y-1)^3}{h_y \left(h_x^2(h_y-1)^2-2 h_x h_y^2+h_y^2\right)}$&$\frac{(h_y-1) (2 h_y+h_x (h_y (h_xh_y-4)+2)-1)}{h_y \left(h_x^2 (h_y-1)^2-2 h_xh_y^2+h_y^2\right)}$&$\frac{h_x}{h_x-1}$&$\frac{h_x}{h_x-1}$&$\frac{h_x}{h_x-1}$&$\frac{(h_x-1) h_x (h_y-1)^2}{h_x^2 (h_y-1)^2-2 h_x h_y^2+h_y^2}$&$\frac{h_x(h_y-1)}{(h_x-1) h_y}$&$\frac{(h_x-1) h_x (h_y-1) h_y}{h_x^2 (h_y-1)^2-2 h_xh_y^2+h_y^2}$&$\frac{(h_x-1) h_x (h_y-1)^3}{h_y \left(h_x^2 (h_y-1)^2-2 h_xh_y^2+h_y^2\right)}$&$\frac{(h_x-1) h_x (h_y-1) h_y}{h_x^2 (h_y-1)^2-2 h_xh_y^2+h_y^2} $ \\
$1$&$\frac{h_y^2 (h_x-1)^4-2 h_x^3+h_x^2+2 h_x^2 (2 h_x-1) h_y}{(h_x-1)^2\left(h_x^2 (h_y-1)^2-2 h_x h_y^2+h_y^2\right)}$&$\frac{h_x^2 (h_y-1)^2}{h_x^2(h_y-1)^2-2 h_x h_y^2+h_y^2}$&$\frac{h_x^2 h_y^2}{h_x^2 (h_y-1)^2-2 h_xh_y^2+h_y^2}$&$\frac{h_y}{h_y-1}$&$\frac{h_y-1}{h_y}$&$\frac{h_x^2 (h_y-1)h_y}{h_x^2 (h_y-1)^2-2 h_x h_y^2+h_y^2}$&$\frac{h_x^2 (h_y-1)}{(h_x-1)^2h_y}$&$\frac{h_x}{h_x-1}$&$\frac{(h_x-1) h_x h_y^2}{h_x^2 (h_y-1)^2-2 h_xh_y^2+h_y^2}$&$\frac{h_x^3 (h_y-1)^2}{(h_x-1) \left(h_x^2 (h_y-1)^2-2 h_xh_y^2+h_y^2\right)}$&$\frac{h_x (2 h_y+h_x (h_y (h_x h_y-4)+2)-1)}{(h_x-1)\left(h_x^2 (h_y-1)^2-2 h_x h_y^2+h_y^2\right)}$&$\frac{(h_x-1) h_x (h_y-1)h_y}{h_x^2 (h_y-1)^2-2 h_x h_y^2+h_y^2}$&$\frac{h_x (h_y-1)}{(h_x-1)h_y}$&$\frac{h_x^3 (h_y-1) h_y}{(h_x-1) \left(h_x^2 (h_y-1)^2-2 h_xh_y^2+h_y^2\right)}$&$\frac{h_x \left(h_x^2 (h_y-1)^4-2 h_y^3+h_y^2+2 h_x h_y^2 (2h_y-1)\right)}{(h_x-1) (h_y-1) h_y \left(h_x^2 (h_y-1)^2-2 h_xh_y^2+h_y^2\right)} $ \\
$\frac{-2 h_y+h_x \left(h_x (h_y-1)^2+4 h_y-2\right)+1}{(h_x-1)^2h_y^2}$&$\frac{h_x^2}{(h_x-1)^2}$&$\frac{(h_y-1)^2}{h_y^2}$&$1$&$\frac{h_x^2 h_y}{(h_x-1)^2(h_y-1)}$&$\frac{-2 h_y+h_x \left(h_x (h_y-1)^2+4 h_y-2\right)+1}{(h_x-1)^2 (h_y-1)h_y}$&$\frac{-2 h_x (h_y-1)^2+(h_y-1)^2+h_x^2 h_y^2}{(h_x-1)^2 (h_y-1)h_y}$&$\frac{h_y-1}{h_y}$&$\frac{-2 h_y+h_x \left(h_x (h_y-1)^2+4h_y-2\right)+1}{(h_x-1) h_x h_y^2}$&$\frac{h_x}{h_x-1}$&$\frac{(h_x-1) (h_y-1)^2}{h_xh_y^2}$&$\frac{-2 h_x (h_y-1)^2+(h_y-1)^2+h_x^2 h_y^2}{(h_x-1) h_xh_y^2}$&$\frac{h_x h_y}{(h_x-1) (h_y-1)}$&$\frac{h_x (h_y-1)}{(h_x-1)h_y}$&$\frac{(h_x-1) h_y}{h_x (h_y-1)}$&$\frac{(h_x-1) (h_y-1)}{h_x h_y} $ \\
$\frac{h_y^2}{(h_y-1)^2}$&$1-\frac{(2 h_x-1) (2 h_y-1)}{(h_x-1)^2(h_y-1)^2}$&$\frac{h_y^2}{(h_y-1)^2}$&$1$&$\frac{h_y (h_y+2)-1}{(h_y-1)h_y}$&$\frac{h_y}{h_y-1}$&$\frac{(h_y (h_y+2)-1) h_x^2-2 h_y^2h_x+h_y^2}{(h_x-1)^2 (h_y-1) h_y}$&$\frac{h_y}{h_y-1}$&$\frac{h_x h_y^2}{(h_x-1)(h_y-1)^2}$&$\frac{-2 h_y+h_x \left(h_x (h_y-1)^2+4 h_y-2\right)+1}{(h_x-1) h_x(h_y-1)^2}$&$\frac{-2 h_x (h_y-1)^2+(h_y-1)^2+h_x^2 h_y^2}{(h_x-1) h_x(h_y-1)^2}$&$\frac{h_x-1}{h_x}$&$\frac{2 h_y+h_x (-4 h_y+h_x (h_y(h_y+2)-1)+2)-1}{(h_x-1) h_x (h_y-1) h_y}$&$\frac{h_x h_y}{(h_x-1)(h_y-1)}$&$\frac{(h_y (h_y+2)-1) h_x^2-2 h_y^2 h_x+h_y^2}{(h_x-1) h_x (h_y-1)h_y}$&$\frac{(h_x-1) h_y}{h_x (h_y-1)} $ \\
$\frac{h_y^2}{(h_y-1)^2}$&$1$&$1$&$1$&$\frac{h_y}{h_y-1}$&$\frac{h_y}{h_y-1}$&$\frac{h_y}{h_y-1}$&$\frac{h_y-1}{h_y}$&$\frac{h_x h_y^2}{(h_x-1) (h_y-1)^2}$&$\frac{h_x h_y^2}{(h_x-1)(h_y-1)^2}$&$\frac{\frac{h_xh_y^2}{(h_y-1)^2}+\frac{1}{h_x}-2}{h_x-1}$&$\frac{h_x-1}{h_x}$&$\frac{h_x h_y}{(h_x-1)(h_y-1)}$&$\frac{h_x h_y}{(h_x-1) (h_y-1)}$&$\frac{(h_x-1) h_y}{h_x(h_y-1)}$&$\frac{h_x^2 (h_y-1)^2-2 h_x h_y^2+h_y^2}{(h_x-1) h_x (h_y-1)h_y} $ \\
$\frac{h_y^2}{(h_y-1)^2}$&$1$&$\frac{2 h_y+h_x (h_y (h_x h_y-4)+2)-1}{h_x^2(h_y-1)^2}$&$1$&$\frac{h_y}{h_y-1}$&$\frac{-2 h_y+h_x \left(h_x (h_y-1)^2+4h_y-2\right)+1}{h_x^2 (h_y-1) h_y}$&$\frac{h_y}{h_y-1}$&$\frac{h_y-1}{h_y}$&$\frac{h_xh_y^2}{(h_x-1) (h_y-1)^2}$&$\frac{-2 h_y+h_x \left(h_x (h_y-1)^2+4h_y-2\right)+1}{(h_x-1) h_x (h_y-1)^2}$&$\frac{-2 h_x (h_y-1)^2+(h_y-1)^2+h_x^2h_y^2}{(h_x-1) h_x (h_y-1)^2}$&$\frac{h_x-1}{h_x}$&$\frac{h_x h_y}{(h_x-1)(h_y-1)}$&$\frac{-2 h_y+h_x \left(h_x (h_y-1)^2+4 h_y-2\right)+1}{(h_x-1) h_x (h_y-1)h_y}$&$\frac{(h_x-1) h_y}{h_x (h_y-1)}$&$\frac{h_x^2 (h_y-1)^2-2 h_xh_y^2+h_y^2}{(h_x-1) h_x (h_y-1) h_y} $ \\
$\frac{h_x^2 h_y^2}{-2 h_y+h_x \left(h_x (h_y-1)^2+4 h_y-2\right)+1}$&$\frac{h_x^2(h_y-1)^2}{-2 h_y+h_x \left(h_x (h_y-1)^2+4 h_y-2\right)+1}$&$\frac{h_x^2 h_y^2}{-2h_y+h_x \left(h_x (h_y-1)^2+4 h_y-2\right)+1}$&$1$&$\frac{h_x^2 (h_y-1) h_y}{-2h_y+h_x \left(h_x (h_y-1)^2+4h_y-2\right)+1}$&$\frac{h_y-1}{h_y}$&$\frac{h_y}{h_y-1}$&$\frac{h_y-1}{h_y}$&$\frac{h_x^3h_y^2}{-2 h_y (h_x-1)^3+(h_x-1)^3+h_x^2 h_y^2(h_x-1)}$&$\frac{h_x}{h_x-1}$&$\frac{h_x \left(-2 h_x (h_y-1)^2+(h_y-1)^2+h_x^2h_y^2\right)}{-2 h_y (h_x-1)^3+(h_x-1)^3+h_x^2 h_y^2 (h_x-1)}$&$\frac{(h_x-1) h_x(h_y-1)^2}{-2 h_y+h_x \left(h_x (h_y-1)^2+4 h_y-2\right)+1}$&$\frac{h_x^3 (h_y-1)h_y}{-2 h_y (h_x-1)^3+(h_x-1)^3+h_x^2 h_y^2 (h_x-1)}$&$\frac{h_x(h_y-1)}{(h_x-1) h_y}$&$\frac{(h_x-1) h_x (h_y-1) h_y}{-2 h_y+h_x \left(h_x(h_y-1)^2+4 h_y-2\right)+1}$&$\frac{h_x \left(h_x^2 (h_y-1)^4+h_y ((h_y-1) h_y+4)(h_y-1)-2 h_x ((h_y-1) h_y ((h_y-1) h_y+4)+1)+1\right)}{(h_x-1) (h_y-1) h_y \left(-2h_y+h_x \left(h_x (h_y-1)^2+4 h_y-2\right)+1\right)} $ \\
$\frac{2 h_x (h_y-1)^2-(h_y-1)^2+h_x^2 h_y^2}{h_x^2 (h_y-1)^2}$&$\frac{h_x^2(h_y-1)^2+2 h_x h_y^2-h_y^2}{h_x^2 (h_y-1)^2}$&$\frac{h_y^2}{(h_y-1)^2}$&$1$&$\frac{h_y\left(h_x^2 (h_y-1)^2+2 h_x h_y^2-h_y^2\right)}{h_x^2 (h_y-1)^3}$&$\frac{(h_x(h_x+2)-1) h_y}{h_x^2 (h_y-1)}$&$\frac{h_y}{h_y-1}$&$\frac{h_y}{h_y-1}$&$\frac{h_xh_y^2}{(h_x-1) (h_y-1)^2}$&$\frac{-2 h_y+h_x \left(h_x (h_y-1)^2+4h_y-2\right)+1}{(h_x-1) h_x (h_y-1)^2}$&$\frac{-2 h_x (h_y-1)^2+(h_y-1)^2+h_x^2h_y^2}{(h_x-1) h_x (h_y-1)^2}$&$\frac{h_x-1}{h_x}$&$\frac{h_x h_y}{(h_x-1)(h_y-1)}$&$\frac{h_x h_y}{(h_x-1) (h_y-1)}$&$\frac{(h_x-1) h_y}{h_x(h_y-1)}$&$\frac{h_y \left(h_x^2 (h_y-1)^2-2 h_x h_y^2+h_y^2\right)}{(h_x-1) h_x(h_y-1)^3} $ \\
$\frac{h_y^2}{(h_y-1)^2}$&$1-\frac{(2 h_x-1) h_y^2}{h_x^2(h_y-1)^2}$&$\frac{h_y^2}{(h_y-1)^2}$&$1$&$\frac{h_y}{h_y-1}$&$\frac{(h_x-1)^2 h_y}{h_x^2(h_y-1)}$&$\frac{h_y}{h_y-1}$&$\frac{h_y}{h_y-1}$&$\frac{h_x h_y^2}{(h_x-1)(h_y-1)^2}$&$\frac{-2 h_y+h_x \left(h_x (h_y-1)^2+4 h_y-2\right)+1}{(h_x-1) h_x(h_y-1)^2}$&$\frac{h_x h_y^2}{(h_x-1) (h_y-1)^2}$&$\frac{h_x-1}{h_x}$&$\frac{h_xh_y}{(h_x-1) (h_y-1)}$&$\frac{(h_x-1) h_y}{h_x (h_y-1)}$&$\frac{(h_x-1) h_y}{h_x(h_y-1)}$&$\frac{(h_x-1) h_y}{h_x (h_y-1)} $ \\
$\frac{2 h_y+h_x (h_y (h_x h_y-4)+2)-1}{(h_x-1)^2 h_y^2}$&$\frac{h_x^2(h_y-1)^2}{(h_x-1)^2 h_y^2}$&$1$&$\frac{h_x^2 (h_y-1)^2-2 h_x h_y^2+h_y^2}{(h_x-1)^2h_y^2}$&$\frac{h_x^2 (h_y-1)}{(h_x-1)^2 h_y}$&$\frac{h_x^2 (h_y-1)}{(h_x-1)^2h_y}$&$\frac{h_y-1}{h_y}$&$\frac{(h_y-1) \left(h_x^2 (h_y-1)^2-2 h_xh_y^2+h_y^2\right)}{(h_x-1)^2 h_y^3}$&$\frac{h_x^3 (2 h_y-1)}{(h_x-1)^3h_y^2}$&$0$&$\frac{h_x}{h_x-1}$&$\frac{h_x (h_y-1)^2}{(h_x-1) h_y^2}$&$0$&$0$&$\frac{h_x(h_y-1)}{(h_x-1) h_y}$&$\frac{h_x (h_y-1) \left(h_x^2 (h_y-1)^2-2 h_xh_y^2+h_y^2\right)}{(h_x-1)^3 h_y^3} $ \\
$\frac{2 h_y+h_x (h_y (h_x h_y-4)+2)-1}{(h_x-1)^2 h_y^2}$&$\frac{h_x^2(h_y-1)^2}{(h_x-1)^2 h_y^2}$&$1$&$\frac{h_x^2 (h_y-1)^2-2 h_x h_y^2+h_y^2}{(h_x-1)^2h_y^2}$&$\frac{h_x^2 (h_y-1)}{(h_x-1)^2 h_y}$&$\frac{(h_y-1) \left(-2 h_y+h_x \left(h_x(h_y-1)^2+4 h_y-2\right)+1\right)}{(h_x-1)^2 h_y^3}$&$\frac{h_y-1}{h_y}$&$\frac{(h_y-1)\left(h_x^2 (h_y-1)^2-2 h_x h_y^2+h_y^2\right)}{(h_x-1)^2 h_y^3}$&$\frac{h_x \left(-2h_x (h_y-1)^2+(h_y-1)^2+h_x^2 h_y^2\right)}{(h_x-1)^3 h_y^2}$&$\frac{h_x(h_y-1)^2}{(h_x-1) h_y^2}$&$\frac{h_x}{h_x-1}$&$\frac{h_x (h_y-1)^2}{(h_x-1)h_y^2}$&$\frac{h_x (h_y-1)}{(h_x-1) h_y}$&$\frac{h_x (h_y-1)^3}{(h_x-1)h_y^3}$&$\frac{h_x (h_y-1)}{(h_x-1) h_y}$&$\frac{h_x (h_y-1) \left(h_x^2 (h_y-1)^2-2h_x h_y^2+h_y^2\right)}{(h_x-1)^3 h_y^3} $ \\
$\frac{2 h_y+h_x (h_y (h_x h_y-4)+2)-1}{h_x^2 (h_y-1)^2}$&$1$&$\frac{(h_x-1)^2h_y^2}{h_x^2 (h_y-1)^2}$&$\frac{h_x^2 (h_y-1)^2-2 h_x h_y^2+h_y^2}{h_x^2(h_y-1)^2}$&$\frac{h_y}{h_y-1}$&$\frac{h_y-1}{h_y}$&$\frac{(h_x-1)^2 h_y}{h_x^2(h_y-1)}$&$\frac{(h_x-1)^2 (h_y-1)}{h_x^2 h_y}$&$\frac{2 h_y+h_x (h_y (h_xh_y-4)+2)-1}{(h_x-1) h_x (h_y-1)^2}$&$\frac{h_x}{h_x-1}$&$\frac{(h_x-1) h_y^2}{h_x(h_y-1)^2}$&$\frac{h_x-1}{h_x}$&$\frac{2 h_y+h_x (h_y (h_x h_y-4)+2)-1}{(h_x-1)h_x (h_y-1) h_y}$&$\frac{h_x (h_y-1)}{(h_x-1) h_y}$&$\frac{(h_x-1) h_y}{h_x(h_y-1)}$&$\frac{h_x^2 (h_y-1)^2-2 h_x h_y^2+h_y^2}{(h_x-1) h_x (h_y-1)h_y} $ \\
$\frac{2 h_y+h_x (h_y (h_x h_y-4)+2)-1}{h_x^2 (h_y-1)^2}$&$1$&$\frac{(h_x-1)^2h_y^2}{h_x^2 (h_y-1)^2}$&$\frac{h_x^2 (h_y-1)^2-2 h_x h_y^2+h_y^2}{h_x^2(h_y-1)^2}$&$\frac{h_y}{h_y-1}$&$\frac{h_y-1}{h_y}$&$\frac{(h_x-1)^2 h_y}{h_x^2(h_y-1)}$&$-\frac{(2 h_x-1) h_y}{h_x^2 (h_y-1)}+1-\frac{1}{h_y}$&$\frac{h_xh_y^2}{(h_x-1) (h_y-1)^2}$&$\frac{h_x}{h_x-1}$&$\frac{(h_x-1) h_y^2}{h_x(h_y-1)^2}$&$\frac{h_x-1}{h_x}$&$\frac{2 h_y+h_x (h_y (h_x h_y-4)+2)-1}{(h_x-1)h_x (h_y-1) h_y}$&$\frac{h_x (h_y-1)}{(h_x-1) h_y}$&$\frac{(h_x-1) h_y}{h_x(h_y-1)}$&$\frac{h_x^2 (h_y-1)^2-2 h_x h_y^2+h_y^2}{(h_x-1) h_x (h_y-1)h_y} $ \\
$\frac{2 h_y+h_x (h_y (h_x h_y-4)+2)-1}{(h_x-1)^2 h_y^2}$&$\frac{h_x^2(h_y-1)^2}{(h_x-1)^2 h_y^2}$&$1$&$\frac{h_x^2 (h_y-1)^2-2 h_x h_y^2+h_y^2}{(h_x-1)^2h_y^2}$&$\frac{(h_y-1) (2 h_y+h_x (-4 h_y+h_x (h_y (h_y+2)-1)+2)-1)}{(h_x-1)^2h_y^3}$&$\frac{h_x^2 (h_y-1)}{(h_x-1)^2 h_y}$&$\frac{(h_y-1) \left((h_y (h_y+2)-1)h_x^2-2 h_y^2 h_x+h_y^2\right)}{(h_x-1)^2h_y^3}$&$\frac{h_y-1}{h_y}$&$\frac{h_x}{h_x-1}$&$\frac{-2 h_y+h_x \left(h_x (h_y-1)^2+4h_y-2\right)+1}{(h_x-1) h_x h_y^2}$&$\frac{h_x}{h_x-1}$&$\frac{h_x (h_y-1)^2}{(h_x-1)h_y^2}$&$\frac{h_x \left(h_y \left(h_y^2+h_y-3\right)+1\right)}{(h_x-1) h_y^3}$&$\frac{h_x(h_y-1)}{(h_x-1) h_y}$&$\frac{(h_y-1) (2 h_y+h_x (-4 h_y+h_x (h_y(h_y+2)-1)+2)-1)}{(h_x-1) h_x h_y^3}$&$\frac{h_x (h_y-1)}{(h_x-1) h_y} $ \\
$\frac{h_y}{h_y-1}$&$0$&$\frac{h_y}{h_y-1}-\frac{(2 h_x-1) (h_y-1)}{h_x^2 h_y}$&$0$&$1$&$\frac{2h_x-1}{h_x^2}$&$1$&$0$&$\frac{h_x h_y}{(h_x-1) (h_y-1)}$&$\frac{(2 h_x-1) h_y}{(h_x-1)h_x (h_y-1)}$&$\frac{-2 h_x (h_y-1)^2+(h_y-1)^2+h_x^2 h_y^2}{(h_x-1) h_x (h_y-1)h_y}$&$0$&$\frac{h_x}{h_x-1}$&$\frac{(2 h_x-1) (h_y-1)^2}{(h_x-1) h_xh_y^2}$&$\frac{h_x-1}{h_x}$&$0 $ \\
$\frac{h_x^2 h_y^2}{-2 h_x (h_y-1)^2+(h_y-1)^2+h_x^2 h_y^2}$&$\frac{h_x^2(h_y-1)^2}{-2 h_x (h_y-1)^2+(h_y-1)^2+h_x^2 h_y^2}$&$1$&$\frac{-2 h_y+(h_x-1)\left(h_y^2 (h_x-1)^3+h_x ((h_x-1) h_x+4)-2 h_x ((h_x-1) h_x+4)h_y\right)+1}{(h_x-1)^2 \left(-2 h_x (h_y-1)^2+(h_y-1)^2+h_x^2 h_y^2\right)}$&$\frac{h_x^2(h_y-1)}{(h_x-1)^2 h_y}$&$\frac{h_x^2 (h_y-1) h_y}{-2 h_x(h_y-1)^2+(h_y-1)^2+h_x^2 h_y^2}$&$\frac{h_y-1}{h_y}$&$\frac{h_y-1}{h_y}$&$\frac{h_x^3h_y^2}{(h_x-1) \left(-2 h_x (h_y-1)^2+(h_y-1)^2+h_x^2 h_y^2\right)}$&$\frac{h_x \left(-2h_y+h_x \left(h_x (h_y-1)^2+4 h_y-2\right)+1\right)}{(h_x-1) \left(-2 h_x(h_y-1)^2+(h_y-1)^2+h_x^2 h_y^2\right)}$&$\frac{h_x}{h_x-1}$&$\frac{(h_x-1) h_x(h_y-1)^2}{-2 h_x (h_y-1)^2+(h_y-1)^2+h_x^2 h_y^2}$&$\frac{h_x^3 (h_y-1)h_y}{(h_x-1) \left(-2 h_x (h_y-1)^2+(h_y-1)^2+h_x^2 h_y^2\right)}$&$\frac{h_x^3(h_y-1) h_y}{(h_x-1) \left(-2 h_x (h_y-1)^2+(h_y-1)^2+h_x^2 h_y^2\right)}$&$\frac{h_x(h_y-1)}{(h_x-1) h_y}$&$\frac{(h_x-1) h_x (h_y-1) h_y}{-2 h_x(h_y-1)^2+(h_y-1)^2+h_x^2 h_y^2} $ \\
$\frac{h_y^2}{(h_y-1)^2}$&$1$&$\frac{-2 h_x (h_y-1)^2+(h_y-1)^2+h_x^2 h_y^2}{h_x^2(h_y-1)^2}$&$\frac{h_x^2 (h_y-1)^2-2 h_x h_y^2+h_y^2}{h_x^2(h_y-1)^2}$&$\frac{h_y}{h_y-1}$&$\frac{h_y}{h_y-1}$&$\frac{-2 h_x(h_y-1)^2+(h_y-1)^2+h_x^2 h_y^2}{h_x^2 (h_y-1) h_y}$&$\frac{(h_x-1)^2 h_y}{h_x^2(h_y-1)}$&$\frac{h_x h_y^2}{(h_x-1) (h_y-1)^2}$&$\frac{-2 h_y+h_x \left(h_x (h_y-1)^2+4h_y-2\right)+1}{(h_x-1) h_x (h_y-1)^2}$&$\frac{-2 h_x (h_y-1)^2+(h_y-1)^2+h_x^2h_y^2}{(h_x-1) h_x (h_y-1)^2}$&$\frac{h_x-1}{h_x}$&$\frac{h_x h_y}{(h_x-1)(h_y-1)}$&$\frac{2 h_y+h_x (h_y (h_x h_y-4)+2)-1}{(h_x-1) h_x (h_y-1)h_y}$&$\frac{(h_x-1) h_y}{h_x (h_y-1)}$&$\frac{(h_x-1) h_y}{h_x (h_y-1)} $ \\
$\frac{(h_x-1)^2 h_y^2}{h_x^2 (h_y-1)^2}$&$1-\frac{(2 h_x-1) h_y^2}{h_x^2(h_y-1)^2}$&$1$&$1$&$\frac{(h_x-1)^2 h_y^3}{h_x^2(h_y-1)^3}-\frac{1}{h_y-1}-\frac{1}{h_y}$&$\frac{(h_x-1)^2 h_y}{h_x^2(h_y-1)}$&$\frac{h_y-1}{h_y}$&$\frac{h_y}{h_y-1}$&$\frac{(h_x-1) h_y^2}{h_x(h_y-1)^2}$&$\frac{h_x-1}{h_x}$&$\frac{h_x}{h_x-1}$&$\frac{h_x}{h_x-1}$&$\frac{h_x^2(h_y-1)^2-2 h_x h_y^2+h_y^2}{(h_x-1) h_x (h_y-1) h_y}$&$\frac{(h_x-1)h_y}{h_x (h_y-1)}$&$\frac{h_x (h_y-1)}{(h_x-1) h_y}$&$\frac{h_y (-2 h_y+h_x (4h_y+h_x ((h_y-4) h_y+2)-2)+1)}{(h_x-1) h_x (h_y-1)^3} $ \\
$\frac{(h_x-1)^2 h_y^2}{h_x^2 (h_y-1)^2}$&$1-\frac{(2 h_x-1) h_y^2}{h_x^2(h_y-1)^2}$&$1$&$1$&$\frac{(h_x-1)^2 h_y}{h_x^2 (h_y-1)}$&$\frac{(h_x-1)^2 h_y}{h_x^2(h_y-1)}$&$\frac{h_y}{h_y-1}$&$\frac{h_y-1}{h_y}$&$\frac{(h_x-1) h_y^2}{h_x(h_y-1)^2}$&$\frac{(h_x-1) h_y^2}{h_x (h_y-1)^2}$&$\frac{(h_x-1) h_y^2}{h_x(h_y-1)^2}$&$\frac{h_x-1}{h_x}$&$\frac{(h_x-1) h_y}{h_x (h_y-1)}$&$\frac{(h_x-1)h_y}{h_x (h_y-1)}$&$\frac{(h_x-1) h_y}{h_x (h_y-1)}$&$\frac{h_x^2 (h_y-1)^2-2 h_xh_y^2+h_y^2}{(h_x-1) h_x (h_y-1) h_y} $ \\
$\frac{(h_x-1)^2 h_y^2}{h_x^2 (h_y-1)^2}$&$1-\frac{(2 h_x-1) h_y^2}{h_x^2(h_y-1)^2}$&$\frac{(h_x-1)^2 h_y^2}{h_x^2 (h_y-1)^2}$&$1$&$\frac{(h_x-1)^2 h_y}{h_x^2(h_y-1)}$&$\frac{h_y-1}{h_y}$&$\frac{h_y}{h_y-1}$&$\frac{h_y-1}{h_y}$&$\frac{(h_x-1)h_y^2}{h_x (h_y-1)^2}$&$\frac{h_x}{h_x-1}$&$\frac{(h_x-1) h_y^2}{h_x(h_y-1)^2}$&$\frac{h_x-1}{h_x}$&$\frac{(h_x-1) h_y}{h_x (h_y-1)}$&$\frac{h_x(h_y-1)}{(h_x-1) h_y}$&$\frac{(h_x-1) h_y}{h_x (h_y-1)}$&$\frac{h_x^2 (h_y-1)^2-2h_x h_y^2+h_y^2}{(h_x-1) h_x (h_y-1) h_y} $ \\
$\frac{h_y}{h_y-1}$&$\frac{h_y-1}{h_y}$&$\frac{(h_x-1)^2 (2 h_y-1)}{h_x^2 (h_y-1)h_y}$&$0$&$1$&$1$&$\frac{2 h_y-1}{h_y^2}$&$0$&$\frac{h_x h_y}{(h_x-1) (h_y-1)}$&$\frac{-2 h_y+h_x\left(h_x (h_y-1)^2+4 h_y-2\right)+1}{(h_x-1) h_x (h_y-1) h_y}$&$\frac{h_x (2h_y-1)}{(h_x-1) (h_y-1) h_y}$&$0$&$\frac{h_x}{h_x-1}$&$\frac{-2 h_y+h_x \left(h_x(h_y-1)^2+4 h_y-2\right)+1}{(h_x-1) h_x h_y^2}$&$0$&$0 $ \\
$\frac{(h_x-1) (2 h_x-1) (h_y-1) h_y}{h_x (h_x-h_y) (-2 h_yh_x+h_x+h_y)}$&$\frac{(h_x-1) (h_y-1)}{h_x h_y}$&$0$&$\frac{h_x^3 (h_y-1) (2h_y-1)}{(h_x-1) h_y \left((2 h_y-1) h_x^2-2 h_y^2 h_x+h_y^2\right)}$&$-\frac{(h_x-1)(2 h_x-1) h_y^2}{h_x \left((2 h_y-1) h_x^2-2 h_y^2h_x+h_y^2\right)}$&$\frac{h_x-1}{h_x}$&$0$&$\frac{(h_x-1)^3 (2 h_y-1)}{h_x \left((2 h_y-1)h_x^2-2 h_y^2 h_x+h_y^2\right)}$&$\frac{(2 h_x-1) (h_y-1) h_y}{-2 h_yh_x^2+h_x^2+(2 h_x-1) h_y^2}$&$\frac{h_y-1}{h_y}$&$0$&$-\frac{(h_x-1)^2 (h_y-1) (2h_y-1)}{h_y \left(-2 h_y h_x^2+h_x^2+(2 h_x-1) h_y^2\right)}$&$\frac{(2 h_x-1)(h_y-1)^2}{-2 h_y h_x^2+h_x^2+(2 h_x-1) h_y^2}$&$1$&$0$&$\frac{h_x^2 (2 h_y-1)}{(2 h_y-1)h_x^2-2 h_y^2 h_x+h_y^2} $ \\
$\frac{h_x-1}{h_x}$&$\frac{(h_x-1) (h_y-1)^2}{h_x h_y^2}$&$\frac{(h_x-1) (2h_y-1)}{h_x h_y^2}$&$0$&$\frac{(h_x-1) (h_y-1)}{h_x h_y}$&$\frac{(h_x-1)(h_y-1)}{h_x h_y}$&$\frac{(h_x-1) (h_y-1) (2 h_y-1)}{h_x h_y^3}$&$0$&$1$&$\frac{-2h_y+h_x \left(h_x (h_y-1)^2+4 h_y-2\right)+1}{h_x^2 h_y^2}$&$\frac{2h_y-1}{h_y^2}$&$0$&$\frac{h_y-1}{h_y}$&$\frac{(h_y-1)^3}{h_y^3}$&$0$&$0 $ \\
$\frac{(h_x-1)^2 (2 h_y-1)}{(2 h_y-1) h_x^2-2 h_y^2 h_x+h_y^2}$&$0$&$1$&$\frac{(1-2 h_x)h_y^2}{(2 h_y-1) h_x^2-2 h_y^2 h_x+h_y^2}$&$-\frac{h_x^2 (h_y-1) (2 h_y-1)}{h_y\left(-2 h_y h_x^2+h_x^2+(2 h_x-1) h_y^2\right)}$&$0$&$\frac{h_y-1}{h_y}$&$\frac{(2 h_x-1)(h_y-1)^3}{h_y \left(-2 h_y h_x^2+h_x^2+(2 h_x-1) h_y^2\right)}$&$\frac{(h_x-1) h_x (2h_y-1)}{(2 h_y-1) h_x^2-2 h_y^2 h_x+h_y^2}$&$0$&$\frac{h_x}{h_x-1}$&$-\frac{h_x (2h_x-1) (h_y-1)^4}{(h_x-1) h_y^2 \left((2 h_y-1) h_x^2-2 h_y^2h_x+h_y^2\right)}$&$-\frac{(h_x-1) h_x (h_y-1) (2 h_y-1)}{h_y \left(-2 h_yh_x^2+h_x^2+(2 h_x-1) h_y^2\right)}$&$0$&$\frac{h_x (h_y-1)}{(h_x-1) h_y}$&$-\frac{h_x (2h_x-1) (h_y-1) h_y}{(h_x-1) \left((2 h_y-1) h_x^2-2 h_y^2 h_x+h_y^2\right)}$ \\
$\frac{2 h_y+h_x (h_y (h_x h_y-4)+2)-1}{h_x^2 (h_y-1)^2}$&$1$&$\frac{(h_x-1)^2h_y^2}{h_x^2 (h_y-1)^2}$&$1-\frac{(2 h_x-1) h_y^2}{h_x^2 (h_y-1)^2}$&$\frac{h_y \left(-2h_y+h_x \left(h_x (h_y-1)^2+4 h_y-2\right)+1\right)}{h_x^2(h_y-1)^3}$&$\frac{h_y}{h_y-1}$&$\frac{(h_x-1)^2 h_y}{h_x^2 (h_y-1)}$&$\frac{(h_x-1)^2h_y}{h_x^2 (h_y-1)}$&$\frac{(h_x-1) h_y^2}{h_x(h_y-1)^2}$&$\frac{h_x-1}{h_x}$&$\frac{(h_x-1) \left(-2 h_x (h_y-1)^2+(h_y-1)^2+h_x^2h_y^2\right)}{h_x^3 (h_y-1)^2}$&$\frac{(h_x-1) \left(h_x^2 (h_y-1)^2-2 h_xh_y^2+h_y^2\right)}{h_x^3 (h_y-1)^2}$&$\frac{(h_x-1) h_y}{h_x (h_y-1)}$&$\frac{(h_x-1)h_y}{h_x (h_y-1)}$&$\frac{(h_x-1)^3 h_y}{h_x^3 (h_y-1)}$&$\frac{(h_x-1) h_y\left(h_x^2 (h_y-1)^2-2 h_x h_y^2+h_y^2\right)}{h_x^3 (h_y-1)^3} $ \\
$\frac{2 h_y+h_x (h_y (h_x h_y-4)+2)-1}{(h_x-1)^2 h_y^2}$&$\frac{h_x^2(h_y-1)^2}{(h_x-1)^2 h_y^2}$&$1$&$\frac{h_x^2 (h_y-1)^2-2 h_x h_y^2+h_y^2}{(h_x-1)^2h_y^2}$&$\frac{(2 h_x-1) h_y}{(h_x-1)^2 (h_y-1)}$&$\frac{h_x^2 (h_y-1)}{(h_x-1)^2h_y}$&$0$&$\frac{h_y-1}{h_y}$&$\frac{h_x}{h_x-1}$&$\frac{h_x (h_y-1)^2}{(h_x-1)h_y^2}$&$\frac{h_x}{h_x-1}$&$\frac{h_x^2 (h_y-1)^2-2 h_x h_y^2+h_y^2}{(h_x-1) h_xh_y^2}$&$0$&$\frac{h_x (h_y-1)}{(h_x-1) h_y}$&$0$&$\frac{h_x^2 (h_y-1)^2-2 h_xh_y^2+h_y^2}{(h_x-1) h_x (h_y-1) h_y} $ \\
$\frac{2 h_y+h_x (h_y (h_x h_y-4)+2)-1}{h_x^2 (h_y-1)^2}$&$1$&$\frac{(h_x-1)^2h_y^2}{h_x^2 (h_y-1)^2}$&$\frac{h_x^2 (h_y-1)^2-2 h_x h_y^2+h_y^2}{h_x^2(h_y-1)^2}$&$\frac{h_y}{h_y-1}$&$\frac{h_y}{h_y-1}$&$\frac{-2 h_x(h_y-1)^2+(h_y-1)^2+h_x^2 h_y^2}{h_x^2 (h_y-1) h_y}$&$\frac{(h_x-1)^2 h_y}{h_x^2(h_y-1)}$&$\frac{(h_x-1) \left(2 h_x (h_y-1)^2-(h_y-1)^2+h_x^2 h_y^2\right)}{h_x^3(h_y-1)^2}$&$\frac{(h_x-1) \left(h_x^2 (h_y-1)^2+2 h_x h_y^2-h_y^2\right)}{h_x^3(h_y-1)^2}$&$\frac{(h_x-1) h_y^2}{h_x (h_y-1)^2}$&$\frac{h_x-1}{h_x}$&$\frac{\left(h_x\left(h_x^2+h_x-3\right)+1\right) h_y}{h_x^3 (h_y-1)}$&$\frac{(h_x-1) \left(2 h_x(h_y-1)^2-(h_y-1)^2+h_x^2 h_y^2\right)}{h_x^3 (h_y-1) h_y}$&$\frac{(h_x-1)h_y}{h_x (h_y-1)}$&$\frac{(h_x-1) h_y}{h_x (h_y-1)} $ \\
$1$&$\frac{-2 h_y (h_x-1)^2+(h_x-1)^2+((h_x-4) h_x+2) h_y^2}{(h_x-1)^2h_y^2}$&$1$&$\frac{(h_y-1)^2}{h_y^2}$&$\frac{h_y-1}{h_y}$&$\frac{h_y-1}{h_y}$&$\frac{h_x^2(h_y-1)}{(h_x-1)^2 h_y}$&$\frac{h_y-1}{h_y}$&$\frac{h_x}{h_x-1}$&$\frac{-2 h_y+h_x\left(h_x (h_y-1)^2+4 h_y-2\right)+1}{(h_x-1) h_xh_y^2}$&$\frac{h_x}{h_x-1}$&$\frac{(h_x-1) (h_y-1)^2}{h_x h_y^2}$&$\frac{h_x(h_y-1)}{(h_x-1) h_y}$&$\frac{h_x (h_y-1)}{(h_x-1) h_y}$&$\frac{h_x(h_y-1)}{(h_x-1) h_y}$&$\frac{(h_x-1) (h_y-1)}{h_x h_y} $ \\
$\frac{(h_x-1)^2 h_y^2}{h_x^2 (h_y-1)^2}$&$1-\frac{(2 h_x-1) h_y^2}{h_x^2(h_y-1)^2}$&$\frac{(h_x-1)^2 h_y^2}{h_x^2 (h_y-1)^2}$&$1$&$\frac{(h_x-1)^2 h_y}{h_x^2(h_y-1)}$&$\frac{(h_x-1)^2 h_y}{h_x^2 (h_y-1)}$&$\frac{h_y}{h_y-1}$&$\frac{(h_x-1)^2h_y}{h_x^2 (h_y-1)}$&$\frac{(h_x-1) h_y^2}{h_x (h_y-1)^2}$&$\frac{(h_x-1) \left(h_x^2(h_y-1)^2+2 h_x h_y^2-h_y^2\right)}{h_x^3 (h_y-1)^2}$&$\frac{(h_x-1) h_y^2}{h_x(h_y-1)^2}$&$\frac{h_x-1}{h_x}$&$\frac{(h_x-1) h_y}{h_x (h_y-1)}$&$\frac{(h_x-1)h_y}{h_x (h_y-1)}$&$\frac{(h_x-1) h_y}{h_x (h_y-1)}$&$\frac{((h_x-4) h_x+2)h_y}{(h_x-1) h_x (h_y-1)} $ \\
$\frac{2 h_y+h_x (h_y (h_x h_y-4)+2)-1}{h_x^2 (h_y-1)^2}$&$\frac{h_x \left(h_x(h_y-1)^2-4 h_y+2\right)+2 h_y-1}{h_x^2 (h_y-1)^2}$&$\frac{2 h_y+h_x (h_y (h_xh_y-4)+2)-1}{h_x^2 (h_y-1)^2}$&$1$&$\frac{2 h_y+h_x (h_y (h_x h_y-4)+2)-1}{h_x^2(h_y-1) h_y}$&$\frac{h_y-1}{h_y}$&$\frac{h_y}{h_y-1}$&$\frac{h_y-1}{h_y}$&$\frac{2h_y+h_x (h_y (h_x h_y-4)+2)-1}{(h_x-1) h_x(h_y-1)^2}$&$\frac{h_x}{h_x-1}$&$\frac{h_x^2 h_y^4+(h_y-1) ((h_y-1) h_y+4) h_y-2h_x ((h_y-1) h_y ((h_y-1) h_y+4)+1)+1}{(h_x-1) h_x (h_y-1)^2h_y^2}$&$\frac{h_x-1}{h_x}$&$\frac{2 h_y+h_x (h_y (h_x h_y-4)+2)-1}{(h_x-1) h_x(h_y-1) h_y}$&$\frac{h_x (h_y-1)}{(h_x-1) h_y}$&$\frac{(h_x-1) h_y}{h_x(h_y-1)}$&$\frac{h_x^2 (h_y-1)^2-2 h_x h_y^2+h_y^2}{(h_x-1) h_x (h_y-1)h_y} $ \\
$\frac{h_y^2}{(h_y-1)^2}$&$1$&$\frac{h_y^2}{(h_y-1)^2}$&$1$&$\frac{h_y}{h_y-1}$&$\frac{h_y}{h_y-1}$&$\frac{h_y}{h_y-1}$&$\frac{h_y}{h_y-1}$&$\frac{h_x h_y^2}{(h_x-1) (h_y-1)^2}$&$\frac{-2h_y+h_x \left(h_x (h_y-1)^2+4 h_y-2\right)+1}{(h_x-1) h_x (h_y-1)^2}$&$\frac{-2 h_x(h_y-1)^2+(h_y-1)^2+h_x^2 h_y^2}{(h_x-1) h_x(h_y-1)^2}$&$\frac{h_x-1}{h_x}$&$\frac{h_x h_y}{(h_x-1) (h_y-1)}$&$\frac{h_xh_y}{(h_x-1) (h_y-1)}$&$\frac{(h_x-1) h_y}{h_x (h_y-1)}$&$\frac{(h_x-1) h_y}{h_x(h_y-1)} $ \\
$1$&$\frac{(h_y-1)^2}{h_y^2}$&$1$&$1$&$\frac{h_y-1}{h_y}$&$\frac{h_y-1}{h_y}$&$\frac{h_y}{h_y-1}$&$\frac{h_y-1}{h_y}$&$\frac{h_x}{h_x-1}$&$\frac{h_x}{h_x-1}$&$\frac{-2 h_x(h_y-1)^2+(h_y-1)^2+h_x^2 h_y^2}{(h_x-1) h_x h_y^2}$&$\frac{(h_x-1)(h_y-1)^2}{h_x h_y^2}$&$\frac{h_x (h_y-1)}{(h_x-1) h_y}$&$\frac{h_x(h_y-1)}{(h_x-1) h_y}$&$\frac{(h_x-1) (h_y-1)}{h_x h_y}$&$\frac{-2 h_y+(h_x-1)\left(h_x ((h_y-4) h_y+2)-h_y^2\right)+1}{(h_x-1) h_x (h_y-1) h_y} $ \\
$\frac{h_x-1}{h_x}$&$0$&$0$&$0$&$\frac{(h_x-1) h_x (h_y-1)}{(2 h_x-1) h_y}$&$\frac{(h_x-1)(h_y-1)}{h_x h_y}$&$\frac{(h_x-1) h_x (h_y-1)}{(2 h_x-1) h_y}$&$0$&$1$&$1$&$\frac{2h_y-1}{h_y^2}$&$0$&$\frac{((h_x-1) h_x ((h_x-1) h_x+4)+1) (h_y-1)}{h_x^2 (2 h_x-1)h_y}$&$\frac{h_y-1}{h_y}$&$\frac{(h_x-1)^2 (h_y-1)}{(2 h_x-1) h_y}$&$0 $ \\
$\frac{1-2 h_y}{h_y^2}$&$0$&$\frac{-2 h_y+h_x \left(h_x (h_y-1)^2+4 h_y-2\right)+1}{h_x^2h_y^2}$&$1$&$\frac{1-2 h_y}{(h_y-1) h_y}$&$0$&$\frac{h_y-1}{h_y}$&$\frac{h_y-1}{h_y}$&$\frac{h_x(1-2 h_y)}{(h_x-1) h_y^2}$&$0$&$\frac{(h_x-1) (h_y-1)^2}{h_xh_y^2}$&$\frac{h_x-1}{h_x}$&$0$&$0$&$\frac{(h_x-1) (h_y-1)}{h_x h_y}$&$\frac{-2 h_x(h_y-1)^2+(h_y-1)^2+h_x^2 h_y^2}{(h_x-1) h_x (h_y-1) h_y} $ \\
$\frac{2 h_y+h_x (h_y (h_x h_y-4)+2)-1}{(h_x-1) h_x (h_y-1)h_y}$&$0$&$\frac{(h_x-1) h_y}{h_x (h_y-1)}$&$\frac{(1-2 h_x) h_y}{(h_x-1) h_x(h_y-1)}$&$\frac{h_x}{h_x-1}$&$0$&$\frac{h_x-1}{h_x}$&$-\frac{(2 h_x-1) (h_y-1)^2}{(h_x-1)h_x h_y^2}$&$\frac{2 h_y+h_x (h_y (h_x h_y-4)+2)-1}{(h_x-1)^2 (h_y-1)h_y}$&$0$&$\frac{h_y}{h_y-1}$&$0$&$1$&$0$&$1$&$\frac{1-2 h_x}{(h_x-1)^2} $ \\
$\frac{2 h_y+h_x (h_y (h_x h_y-4)+2)-1}{(h_x-1)^2 h_y^2}$&$\frac{h_x^2(h_y-1)^2}{(h_x-1)^2 h_y^2}$&$1$&$\frac{h_x^2 (h_y-1)^2-2 h_x h_y^2+h_y^2}{(h_x-1)^2h_y^2}$&$\frac{h_x^2 (h_y-1)}{(h_x-1)^2 h_y}$&$\frac{h_x^2 (h_y-1)}{(h_x-1)^2h_y}$&$\frac{h_y-1}{h_y}$&$\frac{h_y-1}{h_y}$&$\frac{h_x}{h_x-1}$&$\frac{h_x(h_y-1)^2}{(h_x-1) h_y^2}$&$\frac{h_x}{h_x-1}$&$\frac{h_x (h_y-1)^2}{(h_x-1)h_y^2}$&$\frac{h_x (h_y-1)}{(h_x-1) h_y}$&$\frac{h_x (h_y-1)}{(h_x-1)h_y}$&$\frac{h_x (h_y-1)}{(h_x-1) h_y}$&$\frac{h_x (h_y-1)}{(h_x-1) h_y} $ \\
$\frac{h_x-1}{h_x}$&$0$&$\frac{(h_x-1)^3}{h_x^3}$&$0$&$\frac{(h_x-1) (h_y-1)}{h_xh_y}$&$\frac{(h_x-1) (2 h_x-1) (h_y-1)}{h_x^3 h_y}$&$\frac{(h_x-1) (h_y-1)}{h_xh_y}$&$0$&$1$&$\frac{2 h_x-1}{h_x^2}$&$\frac{-2 h_x (h_y-1)^2+(h_y-1)^2+h_x^2 h_y^2}{h_x^2h_y^2}$&$0$&$\frac{h_y-1}{h_y}$&$\frac{(2 h_x-1) (h_y-1)}{h_x^2 h_y}$&$\frac{(h_x-1)^2(h_y-1)}{h_x^2 h_y}$&$0 $ \\
$\frac{2 h_y+h_x (h_y (h_x h_y-4)+2)-1}{(h_x-1)^2 h_y^2}$&$\frac{h_x^2(h_y-1)^2}{(h_x-1)^2 h_y^2}$&$1$&$\frac{h_x^2 (h_y-1)^2-2 h_x h_y^2+h_y^2}{(h_x-1)^2h_y^2}$&$\frac{h_x^2 (h_y-1)}{(h_x-1)^2 h_y}$&$\frac{(h_y-1) (2 h_y+h_x (h_y (h_xh_y-4)+2)-1)}{(h_x-1)^2 h_y^3}$&$\frac{h_y-1}{h_y}$&$\frac{h_y-1}{h_y}$&$\frac{h_x (2 h_y+h_x (h_y (h_x h_y-4)+2)-1)}{(h_x-1)^3 h_y^2}$&$\frac{-2 h_y+h_x \left(8h_y+h_x \left(-10 h_y+h_x \left(h_x (h_y-1)^2+4h_y-2\right)+5\right)-4\right)+1}{(h_x-1)^3 h_x h_y^2}$&$\frac{h_x}{h_x-1}$&$\frac{h_x(h_y-1)^2}{(h_x-1) h_y^2}$&$\frac{h_x (h_y-1) (2 h_y+h_x (h_y (h_xh_y-4)+2)-1)}{(h_x-1)^3 h_y^3}$&$\frac{h_x (h_y-1) (2 h_y+h_x (h_y (h_xh_y-4)+2)-1)}{(h_x-1)^3 h_y^3}$&$\frac{h_x (h_y-1)}{(h_x-1) h_y}$&$\frac{h_x (h_y-1)\left((h_x-1)^2 h_y^2+(2-4 h_x) h_y+2 h_x-1\right)}{(h_x-1)^3 h_y^3} $ \\
$\frac{h_x-1}{h_x}$&$0$&$\frac{h_x-1}{h_x}$&$\frac{(h_x-1) (h_y-1)^2}{h_xh_y^2}$&$\frac{(h_x-1) (h_y-1)}{h_x h_y}$&$0$&$\frac{(h_x-1) (h_y-1)}{h_xh_y}$&$\frac{(h_x-1) (h_y-1)}{h_x h_y}$&$1$&$\frac{-2 h_y+h_x \left(h_x (h_y-1)^2+4h_y-2\right)+1}{h_x^2 h_y^2}$&$1$&$0$&$\frac{h_y-1}{h_y}$&$0$&$0$&$0 $ \\
$\frac{h_x-1}{h_x}$&$\frac{(h_x-1) (h_y-1)^2}{h_x h_y^2}$&$0$&$0$&$\frac{(h_x-1)(h_y-1)}{h_x h_y}$&$\frac{(h_x-1) (h_y-1)}{h_x h_y}$&$0$&$0$&$1$&$1$&$\frac{2h_y-1}{h_y^2}$&$0$&$\frac{h_y-1}{h_y}$&$\frac{h_y-1}{h_y}$&$0$&$0 $ \\
$1$&$\frac{h_x^2 (h_y-1)^2-2 h_x h_y^2+h_y^2}{(h_x-1)^2h_y^2}$&$1$&$1$&$\frac{h_y-1}{h_y}$&$\frac{h_y-1}{h_y}$&$\frac{h_y}{h_y-1}$&$\frac{h_y-1}{h_y}$&$\frac{h_x}{h_x-1}$&$\frac{h_x}{h_x-1}$&$\frac{h_x}{h_x-1}$&$\frac{h_x (h_y-1)^2}{(h_x-1)h_y^2}$&$\frac{h_x (h_y-1)}{(h_x-1) h_y}$&$\frac{h_x (h_y-1)}{(h_x-1)h_y}$&$\frac{h_x (h_y-1)}{(h_x-1) h_y}$&$\frac{h_x ((h_y-4) h_y+2)}{(h_x-1)(h_y-1) h_y} $ \\
$\frac{(h_x-1) h_y}{h_x (h_y-1)}$&$\frac{h_x^2 (h_y-1)^2-2 h_xh_y^2+h_y^2}{(h_x-1) h_x (h_y-1) h_y}$&$0$&$\frac{h_x (h_y-1)}{(h_x-1)h_y}$&$\frac{(h_x-1) h_y^2}{h_x(h_y-1)^2}$&$\frac{h_x-1}{h_x}$&$0$&$\frac{h_x}{h_x-1}$&$\frac{h_y}{h_y-1}$&$\frac{h_y-1}{h_y}$&$0$&$\frac{h_x^2 (h_y-1)}{(h_x-1)^2 h_y}$&$\frac{1-2 h_x}{(h_x-1)^2}$&$1$&$0$&$\frac{1-2h_y}{(h_y-1)^2} $ \\
$\frac{1-2 h_x}{h_x^2}$&$\frac{-2 h_x (h_y-1)^2+(h_y-1)^2+h_x^2 h_y^2}{h_x^2h_y^2}$&$0$&$1$&$\frac{(1-2 h_x) h_y}{h_x^2 (h_y-1)}$&$\frac{(h_x-1)^2 (h_y-1)}{h_x^2h_y}$&$0$&$\frac{h_y-1}{h_y}$&$\frac{1-2 h_x}{(h_x-1)h_x}$&$\frac{h_x-1}{h_x}$&$0$&$\frac{h_x-1}{h_x}$&$0$&$\frac{(h_x-1) (h_y-1)}{h_xh_y}$&$0$&$\frac{-2 h_y+h_x \left(h_x (h_y-1)^2+4 h_y-2\right)+1}{(h_x-1) h_x (h_y-1)h_y} $ \\
$\frac{(h_x-1)^2 h_y^2}{h_x^2 (h_y-1)^2}$&$1-\frac{(2 h_x-1) h_y^2}{h_x^2(h_y-1)^2}$&$1$&$1$&$\frac{(h_x-1)^2 h_y^3}{h_x^2 (h_y-1)^3}$&$\frac{(h_x-1)^2 h_y}{h_x^2(h_y-1)}$&$\frac{h_y}{h_y-1}$&$\frac{h_y}{h_y-1}$&$\frac{(h_x-1) h_y^2}{h_x(h_y-1)^2}$&$\frac{h_x-1}{h_x}$&$\frac{h_x-1}{h_x}$&$\frac{h_x-1}{h_x}$&$\frac{(h_x-1)h_y}{h_x (h_y-1)}$&$\frac{(h_x-1) h_y}{h_x (h_y-1)}$&$\frac{(h_x-1) h_y}{h_x(h_y-1)}$&$\frac{(h_x-1) h_y ((h_y-4) h_y+2)}{h_x (h_y-1)^3} $ \\
$-\frac{(h_x-1) (2 h_y-1)}{h_x (h_y-1) h_y}$&$0$&$\frac{(h_x-1) (h_y-1)}{h_xh_y}$&$\frac{(h_x-1) h_y}{h_x (h_y-1)}$&$-\frac{(h_x-1) (2 h_y-1)}{h_x(h_y-1)^2}$&$0$&$\frac{h_x-1}{h_x}$&$\frac{h_x-1}{h_x}$&$\frac{1-2 h_y}{(h_y-1)h_y}$&$0$&$\frac{h_y-1}{h_y}$&$\frac{2 h_y+h_x (h_y (h_x h_y-4)+2)-1}{h_x^2 (h_y-1)h_y}$&$0$&$0$&$1$&$\frac{h_y^2}{(h_y-1)^2} $ \\
$\frac{h_x-1}{h_x}$&$0$&$0$&$-\frac{h_x}{h_x-1}$&$\frac{(h_x-1) h_y}{h_x(h_y-1)}$&$0$&$0$&$-\frac{(h_x-1) (h_y-1)}{h_xh_y}$&$1$&$0$&$0$&$-\frac{(h_x-1)^2}{h_x^2}$&$\frac{h_y-1}{h_y}$&$0$&$0$&$-\frac{h_y}{h_y-1} $ \\
$\frac{2 h_y+h_x (h_y (h_x h_y-4)+2)-1}{(h_x-1) h_x (h_y-1) h_y}$&$\frac{h_x(h_y-1)}{(h_x-1) h_y}$&$0$&$\frac{h_x (1-2 h_y)}{(h_x-1) (h_y-1) h_y}$&$\frac{2h_y+h_x (h_y (h_x h_y-4)+2)-1}{(h_x-1) h_x(h_y-1)^2}$&$\frac{h_x}{h_x-1}$&$0$&$0$&$\frac{h_y}{h_y-1}$&$\frac{h_y-1}{h_y}$&$0$&$-\frac{(h_x-1)^2 (2h_y-1)}{h_x^2 (h_y-1) h_y}$&$1$&$1$&$0$&$\frac{1-2 h_y}{(h_y-1)^2} $ \\
$\frac{h_x-1}{h_x}$&$0$&$\frac{h_x-1}{h_x}$&$0$&$\frac{(h_x-1) (h_y-1)}{h_xh_y}$&$0$&$\frac{(h_x-1) (h_y-1)}{h_x h_y}$&$0$&$1$&$\frac{2h_x-1}{h_x^2}$&$1$&$0$&$\frac{h_y-1}{h_y}$&$0$&$\frac{(h_x-1)^2 (h_y-1)}{h_x^2 h_y}$&$0 $ \\
$\frac{h_x-1}{h_x}$&$0$&$0$&$0$&$\frac{(h_x-1) (h_y-1)}{h_x h_y}$&$\frac{(h_x-1)(h_y-1)}{h_x h_y}$&$\frac{(h_x-1) (h_y-1)}{h_x h_y}$&$0$&$1$&$1$&$\frac{-2 h_x(h_y-1)^2+(h_y-1)^2+h_x^2 h_y^2}{h_x^2h_y^2}$&$0$&$\frac{h_y-1}{h_y}$&$\frac{h_y-1}{h_y}$&$\frac{(h_x-1)^2 (h_y-1)}{h_x^2h_y}$&$0 $ \\
$-1$&$0$&$0$&$1$&$-\frac{h_y}{h_y-1}$&$0$&$0$&$\frac{h_y-1}{h_y}$&$-\frac{h_x}{h_x-1}$&$0$&$0$&$\frac{h_x-1}{h_x}$&$-\frac{(h_x-1) (h_y-1)}{h_x h_y}$&$0$&$0$&$\frac{h_x h_y}{(h_x-1) (h_y-1)} $ \\
$\frac{2 h_y+h_x (h_y (h_x h_y-4)+2)-1}{(h_x-1) h_xh_y^2}$&$0$&$\frac{h_x-1}{h_x}$&$\frac{1-2 h_x}{(h_x-1) h_x}$&$\frac{h_x (h_y-1)}{(h_x-1)h_y}$&$0$&$\frac{(h_x-1) (h_y-1)}{h_x h_y}$&$-\frac{(2 h_x-1) (h_y-1)}{(h_x-1) h_xh_y}$&$\frac{h_x^2}{(h_x-1)^2}$&$0$&$1$&$0$&$\frac{h_y-1}{h_y}$&$0$&$\frac{h_y-1}{h_y}$&$-\frac{(2 h_x-1)(h_y-1)}{(h_x-1)^2 h_y} $ \\
$\frac{h_x-1}{h_x}$&$\frac{1-2 h_x}{(h_x-1) h_x}$&$\frac{h_x-1}{h_x}$&$0$&$\frac{(h_x-1)(h_y-1)}{h_x h_y}$&$0$&$\frac{h_x (h_y-1)}{(h_x-1)h_y}$&$0$&$1$&$0$&$1$&$0$&$\frac{h_y-1}{h_y}$&$0$&$\frac{h_y-1}{h_y}$&$0 $ \\
$\frac{h_x-1}{h_x}$&$\frac{h_x-1}{h_x}$&$0$&$0$&$\frac{(h_x-1) h_y}{h_x(h_y-1)}$&$\frac{(h_x-1) (h_y-1)}{h_xh_y}$&$0$&$0$&$1$&$1$&$0$&$0$&$\frac{h_y-1}{h_y}$&$\frac{h_y-1}{h_y}$&$0$&$\frac{1-2 h_y}{(h_y-1) h_y}$ \\
$1$&$\frac{1-2 h_x}{(h_x-1)^2}$&$0$&$0$&$\frac{h_y}{h_y-1}$&$\frac{h_y-1}{h_y}$&$0$&$\frac{h_x^2(h_y-1)}{(h_x-1)^2 h_y}$&$\frac{h_x}{h_x-1}$&$0$&$0$&$0$&$0$&$\frac{h_x (h_y-1)}{(h_x-1)h_y}$&$0$&$\frac{h_x (1-2 h_y)}{(h_x-1) (h_y-1) h_y} $ \\
$\frac{h_x-1}{h_x}$&$0$&$0$&$0$&$\frac{(h_x-1) (h_y-1)}{h_xh_y}$&$0$&$0$&$0$&$1$&$1$&$1$&$0$&$\frac{h_y-1}{h_y}$&$0$&$0$&$0 $ \\
$\frac{h_x-1}{h_x}$&$0$&$\frac{h_x-1}{h_x}$&$\frac{h_x-1}{h_x}$&$\frac{(h_x-1)(h_y-1)}{h_x h_y}$&$0$&$\frac{(h_x-1) (h_y-1)}{h_x h_y}$&$\frac{(h_x-1) h_y}{h_x(h_y-1)}$&$1$&$1$&$1$&$0$&$\frac{h_y-1}{h_y}$&$0$&$\frac{1-2 h_y}{(h_y-1) h_y}$&$0 $ \\
$\frac{h_x-1}{h_x}$&$\frac{1-2 h_x}{(h_x-1) h_x}$&$0$&$0$&$\frac{(h_x-1) (h_y-1)}{h_xh_y}$&$\frac{(h_x-1) (h_y-1)}{h_x h_y}$&$\frac{h_x (h_y-1)}{(h_x-1)h_y}$&$0$&$1$&$1$&$1$&$0$&$\frac{h_y-1}{h_y}$&$\frac{h_y-1}{h_y}$&$\frac{h_y-1}{h_y}$&$0 $ \\
$1$&$1$&$1$&$1$&$\frac{h_y-1}{h_y}$&$\frac{h_y-1}{h_y}$&$\frac{h_y-1}{h_y}$&$\frac{h_y-1}{h_y}$&$\frac{h_x}{h_x-1}$&$\frac{h_x}{h_x-1}$&$\frac{h_x}{h_x-1}$&$\frac{h_x}{h_x-1}$&$\frac{h_x(h_y-1)}{(h_x-1) h_y}$&$\frac{h_x (h_y-1)}{(h_x-1) h_y}$&$\frac{h_x(h_y-1)}{(h_x-1) h_y}$&$\frac{h_x (h_y-1)}{(h_x-1) h_y} $ \\
$\frac{h_x-1}{h_x}$&$0$&$\frac{h_x-1}{h_x}$&$0$&$0$&$0$&$0$&$0$&$1$&$0$&$1$&$0$&$0$&$0$&$0$&$0 $ \\
$\frac{h_y-1}{h_y}$&$0$&$0$&$0$&$1$&$0$&$0$&$0$&$\frac{h_x (h_y-1)}{(h_x-1)h_y}$&$0$&$0$&$0$&$0$&$0$&$0$&$-\frac{h_x}{h_x-1} $ \\
$\frac{h_y-1}{h_y}$&$\frac{h_y-1}{h_y}$&$0$&$0$&$1$&$1$&$0$&$0$&$0$&$0$&$0$&$0$&$0$&$0$&$0$&$0 $ \\
$0$&$1$&$0$&$0$&$0$&$0$&$0$&$0$&$0$&$0$&$0$&$0$&$0$&$0$&$0$&$0 $ \\
\end{tabular}} \caption{Families of facets for independent sources. The equation of the hyperplane is given by $\sum_{abxy}\beta_{ab}^{xy}p(abxy)=0$.} \end{sidewaystable}
\end{document} | arXiv |
Vertex-transitive graph
In the mathematical field of graph theory, a vertex-transitive graph is a graph G in which, given any two vertices v1 and v2 of G, there is some automorphism
$f:G\to G\ $
such that
$f(v_{1})=v_{2}.\ $
In other words, a graph is vertex-transitive if its automorphism group acts transitively on its vertices.[1] A graph is vertex-transitive if and only if its graph complement is, since the group actions are identical.
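As a hedged illustration (not part of the original article), the definition can be checked by brute force for small graphs; the Python sketch below uses an edge-set representation and vertex labels of our own choosing.

# Brute-force vertex-transitivity check for small graphs (sketch).
from itertools import permutations

def automorphisms(n, edges):
    # all permutations of {0, ..., n-1} that preserve the edge set
    E = {frozenset(e) for e in edges}
    for p in permutations(range(n)):
        if {frozenset((p[u], p[v])) for u, v in E} == E:
            yield p

def is_vertex_transitive(n, edges):
    # the automorphism group acts transitively iff the orbit of a single
    # vertex (here 0) is the whole vertex set
    return {p[0] for p in automorphisms(n, edges)} == set(range(n))

print(is_vertex_transitive(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # C4 -> True
print(is_vertex_transitive(3, [(0, 1), (1, 2)]))                  # P3 -> False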
Every symmetric graph without isolated vertices is vertex-transitive, and every vertex-transitive graph is regular. However, not all vertex-transitive graphs are symmetric (for example, the edges of the truncated tetrahedron), and not all regular graphs are vertex-transitive (for example, the Frucht graph and Tietze's graph).
Finite examples
Finite vertex-transitive graphs include the symmetric graphs (such as the Petersen graph, the Heawood graph and the vertices and edges of the Platonic solids). The finite Cayley graphs (such as cube-connected cycles) are also vertex-transitive, as are the vertices and edges of the Archimedean solids (though only two of these are symmetric). Potočnik, Spiga and Verret have constructed a census of all connected cubic vertex-transitive graphs on at most 1280 vertices.[2]
Although every Cayley graph is vertex-transitive, there exist other vertex-transitive graphs that are not Cayley graphs. The most famous example is the Petersen graph, but others can be constructed including the line graphs of edge-transitive non-bipartite graphs with odd vertex degrees.[3]
Properties
The edge-connectivity of a vertex-transitive graph is equal to the degree d, while the vertex-connectivity will be at least 2(d + 1)/3.[1] If the degree is 4 or less, or the graph is also edge-transitive, or the graph is a minimal Cayley graph, then the vertex-connectivity will also be equal to d.[4]
Infinite examples
Infinite vertex-transitive graphs include:
• infinite paths (infinite in both directions)
• infinite regular trees, e.g. the Cayley graph of the free group
• graphs of uniform tessellations (see a complete list of planar tessellations), including all tilings by regular polygons
• infinite Cayley graphs
• the Rado graph
Two countable vertex-transitive graphs are called quasi-isometric if the ratio of their distance functions is bounded from below and from above. A well known conjecture stated that every infinite vertex-transitive graph is quasi-isometric to a Cayley graph. A counterexample was proposed by Diestel and Leader in 2001.[5] In 2005, Eskin, Fisher, and Whyte confirmed the counterexample.[6]
See also
• Edge-transitive graph
• Lovász conjecture
• Semi-symmetric graph
• Zero-symmetric graph
References
1. Godsil, Chris; Royle, Gordon (2013) [2001], Algebraic Graph Theory, Graduate Texts in Mathematics, vol. 207, Springer, ISBN 978-1-4613-0163-9.
2. Potočnik P., Spiga P. & Verret G. (2013), "Cubic vertex-transitive graphs on up to 1280 vertices", Journal of Symbolic Computation, 50: 465–477, arXiv:1201.5317, doi:10.1016/j.jsc.2012.09.002, S2CID 26705221.
3. Lauri, Josef; Scapellato, Raffaele (2003), Topics in graph automorphisms and reconstruction, London Mathematical Society Student Texts, vol. 54, Cambridge University Press, p. 44, ISBN 0-521-82151-7, MR 1971819. Lauri and Scapelleto credit this construction to Mark Watkins.
4. Babai, L. (1996), Technical Report TR-94-10, University of Chicago, archived from the original on 2010-06-11
5. Diestel, Reinhard; Leader, Imre (2001), "A conjecture concerning a limit of non-Cayley graphs" (PDF), Journal of Algebraic Combinatorics, 14 (1): 17–25, doi:10.1023/A:1011257718029, S2CID 10927964.
6. Eskin, Alex; Fisher, David; Whyte, Kevin (2005). "Quasi-isometries and rigidity of solvable groups". arXiv:math.GR/0511647..
External links
• Weisstein, Eric W. "Vertex-transitive graph". MathWorld.
• A census of small connected cubic vertex-transitive graphs. Primož Potočnik, Pablo Spiga, Gabriel Verret, 2012.
| Wikipedia |
\begin{document}
\title[Composition of Segal-Bargmann transforms]{Composition of Segal-Bargmann transforms}
\author{A. Benahmadi} \author{K. Diki}
\author{A. Ghanmi} \address{
CeReMAR, A.G.S., L.A.M.A., Department of Mathematics, P.O. Box 1014,
Faculty of Sciences, Mohammed V University in Rabat, Morocco}
\email{[email protected]}
\email{[email protected]}
\email{[email protected]}
\email{[email protected]} \begin{abstract}
We introduce and discuss some basic properties of some integral transforms in the framework of specific functional Hilbert spaces, the holomorphic
Bargmann-Fock spaces on $\C$ and $\C^2$ and the slice hyperholomorphic Bargmann-Fock space on $\Hq$.
The first one is a natural integral transform mapping isometrically the standard Hilbert space on the real line into
the two-dimensional Bargmann-Fock space. It is obtained as the composition of the one- and two-dimensional Segal-Bargmann transforms and reduces further to an extremely simple integral operator that looks like a composition
operator of the one-dimensional Segal-Bargmann transform with a specific symbol.
We study its basic properties, including the identification of its image and the determination of a like-left inverse defined on the whole two-dimensional Bargmann-Fock space. We also examine their combination with the Fourier transform, which leads to special integral transforms connecting
the two-dimensional Bargmann-Fock space and its analogue on the complex plane.
We also investigate the relationship between special subspaces of the two-dimensional Bargmann-Fock space and the slice-hyperholomorphic one on the quaternions by introducing appropriate integral transforms. We identify their image and their action on the reproducing kernel.
\end{abstract}
\subjclass[2010]{Primary 44A15; Secondary 32A17, 32A10} \keywords{Bargmann-Fock space; Quaternions; Segal-Bargmann transform; Slice regular functions}
\maketitle
\section{Introduction} \label{s1}
The standard Segal-Bargmann transform intertwines the Schr\"odinger representation and the complex wave representation of the quantum mechanical harmonic oscillator and plays an important role in quantum optics, in signal processing and in harmonic analysis on phase space \cite{Bargmann1961,Folland1989,Zhu2012,Hall2013,Neretin1972}. Such a transform has also found several applications in the theory of slice regular functions on quaternions and the slice monogenic functions on Clifford algebras \cite{DMNQ2016,KMNQ2016,PSS2016,MouraoNQ2017,DG1.2017,CD2017}. Its action on $L^{2,\nu}(\R^d,\C)$, $\nu>0$, the Hilbert space of $\C$-valued $e^{-\nu x^2}dx$-square integrable functions on $\R^d$, is given by (with a slightly different convention from the classical one)
\begin{eqnarray}\label{dSBT} \mathcal{B}^{d,\nu} f (z) := c^\nu_d \int_{\R^d} f(x) e^{-\nu \left(x - \frac{z}{\sqrt{2}}\right)^2} dx; \quad c^\nu_d:= \left(\frac{\nu}{\pi}\right)^{\frac{3d}{4}} ,
\end{eqnarray} and makes the quantum mechanical configuration space $L^{2,\nu}(\R^d,\C)$ unitarily isomorphic to the Bargmann-Fock space
$$\mathcal{F}^{2,\nu}(\C ^d) = Hol(\C^d) \cap L^{2,\nu}(\C^d,\C),$$
consisting of all $L^2$-holomorphic functions with respect to the Gaussian measure $e^{-\nu |z|^2}d\lambda$ on the $d$-dimensional complex space $\C^d$, $d\lambda$ being the Lebesgue measure on $\C^d$.
A quaternionic counterpart of $\mathcal{F}^{2,\nu}(\C)$ is the slice hyperholomorphic Bargmann-Fock space introduced in \cite{AlpayColomboSabadini2014}
as
\begin{align}\label{SliceBargmann}
\mathcal{F}^{2,\nu}_{slice}(\Hq) = \mathcal{SR}(\Hq) \cap L^{2,\nu}(\C_I,\Hq),
\end{align} where $\mathcal{SR}(\Hq)$ denotes the space of (left) slice regular $\Hq$-valued functions on the quaternions and $L^{2,\nu}(\C_I,\Hq)$ is the Hilbert space of $\Hq$-valued $L^2$ functions with respect to the Gaussian measure on a given slice $\C_I=\R+ \R I$. The corresponding Segal-Bargmann transform $\mathcal{B}_\Hq^\nu$ is considered in \cite{DG1.2017} and maps the Hilbert space $L^{2,\nu}(\R,\Hq)$ of $\Hq$-valued functions that are $e^{-\nu x^2}dx$-square integrable on the real line onto $ \mathcal{F}^{2,\nu}_{slice}(\Hq)$. Its kernel function arises naturally as the unique extension of its holomorphic counterpart involved in \eqref{dSBT} to a slice regular function. It can also be realized as the generating function of the rescaled real Hermite polynomials \begin{eqnarray} \label{wrHn} H_n^\nu(x) := (-1)^n e^{\nu x^2} \frac{d^n}{dx^n}\left(e^{-\nu x^2}\right). \end{eqnarray}
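As a quick side illustration (not part of the original text), the polynomials \eqref{wrHn} and their orthogonality in $L^{2,\nu}(\R,\C)$, which is used repeatedly below, can be checked symbolically; the following minimal Python/SymPy sketch is only meant as a sanity check.
\begin{verbatim}
# Rescaled Hermite polynomials H_n^nu and their orthogonality in
# L^{2,nu}(R,C); a hedged sanity check, not part of the paper.
import sympy as sp

x = sp.symbols('x', real=True)
nu = sp.symbols('nu', positive=True)

def H(n):
    # Rodrigues-type formula: H_n^nu(x) = (-1)^n e^{nu x^2} d^n/dx^n e^{-nu x^2}
    return sp.simplify((-1)**n * sp.exp(nu*x**2)
                       * sp.diff(sp.exp(-nu*x**2), x, n))

def inner(f, g):
    # inner product of L^{2,nu}(R,C): \int f g e^{-nu x^2} dx
    return sp.integrate(f*g*sp.exp(-nu*x**2), (x, -sp.oo, sp.oo))

print([H(n) for n in range(3)])     # 1, 2*nu*x, 4*nu**2*x**2 - 2*nu
for m in range(4):
    for n in range(4):
        ip = sp.simplify(inner(H(m), H(n)))
        assert (ip == 0) == (m != n)   # off-diagonal entries vanish
\end{verbatim}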
In the present paper, we deal with the following special transform
\begin{equation}\label{G}
\mathcal{G}^\nu f(z,w) = \left(\frac{\nu}{\pi}\right)^{\frac{1}{2}} \mathcal{C}_{\psi_1} (\mathcal{B}^{1,\nu} f)(z,w)
\end{equation}
obtained as the composition operator $\mathcal{C}_{\psi_1} f = f\circ {\psi_1}$ of the one-dimensional Segal-Bargmann transform $\mathcal{B}^{1,\nu}$ with the specific symbol $ {\psi_1}(z,w) = \frac{z+iw}{\sqrt{2}}.$ It is a special one-to-one transform mapping the standard Hilbert space $L^{2,\nu}(\R,\C)$ on the real line into the two-dimensional Bargmann-Fock space $\mathcal{F}^{2,\nu}(\C^2)$ on the two-dimensional complex space. We study its basic properties and characterize its image. Namely, we show that $\mathcal{G}^\nu (L^{2,\nu}(\R,\C))$ coincides with (Theorem \ref{MainThm2}) \begin{equation}\label{Image}
\mathcal{A}^{2,\nu}(\C^2) := \left\{F\in \mathcal{F}^{2,\nu}(\C^2); \, \left(\frac{\partial}{\partial z} + i \frac{\partial}{\partial w}\right) F =0 \right\}. \end{equation} This was possible by realizing this transform in a natural way as the composition of the $1$d and $2$d Segal-Bargmann transforms (Theorem \ref{MainThm}), \begin{equation}\label{Gc} \mathcal{G}^\nu = \mathcal{B}^{2,\nu}\circ \mathcal{B}^{1,\nu}. \end{equation} Moreover, if $Proj$ denotes the orthogonal projection on the one-dimensional Bargmann-Fock space, we show that the transform \begin{equation}\label{R} \mathcal{R}^\nu:=(\mathcal{B}^{1,\nu})^{-1}\circ Proj\circ (\mathcal{B}^{2,\nu})^{-1} \end{equation} defined on the whole $\mathcal{F}^{2,\nu}(\C^2)$ is a like-left inverse of $\mathcal{G}^\nu$ that can be expressed in terms of the inverse of $\mathcal{B}^{1,\nu}$ and a composition operator with a specific symbol ${\psi_2}:\C \longrightarrow\C ^2$. More explicitly, we have \begin{eqnarray} \label{inverse}
\mathcal{R}^{\nu}F(x)= \left(\frac{\nu}{\pi}\right)^{\frac{1}{4}} \int_{\C} F\left(\frac{\xi}{\sqrt{2}},-i\frac{\xi}{\sqrt{2}}\right) e^{-\frac{\nu}{2}\overline{\xi}^2 +\sqrt{2}\nu x\overline{\xi}} e^{-\nu|\xi|^2}d\lambda(\xi) . \end{eqnarray} Further properties of the transform $\mathcal{G}^\nu$ when combined with a rescaled Fourier transform are also investigated (Theorem \ref{thmFourier}). They give rise to two extremely simple integral operators connecting isometrically the one-dimensional Bargmann-Fock space $\mathcal{F}^{2,\nu}(\C)$ to the two-dimensional one $\mathcal{F}^{2,\nu}(\C^2)$.
The like-left inverse $\mathcal{R}^\nu$ in \eqref{inverse} as well as the quaternionic Segal-Bargmann transform $\mathcal{B}_\Hq^\nu$ are then employed to introduce and study the integral transform \begin{equation} \label{D} \mathcal{I}^\nu:=\mathcal{B}_\Hq^\nu \circ \mathcal{R}^\nu. \end{equation} It is defined on the two-dimensional Bargmann-Fock space $\mathcal{F}^{2,\nu}(\C^2)$ with range in the slice hyperholomorphic Bargmann-Fock space $\mathcal{F}^{2,\nu}_{slice}(\Hq)$ in \eqref{SliceBargmann}. We show that $\mathcal{I}^\nu$ reduces further to the integral operator
\begin{equation}\label{BR1}
\mathcal{I}^\nu f (q) = \left(\frac{\nu}{\pi}\right) \int_{\C}
f\left(\frac{\xi}{\sqrt{2}}, \frac{-i\xi}{\sqrt{2}} \right) K^\nu_{\Hq}(q,\xi) e^{-\nu |\xi|^2} d\lambda(\xi).
\end{equation}
where $K^\nu_{\Hq}(q,\xi)$ is the reproducing kernel of $\mathcal{F}_{slice}^{2,\nu}(\Hq)$ (see Theorem \ref{Iintclosed}). The image $\mathcal{I}^\nu(\mathcal{F}^{2,\nu}(\C^2))$ is identified with $\mathcal{F}_{slice,i}^{2,\nu}(\Hq)$, the space of slice (left) regular functions on the quaternions leaving invariant the slice $\C_i\simeq \C$ (see Theorem \ref{I}). In addition to $\mathcal{I}^\nu$, we consider the integral transform
$$\mathcal{J}^{\nu}:=\mathcal{G}^{\nu} \circ (\mathcal{B}^\nu_\Hq)^{-1}$$ from $\mathcal{F}_{slice,i}^{2,\nu}(\Hq)$ into $\mathcal{F}^{2,\nu}(\C^2)$ with image coinciding with $ \mathcal{A}^{2,\nu}(\C^2)$ (see Theorem \ref{I}). The action of $\mathcal{I}^\nu$ and $\mathcal{J}^\nu$ on the bases and the reproducing kernels are given. It turns out that these transforms connect the standard basis and the reproducing kernels of these two spaces.
To present these ideas, we adopt the following structure:
in Section 2 we study some basic properties of the integral transform \eqref{G}; we identify its image and give an explicit expression for its left inverse.
Section 3 is devoted to describing the transform $\mathcal{I}^\nu$ given through \eqref{D}, as well as its inverse, and to investigating the relationship between the classical Bargmann-Fock space on $\C^2$ and the slice-hyperholomorphic one on the quaternions.
The appendix deals with the higher, $2^n$-dimensional setting as well as with some special integral transforms that are obtained as compositions of $\mathcal{G}^\nu$ with the Fourier transform.
\section{On composition of Segal-Bargmann transforms}
The kernel function of the $d$-dimensional Segal-Bargmann transform $\mathcal{B}^{d,\nu}$ in \eqref{dSBT} is the analytic continuation to $\C^d$ of the standard Gaussian density on $\R^d$. It is given by \begin{align}\label{KernelBTn} A^{\nu}_d(z,x) = c^\nu_d e^{-\nu \left(x - \frac{z}{\sqrt{2}}\right)^2} \end{align} with $z^2:=z_1^2+z_2^2+\cdots +z_d^2$ for $z=(z_1,\cdots ,z_d)\in{\C ^d}$. Then, the integral transform in \eqref{G} acts on $L^{2,\nu}(\R,\C)$ by \begin{equation}\label{Ga}
\mathcal{G}^\nu f(z,w) := \left(\frac{\nu}{\pi}\right)^{\frac{1}{2}} \int_\R f(x) A^{\nu}_1\left(\frac{z+iw}{\sqrt{2}},x\right)dx.
\end{equation} The following result shows that the transform $\mathcal{G}^\nu$ can be realized in a natural way by means of the Segal-Bargmann transforms $\mathcal{B}^{d,\nu}$; $d=1,2$, according to the following diagram
$$\xymatrix{
L^{2,\nu}(\R,\C) \ar[r]^{\mathcal{B}^{1,\nu}}
\ar[rd]_{\mathcal{G}^\nu} & \mathcal{F}^{2,\nu}(\C) \ar[d]^{\mathcal{B}^{2,\nu}}\\
& \mathcal{F}^{2,\nu}(\C^2)
}$$
\begin{thm}\label{MainThm} The above diagram is commutative, in the sense that we have $\mathcal{G}^\nu=\mathcal{B}^{2,\nu}\circ \mathcal{B}^{1,\nu}$ on $L^{2,\nu}(\R,\C)$. Moreover, $\mathcal{G}^\nu$ defines an isometric operator mapping the Hilbert space $L^{2,\nu}(\R,\C)$ into the Bargmann-Fock space $\mathcal{F}^{2,\nu}(\C ^2)$.
\end{thm}
\begin{proof} For every given $\varphi\in L^{2,\nu}(\R,\C)$, the function $\mathcal{B}^{2,\nu}\circ \mathcal{B}^{1,\nu}(\varphi)$ is clearly a holomorphic function on $\C^2$ and belongs to $L^{2,\nu}(\C^2,\C)$. Moreover, $\mathcal{B}^{2,\nu}\circ \mathcal{B}^{1,\nu}$ defines an isometric operator from $L^{2,\nu}(\R,\C)$ into $\mathcal{F}^{2,\nu}(\C^2)$ since $\mathcal{B}^{2,\nu}$ and $ \mathcal{B}^{1,\nu}$ are. To conclude for the proof of Theorem \ref{MainThm}, we only need to show that the diagram is commutative. Thus, for every given $z,w\in{\C }$ and $x,y \in\R$, we have
\begin{align}
\mathcal{B}^{2,\nu}\circ \mathcal{B}^{1,\nu} f(z,w)
&= \int_{\R^2} \int_{\R} f(t) A^{\nu}_2((z,w),(x,y)) A^{\nu}_1(x+iy,t) dtdxdy \nonumber
\\&= c^\nu_2 c^\nu_1 \int_{\R^2} \int_{\R}
f(t) e^{-\nu \left\{ \left(x - \frac{z}{\sqrt{2}}\right)^2 + \left(y - \frac{w}{\sqrt{2}}\right)^2 +
\left(t - \frac{x+iy}{\sqrt{2}}\right)^2 \right\}} dtdxdy \nonumber
\\ & \stackrel{(*)}{=}
\left(\frac{\pi}{\nu}\right) c^\nu_2 \int_{\R} f(t) A^{\nu}_1 \left(\frac{z+iw}{\sqrt{2}},t\right) dt. \label{diag1}
\end{align} The transition $(*)$ follows by direct computation, making use of the Fubini theorem as well as of the explicit formula for the Gaussian integral. The proof of the theorem is completed by comparing the right-hand side of \eqref{diag1} to \eqref{Ga}.
\end{proof}
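For the reader who wishes to double-check the Gaussian reduction behind the transition $(*)$, the following minimal Python/SymPy sketch (ours, not part of the original argument) performs the two successive completions of squares symbolically.
\begin{verbatim}
# Symbolic check of the Gaussian step (*): integrating out x and then y
# should produce (pi/nu) * exp(-nu*(t - (z + i w)/2)^2).
import sympy as sp

x, y, t = sp.symbols('x y t', real=True)
z, w = sp.symbols('z w')                      # complex parameters
nu = sp.symbols('nu', positive=True)

E = -nu*((x - z/sp.sqrt(2))**2 + (y - w/sp.sqrt(2))**2
         + (t - (x + sp.I*y)/sp.sqrt(2))**2)  # exponent of the integrand

def gauss_step(expr, var):
    # \int_R exp(a v^2 + b v + c) dv = sqrt(-pi/a) exp(c - b^2/(4a)), Re a < 0
    a, b, c = sp.Poly(sp.expand(expr), var).all_coeffs()
    return sp.sqrt(-sp.pi/a), sp.expand(c - b**2/(4*a))

p1, E1 = gauss_step(E, x)                     # integrate over x
p2, E2 = gauss_step(E1, y)                    # then over y
assert sp.simplify((p1*p2)**2 - (sp.pi/nu)**2) == 0   # both factors positive
assert sp.simplify(E2 + nu*(t - (z + sp.I*w)/2)**2) == 0
\end{verbatim}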
The next result identifies the image of $L^{2,\nu}(\R,\C)$ by the one-to-one transform $\mathcal{G}^\nu$, and characterizes it as the kernel $\ker_{\mathcal{F}^{2,\nu}(\C^2)} (D_{z,w}) $ of the first order differential operator $$ D_{z,w} := \frac{\partial}{\partial z} + i \frac{\partial}{\partial w} $$ acting on $\mathcal{F}^{2,\nu}(\C^2)$. More precisely, we assert the following
\begin{thm}\label{MainThm2} Keep notations as above and define $\mathcal{A}^{2,\nu}(\C^2)$ as in \eqref{Image},
$\mathcal{A}^{2,\nu}(\C^2) :=\ker_{\mathcal{F}^{2,\nu}(\C^2)} (D_{z,w}) .$ Then, we have \begin{enumerate}
\item[(i)] $\mathcal{A}^{2,\nu}(\C^2)$ is a closed subspace of $\mathcal{F}^{2,\nu}(\C^2)$.
\item[(ii)] The functions $e_m^\nu(z,w):= (z+iw)^m $ form an orthogonal basis of the Hilbert space $\mathcal{A}^{2,\nu}(\C^2)$.
\item[(iii)] $\mathcal{A}^{2,\nu}(\C^2) = \mathcal{G}^\nu (L^{2,\nu}(\R,\C))$. \end{enumerate}
\end{thm}
\begin{proof} Notice first that by the definition in \eqref{G} and the fact that
$$\mathcal{B}^{1,\nu} (H_m^\nu)(\xi) = \left(\frac{\nu}{\pi}\right)^{{1}/{4}}\sqrt{2}^m\nu^m \xi^m ,$$ the action of $\mathcal{G}^\nu$ on the rescaled Hermite polynomials in \eqref{wrHn} is given by
\begin{eqnarray}
\mathcal{G}^\nu (H_m^\nu )(z,w) &=& \left(\frac{\nu}{\pi}\right)^{\frac{1}{2}} \mathcal{B}^{1,\nu} (H_m^\nu)\left(\frac{z+iw}{\sqrt{2}}\right)
\nonumber \\ &=&
c^\nu_1\nu^m (z+iw)^m . \label{basisImage}
\end{eqnarray}
Subsequently, the image $\mathcal{G}^\nu (L^{2,\nu}(\R,\C))$ is then spanned by the functions $e_m^\nu(z,w):= (z+iw)^m $, since the polynomials $H_k^\nu$ form an orthogonal basis of $L^{2,\nu}(\R,\C)$. Accordingly, the proof of $(i)$ readily follows and then $\mathcal{A}^{2,\nu}(\C^2)$ is a Hilbert space for the scalar product induced from $\mathcal{F}^{2,\nu}(\C^2)$, while $(iii)$ is an immediate consequence of $(ii)$. Moreover, the functions $e_k^\nu(z,w)$ satisfy $D_{z,w} e_k^\nu(z,w)=0$ and form an orthogonal system in the Hilbert space $\mathcal{A}^{2,\nu}(\C^2)$. To conclude for $(ii)$, we should prove completeness of $e_k^\nu(z,w)$ in $\mathcal{A}^{2,\nu}(\C^2)$. To this end, let $F\in \mathcal{F}^{2,\nu}(\C^2)$ such that $D_{z,w} F=0$ and $\scal{F,e_k^\nu}_{L^{2,\nu}(\C^2,\C)} =0$ for all $k$ and show that $F$ is then identically zero on $\C^2$. Indeed, by expanding $F$ as series
$F(z,w) =\sum\limits_{m,n=0}^{+\infty} a_{m,n} z^mw^n\in \mathcal{F}^{2,\nu}(\C^2)$, we show that $$\scal{F,e_k^\nu}_{L^{2,\nu}(\C^2,\C)} = \sum_{j=0}^{k} \binom{k}{j} (-i)^j a_{k-j,j}\norm{e_{k-j}}_{L^{2,\nu}(\C,\C)}^2\norm{e_{j}}_{L^{2,\nu}(\C,\C)}^2,$$ where $e_{j}(\xi)= \xi^j$. Hence $\scal{F,e_k^\nu}_{L^{2,\nu}(\C^2,\C)}=0$, for every $k=0,1,\cdots $, implies that \begin{equation}\label{Df0}
\left(\frac{\pi}{\nu}\right)^2 \frac{k!}{\nu^k} \sum_{j=0}^{k} \left(-i\right)^j a_{k-j,j} =0.
\end{equation}
Moreover, we can show that the condition $D_{z,w} F=0$ is equivalent to that
$$ a_{m+1,n} = - i \left(\frac{n+1}{m+1}\right)a_{m,n+1}$$ for all $m,n=0,1,\cdots $, which by induction infers \begin{equation}\label{Recamn}
a_{m,n} = i^n \left(\frac{(m+n)!}{m!n!}\right) a_{m+n,0} , \quad m= 0,1,\cdots; \, n=1,\cdots.
\end{equation}
Inserting this in \eqref{Df0}, it yields $a_{k,0}=0$ for all $k$ and therefore $a_{m,n}=0$ for all $m,n$ by means of \eqref{Recamn}.
This yields the required result. \end{proof}
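The explicit action \eqref{basisImage} on the rescaled Hermite polynomials can also be confirmed numerically. The short Python sketch below (not part of the original text) evaluates the defining integral \eqref{Ga} by quadrature; the identity $H_m^\nu(x)=\nu^{m/2}H_m(\sqrt{\nu}x)$, relating \eqref{wrHn} to the physicists' Hermite polynomials, is our own auxiliary assumption used only for the numerics.
\begin{verbatim}
# Numerical check of G^nu(H_m^nu)(z,w) = c_1^nu * nu^m * (z + i w)^m.
import numpy as np

nu, m = 1.0, 3
z, w = 1.0 + 0.5j, 0.5 - 0.25j
c1 = (nu/np.pi)**0.75                               # c_1^nu

Hm = np.polynomial.hermite.Hermite.basis(m)         # physicists' H_m
H_m_nu = lambda x: nu**(m/2) * Hm(np.sqrt(nu)*x)    # assumed relation

x = np.linspace(-12.0, 12.0, 40001)
A1 = c1 * np.exp(-nu*(x - (z + 1j*w)/2)**2)         # A_1^nu((z+iw)/sqrt(2), x)
G = (nu/np.pi)**0.5 * np.trapz(H_m_nu(x) * A1, x)   # the defining integral

expected = c1 * nu**m * (z + 1j*w)**m
assert abs(G - expected) < 1e-8 * abs(expected)
\end{verbatim}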
\begin{rem} The space $\mathcal{A}^{2,\nu}(\C^2)$ is of interest in itself. It is the phase space in two complex dimensions that is unitarily isomorphic to the configuration space $L^{2,\nu}(\R,\C)$. Moreover, the transform $\mathcal{G}^\nu$ is a coherent state transform from $L^{2,\nu}(\R,\C)$ onto $\mathcal{A}^{2,\nu}(\C^2)$ in \eqref{Image}, in the sense that
its kernel function can be recovered as a bilinear generating function of the orthonormal bases of the Hilbert spaces $L^{2,\nu}(\R,\C)$ and $\mathcal{A}^{2,\nu}(\C^2)$.
\end{rem}
\begin{rem} The assertion $(iii)$ in Theorem \ref{MainThm2} shows that $ \mathcal{B}^{2,\nu} (\mathcal{F}^{2,\nu}(\C))=\mathcal{A}^{2,\nu}(\C^2)$. This is in fact contained in \eqref{basisImage}. Indeed, for $e_m(\xi)=\xi^m$, we have
$$ \mathcal{B}^{2,\nu}\left(e_m \right)(z,w) = \left(\frac{\nu}{\pi}\right)^{\frac{1}{2}} \left(\frac 12 \right)^{\frac{m}{2}} e_m^\nu(z,w).$$
\end{rem}
\begin{rem} The inverse transform of $\mathcal{G}^\nu$ is defined from $\mathcal{A}^{2,\nu}(\C^2)$ onto $L^{2,\nu}(\R,\C)$ and is clearly given by
$ (\mathcal{B}^{1,\nu})^{-1}\circ(\mathcal{B}^{2,\nu})^{-1}$; it coincides with the restriction to $\mathcal{A}^{2,\nu}(\C^2)$ of the transform $\mathcal{R}^\nu$ introduced below. \end{rem}
Now, let us consider the transform $\mathcal{R}^\nu$ from $\mathcal{F}^{2,\nu}(\C ^2)$ into $L^{2,\nu}(\R,\C)$ defined by the following commutative diagram $$\xymatrix{
\mathcal{F}^{2,\nu}(\C^2) \ar[r]^{\mathcal{R}^\nu} \ar[d]_{(\mathcal{B}^{2,\nu})^{-1}} & L^{2,\nu}(\R,\C) \\
L^{2,\nu}(\R^2,\C) \ar[r]_{Proj} & \mathcal{F}^{2,\nu}(\C) \ar[u]_{(\mathcal{B}^{1,\nu})^{-1}},
}$$
where $Proj$ stands for the orthogonal projection from $L^{2,\nu}(\C,\C)$ onto the standard Bargmann-Fock space $\mathcal{F}^{2,\nu}(\C)$ and given by (see for example \cite{Zhu2012})
\begin{align} \label{proj}
Proj f (\xi) = \left(\frac{\nu}{\pi}\right) \int_{\C} f(\zeta) e^{\nu\xi\overline{\zeta}} e^{-\nu|\zeta|^2}d\lambda(\zeta). \end{align} The following result gives an integral representation of the operator $\mathcal{R}^\nu:=(\mathcal{B}^{1,\nu})^{-1}\circ Proj\circ (\mathcal{B}^{2,\nu})^{-1}$. It involves the inverse of $\mathcal{B}^{1,\nu}$ and the composition operator $\mathcal{C}_{{\psi_2}}F = F \circ {\psi_2}$ with the symbol function ${\psi_2}:\C\longrightarrow\C^2$ given by
{\psi_2}(\xi):= \left(\frac{\xi}{\sqrt{2}},-i\frac{\xi}{\sqrt{2}}\right).
\end{align}
\begin{thm}\label{thmLinverse} The transform $\mathcal{R}^\nu$ defined on the whole $\mathcal{F}^{2,\nu}(\C^2)$ looks like a left inverse of $\mathcal{G}^\nu$. Moreover, we have
\begin{eqnarray} \label{inversechi}
\mathcal{R}^\nu F=\displaystyle\left(\frac{\pi}{\nu}\right)^{\frac{1}{2}}(\mathcal{B}^{1,\nu})^{-1}(\mathcal{C}_{{\psi_2}}F)
\end{eqnarray}
for every $F\in \mathcal{F}^{2,\nu}(\C ^2) $ which explicitly reads,
\begin{eqnarray} \label{inversea} \mathcal{R}^{\nu}F(x)=\displaystyle\left(\frac{\nu}{\pi}\right)^{\frac{1}{4}}\int_\C F\left(\frac{\xi}{\sqrt{2}},-i\frac{\xi}{\sqrt{2}}\right) e^{-\frac{\nu}{2}\overline{\xi}^2 +\sqrt{2}\nu x\overline{\xi}} e^{-\nu\vert{\xi}\vert^2}d\lambda(\xi) . \end{eqnarray}
\end{thm}
\begin{proof}
For every $f\in L^{2,\nu}(\R,\C)$, the function $\mathcal{B}^{1,\nu} f$ belongs to the one-dimensional Bargmann-Fock space $\mathcal{F}^{2,\nu}(\C)$ and therefore
$Proj(\mathcal{B}^{1,\nu}f)=\mathcal{B}^{1,\nu} f$, so that $\mathcal{R}^\nu \circ \mathcal{G}^\nu=id_{L^{2,\nu}(\R,\C)}$.
This shows that $\mathcal{R}^\nu$ is a left inverse of $\mathcal{G}^\nu$.
Moreover, making use of the integral representation of the orthogonal projection \eqref{proj} and of
$$
(\mathcal{B}^{2,\nu})^{-1} F (\zeta)
= c^\nu_2
\int_{\C^2} F(z,w) e^{-\frac{\nu}{2}(\bz^2+\bw^2)+ \frac{\nu}{\sqrt{2}} (\zeta[\bz - i\bw] + \overline{\zeta}[\bz + i\bw]) } e^{-\nu(|z|^2+|w|^2)}d\lambda(z,w)
$$ for $\zeta\in \C$ and $F\in \mathcal{F}^{2,\nu}(\C^2)$, we get \begin{align*} Proj (\mathcal{B}^{2,\nu})^{-1} F(\xi)
=c^\nu_2 \int_{\C^2}e^{-\frac{\nu}{2}(\bz^2+\bw^2)} F(z,w)I(\xi,\bz,\bw) e^{-\nu(|z|^2+|w|^2)}d\lambda(z,w), \end{align*}
where for $\xi,z,w \in \C$ we have
\begin{align*}
I(\xi,\bz,\bw) &:=\left(\frac{\nu}{\pi}\right) \int_{\C} e^{-\nu |\zeta|^2 + \frac{\nu}{\sqrt{2}} (\zeta[\bz - i\bw] + \overline{\zeta}[\bz + i\bw + \sqrt{2}\xi]) } d\lambda(\zeta)
\\& = e^{\frac{\nu}{2}(\bz^2+\bw^2)+\nu\xi\frac{(\bz-i\bw)}{\sqrt{2}}}.
\end{align*}
Therefore, by the reproducing property for the two-dimensional Bargmann-Fock space $\mathcal{F}^{2,\nu}(\C^2)$, we obtain
\begin{align*}
Proj (\mathcal{B}^{2,\nu})^{-1} F (\xi) &= c^\nu_2
\int_{\C^2} F(z,w) e^{\nu\left(\frac{\xi}{\sqrt{2}}\bz-\frac{i\xi}{\sqrt{2}}\bw\right)} e^{-\nu(|z|^2+|w|^2)}d\lambda(z,w) \label{kernelC2}
\\&
= \left(\frac{\pi}{\nu}\right)^{\frac{1}{2}} F\left(\frac{\xi}{\sqrt{2}},-i\frac{\xi}{\sqrt{2}}\right) \nonumber
\end{align*}
for
\begin{equation}\label{Rpkernel2}
K^\nu_2 \left((u,v),(z,w)\right) = \left(\frac{\nu}{\pi}\right)^2 e^{\nu\left(u\bz+v\bw\right)}
\end{equation}
being the reproducing kernel of $\mathcal{F}^{2,\nu}(\C^2)$. \end{proof}
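As a numerical sanity check of the left-inverse property (not part of the original text), one may feed the function $\mathcal{G}^\nu H_1^\nu = c^\nu_1\nu\,(z+iw)$ from \eqref{basisImage} into \eqref{inversea} and verify that $H_1^\nu(x)=2\nu x$ comes back; the grid sizes below are arbitrary choices of ours.
\begin{verbatim}
# Check R^nu(G^nu H_1^nu) = H_1^nu at a few sample points (nu = 1).
import numpy as np

nu = 1.0
c1 = (nu/np.pi)**0.75
F = lambda z, w: c1 * nu * (z + 1j*w)          # F = G^nu H_1^nu

a = np.linspace(-8.0, 8.0, 801)
b = np.linspace(-8.0, 8.0, 801)
A, B = np.meshgrid(a, b, indexing='ij')
xi, xibar = A + 1j*B, A - 1j*B

def R(x):                                       # the formula (inversea)
    integrand = (F(xi/np.sqrt(2), -1j*xi/np.sqrt(2))
                 * np.exp(-0.5*nu*xibar**2 + np.sqrt(2)*nu*x*xibar)
                 * np.exp(-nu*np.abs(xi)**2))
    return (nu/np.pi)**0.25 * np.trapz(np.trapz(integrand, b, axis=1), a)

for x in (-1.3, 0.0, 0.7):
    assert abs(R(x) - 2*nu*x) < 1e-8            # H_1^nu(x) = 2 nu x
\end{verbatim}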
\section{Connecting holomorphic and slice hyperholomorphic Bargmann-Fock spaces}
The slice hyperholomorphic quaternionic Bargmann-Fock space $\mathcal{F}_{slice}^{2,\nu}(\Hq)$, considered in \cite{AlpayColomboSabadini2014}, is a quaternionic counterpart of the holomorphic Bargmann-Fock space $\mathcal{F}^{2,\nu}(\C)$. It is defined to be the right $\Hq$-vector space of all slice left regular functions on $\Hq$, $F\in \mathcal{SR}(\Hq)$, subject to the norm boundedness $\norm{F}_{\mathcal{F}_{slice}^{2,\nu}(\Hq)}^2 < +\infty $. This norm is associated to the inner product \begin{equation}\label{spfg}
\scal{F,G}_{\mathcal{F}_{slice}^{2,\nu}(\Hq)} = \int_{\C_I}\overline{G_I(q)}F_I(q)e^{-\nu |q|^2} d\lambda_I(q), \end{equation}
where for given $I\in \Sq=\{I\in \Hq; \, I^2=-1\}$, the function $F_I = F|_{\C_I}$ denotes the restriction of $F$ to the slice $\C_I:=\R+I\R$ and $d\lambda_I(q)=dxdy$ for $q=x+yI$. It was shown in \cite{AlpayColomboSabadini2014} that $\mathcal{F}_{slice}^{2,\nu}(\Hq)$ does not depend on the choice of the imaginary unit $I$ and is a reproducing kernel Hilbert space, whose reproducing kernel is given by \begin{equation}\label{RpKernelH} K^\nu_{\Hq}(q,p)= \left(\frac{\nu}{\pi}\right) e_{*}^{\nu[q,\overline{p}]} := \left(\frac{\nu}{\pi}\right) \sum_{m=0}^{+\infty}
\frac{\nu^m q^m \overline{p}^m}{m!} ; \quad p,q\in \Hq. \end{equation} This space is closely connected to $L^{2,\nu}(\R,\Hq)$, the Hilbert space of all $\Hq$-valued and $L^2$ functions on the real line with respect to the Gaussian measure. In fact, $\mathcal{F}_{slice}^{2,\nu}(\Hq)$ can be realized as the image of $L^{2,\nu}(\R,\Hq)$ by considering the quaternionic Segal-Bargmann transform \cite{DG1.2017} \begin{eqnarray} \label{defQSBT}
\mathcal{B}_\Hq^\nu f(q) := c^\nu_1 \int_{\R} f(x) e^{-\nu \left(x - \frac{q}{\sqrt{2}}\right)^2} dx.
\end{eqnarray}
Its inverse transform mapping $\mathcal{F}_{slice}^{2,\nu}(\Hq)$ onto $L^{2,\nu}(\R,\Hq)$ is given by
\begin{align}\label{inverseExpIntRep}
(\mathcal{B}_\Hq^\nu)^{-1}F(x) &= c^\nu_1 \int_{\C_I} F_I(q) e^{-\frac{\nu}{2}\overline{q}^2 +\sqrt{2}\nu x\overline{q}} e^{-\nu|q|^2} d\lambda_I(q). \end{align}
Examples of slice hyperholomorphic functions in $\mathcal{F}_{slice}^{2,\nu}(\Hq)$ can also be obtained from the one of the standard Bargmann-Fock space on $\C$ by the extension lemma below.
\begin{lem}[\cite{ColomboSabadiniStruppa2011,GentiliStoppatoStruppa2013}]\label{extensionLem} Let $\Omega_I=\Omega\cap \C_I$; $I\in \mathbb{S}$, be a symmetric domain in $\C_I$ with respect to the real axis such that $\Omega_I\cap \R$ is not empty and $\overset{\sim}\Omega=\underset{x+yJ\in{\Omega}}\cup x+y\mathbb{S}$ be the symmetric completion of $\Omega_I$. For every holomorphic function $F:\Omega_I\longrightarrow \Hq$, the function $Ext(F)$ defined by $$Ext(F)(x+yJ):= \dfrac{1}{2}[F(x+yI)+F(x-yI)]+\frac{JI}{2}[F(x-yI)-F(x+yI)]; \quad J\in \mathbb{S},$$ extends $F$ to a regular function on $\overset{\sim}\Omega$. Moreover, $Ext(F)$ is the unique slice regular extension of $F$. \end{lem} This lemma can be extended to the context of the two-dimensional Bargmann-Fock space $\mathcal{F}^{2,\nu}(\C^2)$ on $\C^2$. This relies on the simple idea of considering an appropriate restriction operator from $\mathcal{F}^{2,\nu}(\C ^2)$ into $\mathcal{F}^{2,\nu}(\C )$ and then applying the extension Lemma \ref{extensionLem}. For example, one can consider
$$ \mathcal{I}^\nu: F \longmapsto F\circ {\psi_2} \longmapsto Ext(F\circ {\psi_2})$$ from $\mathcal{F}^{2,\nu}(\C^2)$ into a specific subspace of $\mathcal{F}_{slice}^{2,\nu}(\Hq)$, where ${\psi_2}:\C \longrightarrow\C ^2$ is the one defined in \eqref{chi}. The following result shows that the transform $\mathcal{I}^\nu$ is in fact realized by the following commutative diagram \[ \xymatrix{ \mathcal{F}^{2,\nu}(\C^2) \ar[r]^{\mathcal{I}^\nu} \ar[d]_{\mathcal{R}^\nu} & \mathcal{F}_{slice}^{2,\nu}(\Hq)\\ L^{2,\nu}(\R,\C) \ar[r]_{inj} & L^{2,\nu}(\R,\Hq) \ar[u]_{\mathcal{B}^\nu_\Hq }} \] where $\mathcal{B}^\nu_\Hq$ is the quaternionic Segal-Bargmann transform in \eqref{defQSBT} and $\mathcal{R}^\nu$ is the transform given by \eqref{inversechi}.
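Before stating the result, let us record a small numerical illustration of the extension Lemma \ref{extensionLem} (not part of the original text): for $F=\exp$ restricted to the slice $\C_I$, the formula of the lemma reproduces the slice regular exponential $\sum_{n} q^n/n!$ on another slice $\C_J$. The quaternion arithmetic below is a plain component-wise implementation of our own.
\begin{verbatim}
# Extension lemma check: Ext(exp)(x + y J) versus sum_n q^n / n!.
import numpy as np

def qmul(p, q):
    # Hamilton product of quaternions stored as [real, i, j, k]
    a, b, c, d = p; e, f, g, h = q
    return np.array([a*e - b*f - c*g - d*h,
                     a*f + b*e + c*h - d*g,
                     a*g - b*h + c*e + d*f,
                     a*h + b*g - c*f + d*e])

one = np.array([1.0, 0.0, 0.0, 0.0])
I   = np.array([0.0, 1.0, 0.0, 0.0])      # the slice carrying F
J   = np.array([0.0, 0.0, 1.0, 0.0])      # a different imaginary unit

def F_on_slice(x, y):                      # F(x + y I) for F = exp
    return np.exp(x) * (np.cos(y)*one + np.sin(y)*I)

x, y = 0.4, 1.1                            # the point q = x + y J
FpI, FmI = F_on_slice(x, y), F_on_slice(x, -y)
ext = 0.5*(FpI + FmI) + 0.5*qmul(qmul(J, I), FmI - FpI)

q = x*one + y*J                            # slice regular exponential series
term, series = one.copy(), one.copy()
for n in range(1, 40):
    term = qmul(term, q) / n
    series = series + term

assert np.allclose(ext, series, atol=1e-12)
\end{verbatim}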
\begin{thm} \label{Iintclosed} The transform $\mathcal{B}^\nu_\Hq \circ \mathcal{R}^\nu$ coincides with $\mathcal{I}^\nu$ and acts on $\mathcal{F}^{2,\nu}(\C^2)$ by
\begin{equation}
\mathcal{B}^\nu_\Hq \circ \mathcal{R}^\nu F (q) = \left(\frac{\nu}{\pi}\right) \int_{\C}
F\left(\frac{\xi}{\sqrt{2}}, \frac{-i\xi}{\sqrt{2}} \right) K^\nu_{\Hq}(q,\xi) e^{-\nu |\xi|^2} d\lambda(\xi),
\end{equation}
where $K^\nu_{\Hq}(q,\xi)$ is the reproducing kernel of $\mathcal{F}_{slice}^{2,\nu}(\Hq)$ as given by \eqref{RpKernelH}.
\end{thm}
For the proof, we will make use of the identity principle for slice regular functions
\begin{lem}[\cite{ColomboSabadiniStruppa2011,GentiliStoppatoStruppa2013}]\label{IdentityPrinciple} Let $F$ be a slice regular function on a slice domain $\Omega$ and denote by $\mathcal{Z}_F$ its zero set. If $\mathcal{Z}_F \cap \C_I$ has an accumulation point in $\Omega_I$ for some $I\in \Sq$, then $F$ vanishes identically on $\Omega$. \end{lem}
\begin{proof}
On one hand, the function $ \mathcal{B}^\nu_\Hq \circ \mathcal{R}^\nu F$ is slice regular by construction.
On the other hand, one can show easily that the function $\xi \longmapsto F\left(\frac{\xi}{\sqrt{2}}, \frac{-i\xi}{\sqrt{2}} \right) $ belongs to the one-dimensional Bargmann-Fock space $\mathcal{F}^{2,\nu}(\C)$ for every $F\in \mathcal{F}^{2,\nu}(\C^2)$, and therefore its extension
given by Lemma \ref{extensionLem}, is slice regular and belongs to $\mathcal{F}_{slice}^{2,\nu}(\Hq)$.
Moreover, by means of the reproducing property for the elements in $\mathcal{F}_{slice}^{2,\nu}(\Hq)$, we obtain the following identity
\begin{equation}
\mathcal{I}^\nu F (q) = \left(\frac{\nu}{\pi}\right) \int_{\C}
F\left(\frac{\xi}{\sqrt{2}}, \frac{-i\xi}{\sqrt{2}} \right) K^\nu_{\Hq}(q,\xi) e^{-\nu |\xi|^2} d\lambda(\xi).
\end{equation} To conclude that $\mathcal{B}^\nu_\Hq \circ \mathcal{R}^\nu F$ and $\mathcal{I}^\nu F$ are identically the same, we need only to prove it for their restrictions on $\C_i\simeq \C$ and then apply the identity principle for the slice left regular functions (Lemma \ref{IdentityPrinciple}). To this end, we begin by rewriting the transforms $\mathcal{B}^\nu_\Hq$ and $\mathcal{R}^\nu$ as
\begin{align*}
\mathcal{B}^\nu_\Hq f(q) &= \scal{f, \overline{S^\nu(q,\cdot)}}_{L^{2,\nu}(\R,\C)}
= \int_{\R} f(x) S^\nu(q,x) e^{-\nu x^2} dx
\end{align*}
and
\begin{align*}
\mathcal{R}^\nu F(x) &= \scal{\mathcal{C}_{{\psi_2}} F, S^\nu(\cdot,x)}_{L^{2,\nu}(\C,\C)}
= \int_{\C}F(\xi) S^\nu(\overline{\xi},x) e^{-\nu |\xi|^2} d\lambda(\xi),
\end{align*} where $S^\nu$ denotes the generating function of the rescaled Hermite polynomials $H^\nu_m$ given by
\begin{align}
S^\nu(q,x) &= \left(\frac{\nu}{\pi}\right)^{\frac 12}\sum_{m=0}^{+\infty} \left(\frac{\nu^m}{m!}\right)^{\frac 12} \frac{q^m H^\nu_m(x)}{\norm{H^\nu_m}_{L^{2,\nu}(\R,\C)}} = \left(\frac{\nu}{\pi}\right)^{\frac 34} e^{-\frac{\nu}{2}q^2 +\sqrt{2}\nu qx}.\label{Genfct1}
\end{align}
Such kernel function satisfies
\begin{eqnarray}\label{genss1}
\scal{ S^\nu(q,\cdot), S^\nu(\xi,\cdot)}_{L^{2,\nu}(\R,\C)}
= \left(\frac{\nu}{\pi}\right) \sum_{m=0}^{+\infty} \frac{\nu^m q^m \overline{\xi}^m}{m!} \nonumber
=: \left(\frac{\nu}{\pi}\right) e_{*}^{\nu[q,\overline{\xi}]} \nonumber
= K^\nu_{\Hq}(q,\xi).
\end{eqnarray} Thus, for every $F\in \mathcal{F}^{2,\nu}(\C^2)$ and $q\in \C_i\simeq \C$, we have
\begin{eqnarray}
\mathcal{B}^\nu_\Hq \circ \mathcal{R}^\nu F (q)
&=& \scal{\mathcal{C}_{{\psi_2}} F, \scal{ S^\nu(q,\cdot), S^\nu(\cdot\cdot,\cdot) }_{L^{2,\nu}(\R,\C)} }_{L^{2,\nu}(\C,\C)} \nonumber
\\ &=& \left(\frac{\nu}{\pi}\right) \int_{\C} F\left(\frac{\xi}{\sqrt{2}}, \frac{-i\xi}{\sqrt{2}} \right) e_{*}^{\nu[q,\overline{\xi}]} e^{-\nu |\xi|^2} d\lambda(\xi) \label{BR}
\\&=& \left(\frac{\nu}{\pi}\right) \int_{\C} F\left(\frac{\xi}{\sqrt{2}}, \frac{-i\xi}{\sqrt{2}} \right) e^{\nu q \overline{\xi}} e^{-\nu |\xi|^2} d\lambda(\xi) \nonumber
\\& =& F\left(\frac{q}{\sqrt{2}}, \frac{-iq}{\sqrt{2}} \right) \nonumber \\& =:& \mathcal{I}^\nu F(q), \nonumber
\end{eqnarray}
since $({\nu}/{\pi}) e^{\nu q \overline{\xi}}$ is the reproducing kernel of $\mathcal{F}^{2,\nu}(\C)$ and $\xi \longmapsto F\left(\frac{\xi}{\sqrt{2}}, \frac{-i\xi}{\sqrt{2}} \right) \in \mathcal{F}^{2,\nu}(\C)$. The proof is completed. \end{proof}
The following result identifies $\mathcal{I}^\nu(\mathcal{F}^{2,\nu}(\C ^2))$ as the specific subspace of slice regular functions in $\mathcal{F}_{slice}^{2,\nu}(\Hq)$ leaving the slice $\C_i$ invariant, $$\mathcal{F}_{slice,i}^{2,\nu}(\Hq) :=\{ F\in \mathcal{F}_{slice}^{2,\nu}(\Hq); \, F(\C_i)\subset \C_i \}.$$ Its sequential characterization reads
$$\mathcal{F}_{slice,i}^{2,\nu}(\Hq) = \left\{ F(q) =\sum_{m=0}^{+\infty} q^{m} c_m; \, c_m \in \C_i, \sum_{m=0}^{+\infty}\frac{m!}{\nu^m}|c_{m}|^{2}<+\infty \right\} . $$
\begin{thm} \label{I} The transform $\mathcal{I}^\nu$ maps $\mathcal{F}^{2,\nu}(\C^2)$ onto $\mathcal{F}_{slice,i}^{2,\nu}(\Hq)$ and its action on the reproducing kernel $K^\nu_2((u,v),(z,w))$ in \eqref{Rpkernel2} is given by \begin{eqnarray}\label{Ikernel} \mathcal{I}^\nu(K^\nu_2(\cdot,(z,w)))(q) = K^\nu_{\Hq}\left(q,\frac{z+iw}{\sqrt{2}}\right). \end{eqnarray} \end{thm}
\begin{proof} Let $F(z,w) =\sum\limits_{m,n=0}^{+\infty} a_{m,n} e_{m,n}(z,w)\in \mathcal{F}^{2,\nu}(\C^2)$, where $e_{m,n}(z,w)=z^m w^n$.
By means of Theorem \ref{Iintclosed}, we have $ \mathcal{I}^\nu F = \mathcal{B}^\nu_\Hq \circ \mathcal{R}^\nu F \in \mathcal{F}_{slice}^{2,\nu}(\Hq)$. Moreover, for every $q\in \Hq$, we have
$$ \mathcal{I}^\nu(e_{m,n})(q)= Ext (\mathcal{C}_{\psi_2} e_{m,n})(q) = q^{m+n} (-i)^n 2^{-\frac{m+n}{2}} $$
since $\mathcal{C}_{\psi_2} e_{m,n}(\xi) = (-i)^n 2^{-\frac{m+n}{2}} \xi^{m+n}$. Therefore
$$
\mathcal{I}^\nu(F)(q)
= \sum\limits_{j=0}^{+\infty} q^{j}\left(\sum\limits_{k=0}^{j} (-i)^k 2^{-\frac{j}{2}} a_{j-k,k} \right)
= \sum\limits_{j=0}^{+\infty} q^{j} b_{j} ,
$$
where the coefficients $b_{j}= \sum\limits_{k=0}^{j} (-i)^k 2^{-\frac{j}{2}} a_{j-k,k}$ belong to $\C_i$.
This shows that $\mathcal{I}^\nu(\mathcal{F}^{2,\nu}(\C^2) ) \subset \mathcal{F}_{slice,i}^{2,\nu}(\Hq) $.
For the inverse inclusion, let $F \in \mathcal{F}_{slice,i}^{2,\nu}(\Hq)$ and let $f_0\in L^{2,\nu}(\R,\Hq)$ be such that $F=\mathcal{B}^\nu_\Hq f_0$.
Now, since $F(\C_i)\subset \C_i$, the inversion formula \eqref{inverseExpIntRep} shows that $f_0$ is $\C_i$-valued, so that $f_0\in L^{2,\nu}(\R,\C)$ and therefore $f_0 = \mathcal{R}^\nu F_0$ for some $F_0\in \mathcal{F}^{2,\nu}(\C^2)$. Thus,
$F = \mathcal{B}^\nu_\Hq \circ \mathcal{R}^\nu F_0 = \mathcal{I}^\nu F_0$.
The formula \eqref{Ikernel} for arbitrary fixed $(z,w)\in \C^2$ immediately follows from the identity principle (Lemma \ref{IdentityPrinciple}) for the left slice regular functions. Indeed, the left slice regular functions \begin{align}
q \longmapsto \mathcal{I}^\nu(K^\nu_2(\cdot,(z,w)))(q) = Ext\left( \xi \longmapsto K^\nu_2\left(\left(\frac{\xi}{\sqrt{2}},-\frac{i\xi}{\sqrt{2}}\right),(z,w) \right)\right)(q) \end{align} and \begin{align}
q \longmapsto K^\nu_{\Hq}\left(q,\frac{z+iw}{\sqrt{2}}\right) = \left(\frac{\nu}{\pi}\right) e_{*}^{\nu\left[q,\frac{\overline{z+iw}}{\sqrt{2}}\right]} \end{align} coincide on the slice $\C_i$, and therefore their difference is identically zero on the whole of $\Hq$. \end{proof}
\begin{rem} For $F(q) =\sum_{m=0}^{+\infty} q^{m} c_m\in \mathcal{F}_{slice,i}^{2,\nu}(\Hq)$, i.e., with
$c_m \in \C_i$ and $\sum_{m=0}^{+\infty}\frac{m!}{\nu^m}|c_{m}|^{2}<+\infty $, the function $f_0 = \mathcal{R}^\nu F_0$ involved in the above proof is given by $$ f_0(x) = \sum_{m=0}^{+\infty} \frac{\norm{e_m}_{L^{2,\nu}(\C,\C)}}{\norm{H^\nu_{m}}_{L^{2,\nu}(\R,\C)}} c_m H^\nu_{m}(x)\in L^{2,\nu}(\R,\C).$$
Moreover, we have $\norm{f_0}_{L^{2,\nu}(\R,\C)} = \norm{F}_{L^{2,\nu}(\C,\C)}$. \end{rem}
The last result of this section concerns the following integral transform $$\mathcal{J}^{\nu}:=\mathcal{G}^{\nu} \circ (\mathcal{B}^\nu_\Hq)^{-1}$$
mapping $\mathcal{F}_{slice,i}^{2,\nu}(\Hq)$ into the two-dimensional Bargmann-Fock space $\mathcal{F}^{2,\nu}(\C^2)$ and suggested by the commutative diagram
\[ \xymatrix{ \mathcal{F}_{slice,i}^{2,\nu}(\Hq)\ar[r]^{\mathcal{J}^{\nu}} \ar[d]_{(\mathcal{B}^\nu_\Hq)^{-1}}& \mathcal{A}^{2,\nu}(\C^2)\\ L^{2,\nu}(\R,\C)\ar[ru]_{\mathcal{G}^\nu} }. \]
\begin{thm} The image of $\mathcal{J}^{\nu}$ coincides with $ \mathcal{A}^{2,\nu}(\C^2)$ in \eqref{Image}, and its action on any $F\in\mathcal{F}_{slice,i}^{2,\nu}(\Hq)$ is given by \begin{align}\label{actionFv}
\mathcal{J}^{\nu} F(z,w)= \left(\frac{\nu}{\pi}\right)^{\frac 12} F\left(\frac{z+iw}{\sqrt{2}}\right).
\end{align} Moreover, for every fixed $\xi\in \C$, we have \begin{align}\label{Ckernel} \mathcal{J}^{\nu} \left( K^\nu_{\Hq}(\cdot,\xi) \right)(z,w) = K^\nu_2 \left(\left(\frac{\xi}{\sqrt{2}},\frac{-i\xi}{\sqrt{2}}\right),(z,w)\right)
\end{align} where $K^\nu_{\Hq}(q,\xi)$ and $K^\nu_2 \left((u,v),(z,w)\right)$ are the reproducing kernel of $\mathcal{F}_{slice}^{2,\nu}(\Hq)$ and $\mathcal{F}^{2,\nu}(\C^2)$ given by \eqref{RpKernelH} and \eqref{Rpkernel2} respectively. \end{thm}
\begin{proof} Below, we identify $\C$ and $\C_i$. The restriction of $(\mathcal{B}^\nu_\Hq)^{-1}$ to $\mathcal{F}_{slice,i}^{2,\nu}(\Hq)$ has as image $L^{2,\nu}(\R,\C)$ which is contained in $L^{2,\nu}(\R,\Hq)$. This readily follows by proceeding in a similar way as in Theorem \ref{MainThm2} since the rescaled Hermite polynomials $H_m^\nu$ form an orthogonal basis of $L^{2,\nu}(\R,\C)$. Thus, by Theorem \ref{MainThm2}, we obtain $$ \mathcal{G}^{\nu} \circ (\mathcal{B}^\nu_\Hq)^{-1} ( \mathcal{F}_{slice,i}^{2,\nu}(\Hq)) = \mathcal{G}^{\nu}(L^{2,\nu}(\R,\C)) = \mathcal{A}^{2,\nu}(\C^2).$$ This can also be recovered from \begin{align}\label{Cbasis} \mathcal{J}^{\nu}(e_m)(z,w) = \left(\frac{z+iw}{\sqrt{2}}\right)^m = e_m(z,w) \end{align} which immediately follows from the formula \eqref{Ckernel},
whose proof can be handled by direct computation. Indeed, for given $F\in \mathcal{F}_{slice,i}^{2,\nu}(\Hq)$, we have $F(\C_i)\subset \C_i$ and $(\mathcal{B}^\nu_\Hq)^{-1}F = (\mathcal{B}^{1,\nu})^{-1} F_i $, where $(\mathcal{B}^{1,\nu})^{-1}$ is the inverse of the one-dimensional Segal-Bargmann transform and $F_i$ is the restriction of $F$ to the slice $\C_i$. Then, the proof is completed by making use of the definition $\mathcal{G}^\nu f(z,w) = \left(\frac{\nu}{\pi}\right)^{\frac{1}{2}} \mathcal{C}_{\psi_1} (\mathcal{B}^{1,\nu} f)(z,w)$. \end{proof}
\begin{rem} The restriction of $\mathcal{I}^{\nu}= \mathcal{B}^\nu_\Hq \circ \mathcal{R}^{\nu}$ to $ \mathcal{A}^{2,\nu}(\C^2)$ is the inverse of $\mathcal{J}^{\nu}:=\mathcal{G}^{\nu} \circ (\mathcal{B}^\nu_\Hq)^{-1}$, as it satisfies $\mathcal{J}^{\nu} \circ \mathcal{I}^{\nu} = Id_{\mathcal{A}^{2,\nu}(\C^2)}.$ \end{rem}
In the next section, we investigate further properties of the integral transform $\mathcal{G}^\nu$ when combined with the Fourier transform and connecting one and two-dimensional Bargmann-Fock spaces. We also discuss possible generalization to $d$-complex space $\C^d$.
\section{Appendix}
We consider the rescaled Fourier transform $\widetilde{\mathcal{F}}^\nu_{\mp} $ defined on $L^{2,\nu}(\R,\C)$ by
$\widetilde{\mathcal{F}}^\nu_{\mp} = \mathcal{M}_{-\nu/2} \mathcal{F}^\nu_{\mp} \mathcal{M}_{\nu/2} $, where $ \mathcal{M}_\alpha$ denotes the ground state transform $\mathcal{M}_\alpha f:=e^{-\alpha x^2}f$, and $\mathcal{F}^\nu_{\mp} $ is the standard Fourier transform on $L^{2,0}(\R,\C)=L^2(\R,dx)$ with
$$
\mathcal{F}^\nu_{\mp} (\varphi)(x):= \left(\frac{\nu }{2\pi}\right)^{\frac 12} \int_{\R} \varphi(u)e^{{\mp} \nu ixu}du.$$
More explicitly, $\widetilde{\mathcal{F}}^\nu_{\mp} $ acts on $L^{2,\nu}(\R,\C)$ as a bounded linear operator by
\begin{eqnarray}\label{Fourier}
\widetilde{\mathcal{F}}^\nu_{\mp} (\varphi) (x)
:= \left(\frac{\nu }{2\pi}\right)^{\frac 12} \int_{\R} \varphi (u) e^{\frac{\nu}{2}(x {\mp} iu)^2} d\lambda(u).
\end{eqnarray}
Thanks to the well-known Plancherel theorem, it turns out that the Fourier transform $\widetilde{\mathcal{F}}^\nu_{\mp}$ maps $L^{2,\nu}(\R,\C)$ unitarily onto itself.
Accordingly, we can consider the following commutative diagrams
$$\xymatrix{
\mathcal{F}^{2,\nu}(\C) \ar[r]^{\mathcal{T}^\nu_{1,\mp}} \ar[d]_{(\mathcal{B}^{1,\nu})^{-1}} & \mathcal{A}^{2,\nu}(\C^2) \\ L^{2,\nu}(\R,\C) \ar[r]_{\widetilde{\mathcal{F}}^\nu_{\mp} } & L^{2,\nu}(\R,\C) \ar[u]_{\mathcal{G}^\nu}
}
\quad \mbox{and} \quad \xymatrix{
\mathcal{F}^{2,\nu}(\C^2) \ar[r]^{\mathcal{T}^\nu_{2,\mp}} \ar[d]_{\mathcal{R}^{\nu}} & \mathcal{F}^{2,\nu}(\C) \\ L^{2,\nu}(\R,\C) \ar[r]_{\widetilde{\mathcal{F}}^\nu_{\mp}} & L^{2,\nu}(\R,\C) \ar[u]_{\mathcal{B}^{1,\nu}} } .
$$
The transform
$\mathcal{T}^\nu_{1,\mp}:=\mathcal{G}^\nu \circ \widetilde{\mathcal{F}}^\nu_{\mp} \circ(\mathcal{B}^{1,\nu})^{-1}
$
(resp. $ \mathcal{T}^\nu_{2,\mp} :=\mathcal{B}^{1,\nu} \circ \widetilde{\mathcal{F}}^\nu _{\mp} \circ \mathcal{R}^\nu $) maps
$\mathcal{F}^{2,\nu}(\C)$ onto $\mathcal{A}^{2,\nu}(\C^2)$ (resp. $\mathcal{F}^{2,\nu}(\C^2)$ onto $\mathcal{F}^{2,\nu}(\C)$). Their explicit formulas reduce further to elementary composition operators involving the symbols $ {\psi_1}(z,w) = \frac{z+iw}{\sqrt{2}}$ and $ {\psi_2}(\xi) = \frac{1}{\sqrt{2}}(\xi,-i\xi)$, and the reducible representation of the unitary group $U(1):=\{\theta \in \C; \, |\theta |=1 \}$ defined by $\Gamma_\theta \varphi (\xi) := \varphi (\theta \xi)$.
\begin{thm} \label{thmFourier}
The actions of $\mathcal{T}^\nu_{1,\mp}$ and $\mathcal{T}^\nu_{2,\mp}$ are given respectively by \begin{align}\label{Fouc1}
\mathcal{T}^\nu_{1,\mp} = \mathcal{B}^{2,\nu}|_{\mathcal{F}^{2,\nu}(\C )} \circ \Gamma_{{\mp} i} =\left(\frac{\nu}{\pi}\right)^{\frac{1}{2}} \mathcal{C}_{{\mp}i\psi_1} \end{align} on $\mathcal{F}^{2,\nu}(\C )$, and \begin{align}\label{Fouc2} \mathcal{T}^\nu_{2,\mp} = \Gamma_{{\mp}i} \circ Proj\circ (\mathcal{B}^{2,\nu})^{-1} = \left(\frac{\pi}{\nu}\right)^{\frac{1}{2}} \mathcal{C}_{{\mp}i{\psi_2}} \end{align} on $\mathcal{F}^{2,\nu}(\C^2)$. Moreover, we have $\mathcal{T}^\nu_{2,\mp}\circ \mathcal{T}^\nu_{1,\pm}= Id_{\mathcal{F}^{2,\nu}(\C )}$ and $\mathcal{T}^\nu_{2,\mp}\circ \mathcal{T}^\nu_{1,\mp}=\Gamma_{-1} Id_{\mathcal{F}^{2,\nu}(\C )}$.
\end{thm}
\begin{proof}
Recall first that the expression of $(\mathcal{B}^{1,\nu})^{-1}$ is given by
$$(\mathcal{B}^{1,\nu})^{-1} f (x)= \scal{f,S^\nu(\cdot,x) }_{L^{2,\nu}(\C,\C)} ,$$
where $S^\nu$ is the kernel function associated to the rescaled Hermite polynomials $H^\nu_m$ and given by \eqref{Genfct1}.
Therefore, by Fubini's theorem, we get
$$ \widetilde{\mathcal{F}}^\nu \circ(\mathcal{B}^{1,\nu})^{-1} (f)(x) =
\left(\frac{\nu }{2\pi}\right)^{\frac 12} \int_{\C} f(\xi) \left( \int_\R e^{\frac{\nu}{2}(x {\mp} iu)^2} S^\nu(\overline{\xi},u) du\right) e^{-\nu|\xi|^2}d\lambda(\xi) .$$
Straightforwardly, we obtain
$$ \left(\frac{\nu }{2\pi}\right)^{\frac 12}\int_\R e^{\frac{\nu}{2}(x {\mp} iu)^2} S^\nu(\zeta,u) du= S^\nu({\mp} i\zeta,x).$$
Hence \begin{align}\label{FouB} \widetilde{\mathcal{F}}^\nu \circ (\mathcal{B}^{1,\nu})^{-1} f (x)= \scal{\Gamma_{{\mp}i}f,S^\nu(\cdot,x) }_{L^{2,\nu}(\C,\C)} = (\mathcal{B}^{1,\nu})^{-1} \circ \Gamma_{{\mp}i}f (x).
\end{align}
Consequently, the transform $\mathcal{T}^\nu_{1,\mp} = \mathcal{G}^\nu \circ \widetilde{\mathcal{F}}^\nu_{\mp} \circ(\mathcal{B}^{1,\nu})^{-1}$ reduces further to
$$\mathcal{T}^\nu_{1,\mp} f(z,w) = \mathcal{B}^{2,\nu}\circ \mathcal{B}^{1,\nu} ( (\mathcal{B}^{1,\nu})^{-1} \circ \Gamma_{{\mp}i}f)(z,w)
= \mathcal{B}^{2,\nu} f({\mp}iz,{\mp}iw)$$
by means of Theorem \ref{MainThm}, as well as to \begin{align*} \mathcal{T}^\nu_{1,\mp}
= \left(\frac{\nu}{\pi}\right)^{\frac{1}{2}} \mathcal{C}_{\psi_1} \circ \mathcal{B}^{1,\nu} (\mathcal{B}^{1,\nu})^{-1} \circ \Gamma_{{\mp}i}
= \left(\frac{\nu}{\pi}\right)^{\frac{1}{2}} \Gamma_{{\mp}i}\circ \mathcal{C}_{\psi_1}
= \left(\frac{\nu}{\pi}\right)^{\frac{1}{2}} \mathcal{C}_{{\mp}i\psi_1} \end{align*} on $\mathcal{F}^{2,\nu}(\C)$. Moreover, by Theorem \ref{thmLinverse} and \eqref{FouB}, the action of $\mathcal{T}^\nu_{2,\mp} :=\mathcal{B}^{1,\nu} \circ \widetilde{\mathcal{F}}^\nu \circ \mathcal{R}^\nu$ on $\mathcal{F}^{2,\nu}(\C^2)$ reads \begin{align*} \mathcal{T}^\nu_{2,\mp} = \left(\frac{\pi}{\nu}\right)^{\frac{1}{2}} \mathcal{B}^{1,\nu} \circ \widetilde{\mathcal{F}}^\nu_{\mp} \circ (\mathcal{B}^{1,\nu})^{-1} \mathcal{C}_{{\psi_2}}
=\left(\frac{\pi}{\nu}\right)^{\frac{1}{2}} \Gamma_{{\mp}i} \circ \mathcal{C}_{{\psi_2}}
=\left(\frac{\pi}{\nu}\right)^{\frac{1}{2}} \mathcal{C}_{{\mp}i{\psi_2}} . \end{align*} We also have \begin{align*} \mathcal{T}^\nu_{2,\mp} &:= \mathcal{B}^{1,\nu} \circ \widetilde{\mathcal{F}}^\nu_{\mp} \circ \mathcal{R}^\nu \\& =\mathcal{B}^{1,\nu} \circ \widetilde{\mathcal{F}}^\nu_{\mp} \circ (\mathcal{B}^{1,\nu})^{-1}\circ Proj\circ (\mathcal{B}^{2,\nu})^{-1} \\& = \Gamma_{{\mp}i} \circ Proj\circ (\mathcal{B}^{2,\nu})^{-1} . \end{align*} Finally, from \eqref{Fouc1} and \eqref{Fouc2}, we obtain \begin{align*} \mathcal{T}^\nu_{2,\mp} (\mathcal{T}^\nu_{1,\mp} f)(\xi) &= \mathcal{C}_{{\mp}i{\psi_2}} (\mathcal{C}_{{\mp}i{\psi_1}} f)(\xi) \\&= \mathcal{C}_{{\mp}i{\psi_1}} f \left(\frac{\mp i \xi}{\sqrt 2}, \frac{{\mp}\xi}{\sqrt 2}\right)
\\&= f(-\xi) \end{align*} as well as \begin{align*} \mathcal{T}^\nu_{2,\mp} (\mathcal{T}^\nu_{1,\pm} f)(\xi) & = \mathcal{C}_{{\mp}i{\psi_2}} (\mathcal{C}_{{\pm}i{\psi_1}} f)(\xi) \\&= \mathcal{C}_{{\pm}i{\psi_1}} f \left(\frac{\mp i \xi}{\sqrt 2}, \frac{{\mp}\xi}{\sqrt 2}\right)
\\&= f(\xi). \end{align*}
\end{proof}
We conclude this paper by discussing the generalization to the $d$-dimensional complex space $\C^d$. This is possible for $d=2^k$ by considering the integral transform $\mathcal{G}^\nu_k$, mapping the standard Hilbert space $L^{2,\nu}(\R,\C)$ isometrically into the Bargmann-Fock space $\mathcal{F}^{2,\nu}(\C^{2^k})$, defined inductively by $$\mathcal{G}^\nu_k:=
\mathcal{B}^{2^{k},\nu} \circ \mathcal{B}^{2^{k-1},\nu} \circ \cdots \circ \mathcal{B}^{2,\nu} \circ \mathcal{B}^{1,\nu}. $$ We claim that for every $f\in L^{2,\nu}(\R,\C)$ and $Z=(z_1, \cdots ,z_{2^k})\in \C ^{2^k}$ we have $$\mathcal{G}^\nu_k f(Z)= c_{2^{k}}^\nu \mathcal{C}_{{\psi_k}}\mathcal{B}^{1,\nu} f(Z)= c_{2^{k}}^\nu \mathcal{B}^{1,\nu} f({\psi_k}(Z))$$
where $\mathcal{C}_{{\psi_k}}$ denotes the composition operator with the special symbol
$${\psi_k}(Z):= \frac{1}{2^{\frac{k}{2}}}\sum_{m=0}^{2^{k-1}-1}i^m(z_{2m+1}+iz_{2m+2}).$$ The computations hold true for $k=1$ and $k=2$.
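For the reader's convenience, the first two symbols read explicitly
$$ {\psi_1}(z_1,z_2) = \frac{z_1+iz_2}{\sqrt{2}} \quad \mbox{and} \quad {\psi_2}(z_1,z_2,z_3,z_4) = \frac{1}{2}\left( (z_1+iz_2) + i(z_3+iz_4) \right) = \frac{1}{2}\left( z_1+iz_2+iz_3-z_4 \right),$$
the first of which is the symbol ${\psi_1}$ used above (the family ${\psi_k}$ here should not be confused with the map ${\psi_2}(\xi)=\frac{1}{\sqrt{2}}(\xi,-i\xi)$ appearing in Theorem \ref{thmFourier}).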
\end{document} | arXiv |
Earth, Planets and Space
Paleomagnetic studies on single crystals separated from the middle Cretaceous Iritono granite
Frontier letter
Chie Kato (ORCID: orcid.org/0000-0001-8603-3759),
Masahiko Sato (ORCID: orcid.org/0000-0002-2475-3942),
Yuhji Yamamoto (ORCID: orcid.org/0000-0001-9163-0339),
Hideo Tsunakawa (ORCID: orcid.org/0000-0002-6628-9164) and
Joseph L. Kirschvink (ORCID: orcid.org/0000-0001-9486-6689)
Earth, Planets and Space volume 70, Article number: 176 (2018)
Investigations of superchrons are the key to understanding long-term changes of the geodynamo and the mantle's controlling role. Granitic rocks could be good recorders of deep-time geomagnetic field behavior, but paleomagnetic measurements on whole-rock granitic samples are often disturbed by alteration such as weathering and by the presence of multi-domain magnetite. To avoid such difficulties and test the usefulness of single silicate crystal paleomagnetism, here we report rock-magnetic and paleomagnetic properties of single crystals and compare them to those of the host granitic rock. We studied individual zircon, quartz and plagioclase crystals separated from the middle Cretaceous Iritono granite, for which past studies have provided tight constraints on the paleomagnetism and paleointensity. The occurrence of magnetite was very low in zircon and quartz. On the other hand, the plagioclase crystals contained substantial amounts of fine-grained single-domain to pseudo-single-domain magnetite. Microscopic features and the distinctive magnetic behavior of the plagioclase crystals indicate that the magnetite inclusions were generated by exsolution. We therefore performed paleointensity experiments by the Tsunakawa–Shaw method on 17 plagioclase crystals. Nine samples passed the standard selection criteria for reliable paleointensity determinations, and the mean value obtained was consistent with the previously reported whole-rock paleointensity value. The virtual dipole moment was estimated to be higher than 8.9 ± 1.8 × 10²² Am², suggesting that the time-averaged field strength during the middle of the Cretaceous normal superchron was several times larger than that of non-superchron periods. Single plagioclase crystals which have exsolved magnetite inclusions can be more suitable for identification of magnetic signals and interpretation of paleomagnetic records than the conventional whole-rock samples or other silicate grains.
Long superchrons of constant geomagnetic polarity are the most distinctive features of the ~10 Myr-scale trend of the geomagnetic field and are very likely related to whole-mantle convection processes such as the activity of mantle plumes (e.g. Larson and Olson 1991; Glatzmaier et al. 1999; Courtillot and Olson 2007; Zhang and Zhong 2011; Biggin et al. 2012). Numerical dynamo simulations indicate that non-reversing stable dynamos with strong dipole moments will occur under conditions of relatively low CMB heat flow, whereas a reversing dynamo with multipolar nature is expected under conditions with high CMB heat flow (e.g. Kutzner and Christensen 2002; Christensen and Aubert 2006; Olson and Christensen 2006). Additionally, high dipole fields could also be caused by enhanced heterogeneity of CMB heat flow (e.g. Takahashi et al. 2008; Olson et al. 2010).
Understanding the geomagnetic field intensity during superchrons is crucial for revealing the nature of the long-term change of the geodynamo and the controlling role of the mantle on it. While dynamo simulations predict a stronger field during superchrons, previous paleomagnetic studies have not reached a consensus on the paleointensity during the Cretaceous normal superchron (CNS) at 83–120 Ma. Several studies suggest a stronger field during the CNS than the average for ages with frequent reversals (e.g. Tarduno et al. 2006; Tauxe 2006), while others claim the opposite (e.g. Tanaka and Kono 2002; Shcherbakova et al. 2012). It is also challenging to draw a solid conclusion from paleomagnetic databases such as PINT (Biggin et al. 2009) and MagIC (earthref.org/MagIC) because the data deposited in them are mostly from volcanic rocks, which reflect short-term geomagnetic variations and obscure the long-term trends of paleointensity. In order to establish a reliable paleointensity curve for the long-term variation, a new dataset based on appropriate samples and measurement methods is required.
To focus on the long-term variations of the past geomagnetic field, plutonic rocks could provide good candidate samples since they are likely to record the time-averaged field accurately during their long cooling history. Granites in particular have formed at various ages and have been preserved over geological time. However, paleomagnetic studies using granitic rocks are usually difficult due to weathering of the rocks and the non-ideality of coarse-grained multi-domain (MD) magnetite. Also, granitic rocks often contain biotite and pyrrhotite, which decompose easily upon laboratory heating and form some magnetite as a byproduct. One of the most promising approaches to overcome these difficulties is to separate single silicate crystals that contain magnetic mineral inclusions from the rocks and use them for paleomagnetic measurements.
Single silicate crystals with magnetic inclusions have the potential to yield reliable paleointensity data because the inclusions are more protected from chemical alteration such as oxic weathering in nature, and from thermochemical oxidation upon laboratory heating, than are the host granitic rocks. Zircon crystals have been used for paleointensity studies owing to their permanence against chemical alteration and the ability to obtain direct radiometric ages on them (Tarduno et al. 2014, 2015), although Weiss et al. (2015) have raised objections. Detailed rock-magnetic properties of zircons collected from river sand in the Tanzawa pluton, Japan, showed their adequacy for paleointensity measurements (Sato et al. 2015). Using the Bishop Tuff of northeastern California, Fu et al. (2017) reported that the absolute value of zircon paleointensity was consistent with that of the bulk rock. Paleointensity and rock-magnetic properties have also been intensively studied on single plagioclase crystals, which can contain magnetically stable fine-grained magnetite inclusions (Tarduno et al. 2006). For basalts from the 1955 Kilauea eruption, the recovered paleointensity has been compared with the whole-rock and magnetic observatory data, with good agreement (Cottrell and Tarduno 1999). Plagioclase crystals separated from lavas from the Rajmahal Traps (113–116 Ma; Tarduno et al. 2001), Strand Fiord Formation (95 Ma; Tarduno et al. 2002), Ocean Drilling Program (ODP) Site 1205 on Nintoku Seamount of the Hawaiian-Emperor volcanic chain and ODP Site 801 in the Pigafetta Basin (55.59 and 160 Ma, respectively; Tarduno and Cottrell 2005), and the Kiaman Reversed Superchron type area (~ 262–318 Ma; Cottrell et al. 2008) have been used for studies on paleointensity variation related to the reversal frequency of the dipole field. Quartz phenocrysts have also been targeted in studies of Archean rocks (Tarduno et al. 2007, 2010, 2014). All of the paleointensity measurements mentioned above were made using variants of the Thellier–Thellier method (Thellier and Thellier 1959; Coe 1967; Yu et al. 2004).
Rock-magnetic properties of plagioclase separated from plutonic rocks such as granitoids (Usui et al. 2015) and gabbros (Feinberg et al. 2005; Muxworthy and Evans 2012) have also been reported. Some of them are characterized by tiny needle-shaped magnetite inclusions possibly formed by exsolution from the host plagioclase (Feinberg et al. 2005; Usui et al. 2015; Wenk et al. 2011). Several preceding studies reported paleointensity estimates using plutonic rocks in which the authors argued that the magnetic records were carried by exsolved magnetite (Selkin et al. 2008; Usui 2013). Plagioclase with exsolved magnetite is potentially an excellent recording medium of the ancient geomagnetic field, but should be treated carefully because of (1) the magnetic remanence anisotropy caused by needle-shaped magnetite, which can affect paleomagnetic results (Paterson 2013; Usui et al. 2015), (2) nonlinear thermoremanence acquisition (Selkin et al. 2007), and (3) the unknown formation temperature of exsolved magnetite (Feinberg et al. 2005). Usui and Nakamura (2009) reported paleointensity using single plagioclase crystals separated from a granitic rock, although they did not claim to have achieved exact, reliable estimates. Despite its potential for establishing the long-term trend of the geomagnetic field strength, the paleointensity of single crystals separated from plutonic rocks has not been compared to that of the host whole rock to assess its reliability.
This study aims to assess how reliable paleointensity measurements on single silicate crystals separated from granitic rocks are compared to those on whole-rock samples. We conducted systematic rock-magnetic measurements on zircon, quartz and plagioclase grains separated from whole-rock samples collected from the Cretaceous Iritono granite, a paleomagnetically well-studied unit in northeast Japan. The results suggest that plagioclase is the most appropriate candidate mineral for paleointensity measurements among the studied minerals. We therefore conducted paleointensity experiments on plagioclase and compared the results with previously published results from the host granitic rock. Paleointensity experiments were conducted by the Tsunakawa–Shaw method (Tsunakawa and Shaw 1994; Yamamoto et al. 2003; Mochizuki et al. 2004; Yamamoto and Tsunakawa 2005; Yamamoto et al. 2015), which might be more suitable for single-grain samples with exsolved magnetite than the variants of the Thellier–Thellier method. The obtained paleointensity results were consistent with the whole-rock data; thus, plagioclase crystals separated from granitic rocks have the potential to constrain the long-term variation of paleointensity.
We studied zircon, quartz and plagioclase separated from the middle Cretaceous Iritono granite in the Abukuma massif, northeast Japan (Fig. 1). The cooling history of the Iritono granite is constrained by Wakabayashi et al. (2006) and Tsunakawa et al. (2009) using a thermal diffusion model of the granite body, and by two radiometric age determinations with different closure temperatures. The U–Pb zircon age is 115.7 ± 1.9 Ma (Tsunakawa et al. 2009) and the ⁴⁰Ar–³⁹Ar biotite age is 101.9 ± 0.2 Ma (Wakabayashi et al. 2006). The age of the Iritono granite corresponds to the middle part of the CNS, during which polarity reversals ceased for a period as long as 20 Myr. The estimated cooling time for a lock-in of a paleomagnetic record is 4 × 10⁴ to 1.4 × 10⁷ years. Wakabayashi et al. (2006) and Tsunakawa et al. (2009) previously conducted rock-magnetic and paleomagnetic studies and paleointensity experiments on the whole-rock samples of the Iritono granite. The magnetic minerals in the Iritono granite were magnetite and pyrrhotite, and their fraction varied with sampling locations. Samples from site ITG09 showed the least contribution of pyrrhotite, and the primary magnetization was clearly distinguished from the secondary magnetization carried by low blocking temperature or low coercivity components in terms of natural remanent magnetization (NRM) direction. Tsunakawa et al. (2009) studied the paleointensity by both Coe's version of the Thellier method (Thellier and Thellier 1959; Coe 1967) and the Tsunakawa–Shaw method using the whole-rock sample of site ITG09. Although the obtained paleointensities exhibit a bimodal distribution according to the different methods used, they were indistinguishable at the 2σ level and thus were combined into one site-mean. The resultant site-mean paleointensity was 58.4 ± 7.3 μT before applying the cooling rate correction, and 39.0 ± 4.9 μT after correction. This corresponds to a virtual dipole moment (VDM) of 9.1 ± 1.1 × 10²² Am². The present study used the mineral samples separated from the core samples of site ITG09.
Map of the Iritono granite in the Abukuma massif, northeast Japan (reproduced from Wakabayashi et al. 2006)
A granite sample core 2.54 cm in diameter was crushed with a non-magnetic mortar and pestle, and sorted by 850 μm and 350 μm mesh screens. Heavy fractions of the sample smaller than 350 μm were concentrated by an aqueous panning technique. Zircons with no visible cracks or opaque particles on the surface were hand-picked under a binocular stereoscopic microscope. Quartz and plagioclase were hand-picked from samples larger than 350 μm and smaller than 850 μm. These selected crystals were leached by hydrochloric acid (HCl) to remove any tiny magnetic particles on the sample surface. HCl concentration and leaching duration was 12 N and 4 days for zircon and quartz, and 6 N and 8 h for plagioclase, respectively. Samples were then sandwiched individually between layers of magnetically clean Scotch Magic Transparent Tape in the method of Sato et al. (2015) or were mounted individually on a glass holder (see below) for rock-magnetic measurements and paleointensity experiments.
Remanence measurements using SQUID magnetometer
A superconducting quantum interference device (SQUID) magnetometer (2G Enterprises Model 755-4.2 cm) was used for remanence measurements. We followed the method of single-crystal measurements by Sato et al. (2015). A sample holder made of acrylonitrile butadiene styrene (ABS) was used for measurements. Single-crystal samples sandwiched by tape or mounted on the glass holder were fixed on the edge of the ABS holder by double-stick tape. The magnetic moments of the ABS holder and double-stick tape were measured before and after sample measurement and subtracted from the sample moment. The detection limit of the method was 2 × 10⁻¹² Am², so we employed 4 × 10⁻¹² Am² as a threshold to distinguish significant remanence intensity from noise.
For stepwise thermal demagnetization (ThD) and paleointensity experiments, we made new thermally resistant holders for single-crystal measurement of the SQUID magnetometer based on the sample holder designed for SQUID microscope measurements (Fu et al. 2017). Images of the sample holder are shown in Fig. 2. Non-alkali high-temperature glass plates (Eagle XG, Corning, 1.1 mm thick) were cut into squares of 7 mm on a side. A 1-mm-diameter pit was drilled in the center of the glass plate, followed by intense cleaning in concentrated HCl. A single-crystal sample was put into the pit and fixed by stuffing SiO2 powder with grain size of ~ 0.8 μm. This technique enabled us to conduct heating experiments on single crystals in a fixed sample coordinate. The blank magnetic moment of the glass holder after subtracting the moment of ABS holder and double-stick tape was well below the practical detection limit of the SQUID magnetometer.
Image of the glass holder fixed on the ABS holder. 7 mm each side. A plagioclase grain can be seen through the glass in the center
First, we measured NRM intensity of 349, 455, and 268 grains for zircon, quartz, and plagioclase, respectively. On the basis of the NRM intensities, we then selected samples for further rock-magnetic and paleomagnetic measurements.
Rock-magnetic measurements
For the selected samples that showed significant NRM intensity (> 4 × 10⁻¹² Am² per grain), we conducted low-temperature remanence measurements using a magnetic property measurement system (Quantum Design model MPMS-XL5). Isothermal remanent magnetization (IRM) was first imparted at 2.5 T and 10 K after zero-field cooling from 300 K. The remanence was then measured during warming in zero-field (ZFC remanence). Subsequently, samples were cooled to 10 K in a 2.5 T field and then remanence was further measured during warming in zero-field (FC remanence).
Hysteresis loop measurements were taken for plagioclase grains and a quartz grain which contained magnetite using an alternating gradient magnetometer (LakeShore model MicroMag 2900). Samples sandwiched by tape were mounted on a transducer probe with a silica sample stage (Lake Shore model P1 probe). The blank saturation magnetization of the probe was 6 × 10⁻¹⁰ Am². Maximum field during hysteresis loop measurement was 0.5 T, and the field increment was 4 mT. Diamagnetic/paramagnetic corrections were applied to the obtained hysteresis loop by subtracting the average slopes at applied field of |B| > 300 mT. Results are exhibited in the Day plot (Day et al. 1977).
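As an illustration of this high-field slope correction, the following minimal Python sketch (our own illustrative code, not part of the original measurement workflow; the synthetic loop and its parameters are invented) fits the average of the branch slopes at |B| > 300 mT and subtracts the resulting linear term:

import numpy as np

def slope_corrected_loop(B, M, b_min=0.3):
    # Subtract the para/diamagnetic contribution estimated from the average
    # of the loop slopes at applied fields |B| > b_min (in tesla).
    B = np.asarray(B, dtype=float)
    M = np.asarray(M, dtype=float)
    s_pos = np.polyfit(B[B > b_min], M[B > b_min], 1)[0]
    s_neg = np.polyfit(B[B < -b_min], M[B < -b_min], 1)[0]
    slope = 0.5 * (s_pos + s_neg)
    return M - slope * B, slope

# synthetic example: a saturating ferromagnetic component plus a ~60 nAm^2/T linear term
B = np.linspace(-0.5, 0.5, 251)            # applied field, T
M = 1e-9 * np.tanh(B / 0.03) + 60e-9 * B   # magnetic moment, Am^2
M_corr, slope = slope_corrected_loop(B, M)
print(f"estimated high-field slope: {slope * 1e9:.1f} nAm^2/T")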
Stepwise ThD of NRM was performed on selected zircon and quartz samples using a TDS-1 thermal demagnetizer (Natsuhara Giken). For plagioclase samples, stepwise ThD of laboratory-imparted thermoremanent magnetization (TRM) was performed after paleointensity experiments. TRM was given by cooling from 610 °C in a 50 μT field in air.
To investigate the NRM to IRM (NRM/IRM) distribution of plagioclase, a room-temperature IRM was imparted to 75 plagioclase samples at 2 T by an MMPM10 pulse magnetizer (Magnetic Measurements), and the IRM intensity was measured using the SQUID magnetometer.
Paleointensity experiments
We performed paleointensity experiments with the Tsunakawa–Shaw method on 17 plagioclase grains. We followed the procedures described in Yamamoto and Tsunakawa (2005). In this method, stepwise alternating field demagnetization (AFD) of NRM and TRM is performed. Assuming the similarity of TRM and anhysteretic remanent magnetization (ARM), alteration caused by laboratory heating is monitored by comparing the coercivity spectra of ARMs before heating (ARMbefore) and after (ARMafter). TRM is corrected by:
$$ {\text{TRM}}^{*} = {\text{TRM}} \times {\text{ARM}}_{\text{before}} /{\text{ARM}}_{\text{after}} $$
where TRM* is the corrected TRM. Paleointensity is determined using the slope in the TRM*–NRM diagram. The samples are heated twice. Assuming that the thermal alteration in the first and second heatings is similar, the validity of the ARM correction for alteration was checked after the second heating by comparing the recovered field intensity to the known laboratory field. Samples are subjected to low-temperature demagnetization (LTD) before each stepwise AFD series to selectively demagnetize the unstable coarse-grained magnetite. LTD treatment was conducted by cooling a sample in a dewar bottle inside a triple magnetically shielded case filled with liquid nitrogen for 5 min. TRM was given by cooling from 610 °C in a 50 μT field in a vacuum (< 10 Pa). The heating time at the top temperature of 610 °C was 10 (20) minutes for the first (second) heating, with a subsequent cooling to room temperature at a rate of approximately 10 °C per minute. AFD treatment and ARM impartment were carried out using an alternating field demagnetizer (Natsuhara Giken model DEM-95C). AFD was conducted during sample tumbling. ARM was imparted at a DC field of 50 μT, with a peak AC field of 180 mT. Here we define ARM0, ARM1 and ARM2 as the ARM imparted before heating, after the first heating and after the second heating, respectively. Also, TRM1 and TRM2 are given by the first and second heating, respectively. The corrected TRMs, TRM1* and TRM2*, are given by TRM1 × ARM0/ARM1 and TRM2 × ARM1/ARM2, respectively. The paleointensity value is calculated from the slope of the NRM–TRM1* plot. The field intensity calculated from the slope of the TRM1–TRM2* plot is compared to the laboratory field intensity. We attempted to deal with the anisotropy effect on paleointensity by two experimental protocols. For four samples (sample IDs 9004, 9009, 9013 and 9016), all ARMs and TRMs were given along the likely direction of the characteristic remanent magnetization (ChRM), estimated from the orthogonal plot of AFD of the NRM, so that the anisotropy bias was canceled. For the others we followed the standard protocol of the Tsunakawa–Shaw method, in which ARM0 is approximately parallel to ChRM and ARM1 (ARM2) is parallel to TRM1 (TRM2). This protocol employs a built-in anisotropy correction using ARMs (Yamamoto et al. 2015); the anisotropy bias caused by the angular difference between NRM (TRM1) and TRM1 (TRM2) is corrected by the ratios ARM0/ARM1 (ARM1/ARM2). In the present study, ARM1, ARM2, TRM1 and TRM2 were imparted along the Y axis, which is independent of the direction of ChRM. In this study, AFD steps for ARMs without LTD treatment (ARM00, ARM10, and ARM20) were omitted.
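To make the arithmetic of the ARM correction and the slope calculation concrete, the following minimal Python sketch reproduces the bookkeeping described above. The AF-step intensities are invented placeholders purely for illustration (they are not data from Table 1), and a simple least-squares line is used for the slope; the actual fitting intervals and statistics follow Yamamoto and Tsunakawa (2005).

import numpy as np

# invented remanence intensities remaining after successive AFD steps (illustration only)
nrm  = np.array([9.5, 8.1, 6.4, 4.9, 3.3, 2.0, 1.1])   # NRM
trm1 = np.array([8.9, 7.4, 6.0, 4.4, 3.1, 1.9, 1.0])   # TRM1 (first heating)
arm0 = np.array([10.0, 8.6, 7.0, 5.2, 3.6, 2.2, 1.2])  # ARM0 (before heating)
arm1 = np.array([9.6, 8.3, 6.7, 5.0, 3.5, 2.1, 1.2])   # ARM1 (after first heating)

# ARM correction for laboratory alteration: TRM1* = TRM1 x ARM0/ARM1
trm1_star = trm1 * arm0 / arm1

# paleointensity from the slope of the NRM-TRM1* plot, scaled by the 50 uT laboratory field
slope = np.polyfit(trm1_star, nrm, 1)[0]
print(f"estimated paleointensity: {slope * 50.0:.1f} uT")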
Remanence anisotropy measurements
To assess anisotropy effect, we measured the ARM anisotropy of 19 plagioclase samples including 13 samples which were subjected to the paleointensity experiments. ARM was imparted along three orthogonal axes (ARMx, ARMy, and ARMz) to obtain the remanence anisotropy tensor. Measurements were taken after LTD treatments. TRM anisotropy tensor was also measured after paleointensity experiments for some samples and the consistency with the ARM anisotropy tensor was checked. The ARM and TRM measurements were also taken after AFD with a peak AC field of 50 mT. ARM anisotropy of whole-rock samples was checked based on measurement results obtained using a spinner magnetometer (Natsuhara Giken model SMD88).
Rock-magnetic properties of zircon
Sixteen out of 349 zircon samples had NRM intensities larger than the threshold (Fig. 3a). Low-temperature magnetometry and stepwise ThD measurements of NRM were taken on selected samples that had significant NRM intensity. Representative results are summarized in Fig. 4. Stepwise ThD treatment for NRM was performed on four samples. Two samples showed a characteristic magnetization component and pyrrhotite-like blocking temperature (Fig. 4a), but the other two did not show any stable remanence component (Fig. 4b). Low-temperature experiments were performed on additional five samples. One sample showed a phase transition of pyrrhotite at ~ 30 K (Fig. 4c), while the other four samples did not show any obvious transition (Fig. 4d). We concluded that the dominant magnetic inclusion in zircon is pyrrhotite and/or magnetically very soft materials. Since the whole-rock study determined that a low blocking temperature (< 400 °C) component is probably carried by pyrrhotite and hence was most likely remagnetized by a reheating event (Wakabayashi et al. 2006), we did not use zircon for paleointensity experiments.
Histogram of NRM intensity of a zircon, b quartz and c plagioclase. Dotted lines indicate the threshold for significant remanence. Inset figures in a and b show the histogram of NRM intensity above the threshold (4 × 10⁻¹² Am²)
Results of experiments on selected zircon. a, b Stepwise ThD of NRM. In the orthogonal plot, open and closed circles indicate X–Y and X–Z planes, respectively. c, d Low-temperature remanence measurements. Solid lines for ZFC measurements and dotted lines for FC measurements
Rock-magnetic properties of quartz
Similar to the zircon samples, very few samples of quartz (7 out of 455) had NRM intensities larger than the threshold (Fig. 3b). We took stepwise ThD measurements of NRM on two quartz grains. In both samples, magnetization decreased generally toward the origin in the orthogonal plot. One sample shows a high blocking temperature suggesting a magnetite inclusion (Fig. 5a), and the other sample exhibits a lower, pyrrhotite-like blocking temperature (Fig. 5b). We further took low-temperature magnetometry measurements and hysteresis loop measurements on the former sample. The Verwey transition of magnetite was recognized near 120 K (Fig. 5c), indicating titanium-poor magnetite (Özdemir et al. 1993; Moskowitz et al. 1998). The high coercivity (Bc > 10 mT) exhibited in the slope-corrected hysteresis loop suggests the existence of fine-grained magnetite. We concluded that quartz is a potentially ideal sample for paleomagnetic study. However, we decided not to use quartz for paleointensity experiments because it was difficult to find enough magnetite-bearing samples to study.
Results of experiments on selected quartz. a, b Stepwise ThD of NRM. In the orthogonal plot, open and closed circles indicate X–Y and X–Z planes, respectively. c Low-temperature remanence measurements. Solid lines for ZFC measurements and dotted lines for FC measurements. d Hysteresis loop after slope correction. The slope correction coefficient was + 74.8 nAm²/T
Rock-magnetic properties of plagioclase
A histogram of NRM intensity for the plagioclase crystals is shown in Fig. 3c. In contrast to the zircon and quartz samples, a very high proportion of the plagioclase samples (224 out of 268; 84%) exhibited significant NRM intensities. Figure 6a shows NRM intensity plotted against IRM intensity for the 75 plagioclase grains. NRM/IRM is distributed in a narrow range around 0.1 (Fig. 6b), indicating a similar magnetic carrier and NRM origin among the plagioclase grains. The NRM/IRM ratio of around 0.1 is higher than the TRM(50 μT)/IRM ratio for synthetic samples which resemble rocks (Yu 2010), and is consistent with, but somewhat lower than, previous reports on plagioclase crystals (Usui et al. 2015) or rocks containing exsolved magnetite (Selkin et al. 2007).
Results of experiments on plagioclase. a NRM intensity plotted as a function of IRM intensity. Horizontal and vertical dashed lines indicate the threshold for significant remanence. b Histogram of NRM intensity divided by IRM intensity. c Representative hysteresis loop after slope correction. The slope correction coefficient was + 61.4 nAm²/T. d Day plot of quartz and plagioclase grains shown with previous reports by Wakabayashi et al. (2006). Closed symbols represent results of this study. The reversed triangle indicates quartz, and the circle indicates plagioclase. Open symbols represent results of Wakabayashi et al. (2006). Triangle, circle and square indicate non-separated chips, feldspar fraction and biotite fraction, respectively. Dotted lines show the SD-MD magnetite mixture trend after Channell and McCabe (1994) and Parry (1982). e Low-temperature remanence measurements. Solid lines for ZFC measurements and dotted lines for FC measurements. f Stepwise ThD of TRM given at 50 μT compared to that of TRM1 of the whole-rock sample ITG09b-34-1 (Wakabayashi et al. 2006). Dashed lines for plagioclase grains and solid line for whole rock. Solid gray line indicates TRM value before ThD
We performed magnetic hysteresis and low-temperature magnetometry measurements on four selected grains with different NRM/IRM ratios. All four samples exhibited similar features in both hysteresis and low-temperature magnetization. Results of the hysteresis measurements fall in the PSD region of the Day plot (Fig. 6d) and are concentrated in a narrower region of the diagram compared to the whole rock. This indicates that the magnetite in the plagioclase crystals has a narrower range of grain size than that in the whole rock. The Verwey transition of magnetite was clearly observed at approximately 120 K (Fig. 6e), indicating a very low titanium content of the magnetite (Özdemir et al. 1993; Moskowitz et al. 1998). The larger remanence in the FC curve relative to the ZFC curve (Fig. 6e) also suggests a dominance of fine-grained magnetite (Moskowitz et al. 1993; Carter-Stiglitz et al. 2001, 2002; Kosterov 2003).
After paleointensity measurements, we took hysteresis measurements on four samples and low-temperature measurements on one sample. The results were similar to those shown in Fig. 6c, e, which implies that the double heating during paleointensity measurements did not severely affect the magnetic characteristics of plagioclase grains. A distribution of the blocking temperature was investigated on four samples after paleointensity experiments. Results show a very narrow blocking temperature distribution around 530–580 °C (Fig. 6f).
Figure 7 is a microscopic image of a polished plagioclase sample (sample no. 68 in Table 2). Tiny opaque minerals with rounded to needle-like shapes are uniformly distributed in the host plagioclase and show no association with cracks. These opaque minerals are probably magnetite, and the texture implies that the magnetite was not generated by secondary alteration but rather has a primary origin such as incorporation during plagioclase crystallization or exsolution under subsolidus conditions. Furthermore, the needle-like shape of the magnetite and its preferred alignment relative to the feldspar suggest an origin via exsolution, because magnetite tends to form equant octahedral crystals when crystallizing from a magma. The needle-like grains can be categorized as SD owing to their particle length (a few microns in most cases, but up to > 10 μm) and width-to-length ratio of < 0.1 (Dunlop and Özdemir 1997). The round-shaped grains are possibly in the PSD state.
Microscopic image of a polished single plagioclase crystal. A stack of images taken at different focal depths
To summarize, the rock-magnetic measurements of the plagioclase samples indicate that the plagioclase crystals contain nearly pure, needle-shaped SD and PSD magnetite with widths of less than a few microns and various aspect ratios, and are suitable for paleointensity measurements.
Paleointensity experiments of plagioclase
We conducted Tsunakawa–Shaw paleointensity experiments on 17 plagioclase grains. In consideration of the sensitivity of the instrument, samples with NRM intensities larger than 5 × 10⁻¹¹ Am² were chosen for the experiments. Taking the weak remanences of single-crystal samples into account, we employed slightly different selection criteria from those of the study by Yamamoto and Tsunakawa (2005), which worked on the strong remanences of volcanic whole rocks. The criteria we adopted are:
A primary component found in an orthogonal plot of NRM demagnetization
f > 0.3 in a NRM–TRM1* plot
R > 0.90 in a NRM–TRM1* plot
Slope of a TRM1–TRM2* plot within 1 ± 0.1
R > 0.95 in a TRM1–TRM2* plot
where f is the NRM fraction of the primary component and R is the correlation coefficient. Primary components were identified on an orthogonal plot of NRM and were associated with MAD values < 16°. By the LTD treatment, 5–15% of the NRM was demagnetized. The components demagnetized by LTD are attributed to remanence carried by PSD magnetite (Heider et al. 1992). By the above criteria, 9 out of 17 results were selected: two results were rejected by criteria 1 and/or 3, and six results were rejected by criterion 4. Typical examples of successful and rejected results are presented in Fig. 8 and in Figs. 9 and 10, respectively. Results of all 17 samples are summarized in Table 1. Paleointensities obtained from the nine accepted results range between 43.1 and 77.9 μT, yielding an average of 57.4 μT and a standard deviation of 11.8 μT. Figure 11 shows the individual plagioclase paleointensities and their average together with the average paleointensity reported from the whole rocks. The plagioclase average is in good agreement with the average paleointensity reported from the whole rocks, though the dispersion of the plagioclase paleointensities is slightly larger than that of the whole rocks.
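The acceptance test for criteria 2–5 (criterion 1 being a visual judgment of the orthogonal plot) can be expressed compactly as in the Python sketch below; the numbers fed in are placeholders, not values from Table 1.

def passes_criteria(f, r_nrm_trm1, slope_trm1_trm2, r_trm1_trm2):
    # criteria 2-5: NRM fraction f > 0.3, R > 0.90 on the NRM-TRM1* plot,
    # TRM1-TRM2* slope within 1 +/- 0.1, and R > 0.95 on the TRM1-TRM2* plot
    return (f > 0.3 and r_nrm_trm1 > 0.90
            and abs(slope_trm1_trm2 - 1.0) <= 0.1
            and r_trm1_trm2 > 0.95)

# placeholder example (not an actual sample from Table 1)
print(passes_criteria(f=0.45, r_nrm_trm1=0.97, slope_trm1_trm2=1.04, r_trm1_trm2=0.99))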
A representative result of successful paleointensity measurements by the Tsunakawa–Shaw method on single plagioclase grain. The dotted line indicates where the horizontal and vertical axes are equal. In the orthogonal plot, open and closed circles indicate X–Y and X–Z planes, respectively
Example of failed paleointensity measurements by the Tsunakawa–Shaw method on single plagioclase grain. In the orthogonal plot, open and closed circles indicate X–Y and X–Z planes, respectively. TRM1/TRM2* slope severely exceeds 1
Example of failed paleointensity measurements by the Tsunakawa–Shaw method on single plagioclase grain. In the orthogonal plot, open and closed circles indicate X–Y and X–Z planes, respectively. No linear portion in the NRM/TRM1* plot
Table 1 Results on Paleointensity measurements of plagioclase samples
Summary of paleointensity and error (1σ) of plagioclase compared with the whole rock. Black circles indicate results of each plagioclase grain. Green circle denotes the mean of nine plagioclase grains. Red square marks the mean of whole-rock measurements (Tsunakawa et al. 2009)
Two protocols were employed for handling the anisotropy bias on paleointensity ("Paleointensity experiments" section). In both protocols, it was technically difficult to impart ARM0 accurately parallel to ChRM. Therefore, the anisotropy bias on each sample would not be corrected completely. The angular differences between ChRM and ARM0 were 24° at most. The possible canceling of anisotropy bias by averaging a number of samples is discussed in "Anisotropy effect on paleointensity" section. The protocol in which ARM1, ARM2, TRM1 and TRM2 were given along the Y axis seems to be more reproducible for the present sample configuration, though the number of studied samples was not enough to determine which protocol was more suitable.
We found that ARM was larger than TRM in all plagioclase samples, in contrast to the whole-rock sample. This has been reported as a peculiar feature of exsolved magnetite by Usui et al. (2015).
Remanence anisotropy of plagioclase
ARM anisotropy tensors were estimated from the measured ARMx, ARMy, and ARMz for each plagioclase grain. Eigenvalues and the anisotropy parameters, the corrected anisotropy degree Pj and the shape factor Tj (Jelinek 1981), were calculated (Table 2). Positive and negative values of Tj indicate that the shape of the anisotropy ellipsoid is oblate and prolate, respectively. The median of Pj is 3.35, and Tj varies from + 0.72 to − 0.80. Typical results of the analysis of anisotropy directions and anisotropy parameters are shown in Fig. 12. The directions of the anisotropy axes are identical for ARM and TRM and do not change with AFD. Hence, we used the ARM anisotropy tensor as a proxy for the TRM anisotropy tensor in the discussion in the "Anisotropy effect on paleointensity" section. Pj increased after AFD, which is reasonable considering that grains with high aspect ratios correspond to high coercivity components. The whole-rock ARM was isotropic (Pj = 1.2, Tj = − 0.28).
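For reference, the corrected anisotropy degree Pj and the shape factor Tj of Jelinek (1981) can be computed from the three eigenvalues as in the short Python sketch below; plugging in the typical eigenvalues (1.56, 0.90, 0.49) quoted in the next section reproduces Pj ≈ 3.2 and Tj ≈ 0.05, consistent with the values used in the discussion.

import numpy as np

def jelinek_parameters(w1, w2, w3):
    # corrected anisotropy degree Pj and shape factor Tj (Jelinek 1981)
    # from the eigenvalues w1 >= w2 >= w3 of the remanence anisotropy tensor
    eta = np.log([w1, w2, w3])
    dev = eta - eta.mean()
    pj = np.exp(np.sqrt(2.0 * np.sum(dev ** 2)))
    tj = (2.0 * eta[1] - eta[0] - eta[2]) / (eta[0] - eta[2])
    return pj, tj

print(jelinek_parameters(1.56, 0.90, 0.49))   # approximately (3.2, 0.05)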
Table 2 Anisotropy parameters of plagioclase samples
Representative results on the anisotropy axes measurement on single plagioclase crystals. Sample holder coordinate (not oriented). Left side, sample with prolate anisotropy; right side, sample with oblate anisotropy
Anisotropy effect on paleointensity
Our paleointensity value could be either larger or smaller than the true value depending on the angles between the anisotropy axes and the directions of the ancient or laboratory fields (Paterson 2013). In our paleointensity experiments, the anisotropy bias is mainly caused by the directional difference between the external field that gave the NRM and the laboratory field that gave the ARM0. Since the whole-rock sample was isotropic, the plagioclase grains should be randomly oriented in the host rock, assuming that the ARM of the whole rock is mainly carried by plagioclase-hosted magnetite. Therefore, the direction of the external field which gave the NRM should be random with respect to the anisotropy axes of each plagioclase grain. We calculated the anisotropy bias between two remanence vectors given by randomly oriented external fields with the same intensity, using the typical anisotropy tensor with eigenvalues (w1, w2, w3) = (1.56, 0.90, 0.49) and assuming that the laboratory field was also randomly oriented with respect to the anisotropy axes. Figure 13 shows the anisotropy bias (the ratio of the intensities of the two remanence vectors) as a function of the angular difference between the two remanence vectors (NRM and ARM0). In the present study, we obtained paleointensity results from nine plagioclase samples; the angular difference between NRM and ARM0 for each sample was below 25°. Under this condition, the anisotropy bias averaged over nine samples was within 1 ± 0.1, and the standard deviation was below 25%. Therefore, the anisotropy bias is likely canceled by averaging the paleointensity results from the nine samples in this study. Also, the variation of the experimental results (~ 20% of the mean value) was consistent with our calculation. We concluded that accurate paleointensity information can be derived from the mean paleointensity value of an assembly of single plagioclase crystals, while the large dispersion of paleointensity values is intrinsic to the randomly oriented anisotropic grain assemblage.
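The following short Python sketch is an illustrative reimplementation of this Monte Carlo estimate (not the authors' code): a diagonal tensor with the quoted eigenvalues is applied to pairs of randomly oriented unit fields of equal strength, and the intensity ratio of the two resulting remanence vectors is examined for pairs whose remanences differ in direction by less than 25°, as in the experiments.

import numpy as np

rng = np.random.default_rng(0)
W = np.diag([1.56, 0.90, 0.49])    # typical anisotropy tensor in its eigenframe

def random_unit_vectors(n):
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

n = 100000
f1, f2 = random_unit_vectors(n), random_unit_vectors(n)   # directions of the two applied fields
m1, m2 = f1 @ W, f2 @ W                                    # resulting remanence vectors
r1, r2 = np.linalg.norm(m1, axis=1), np.linalg.norm(m2, axis=1)
bias = r1 / r2                                             # anisotropy bias (intensity ratio)
cosang = np.sum(m1 * m2, axis=1) / (r1 * r2)
angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))  # angle between the two remanences

sel = angle < 25.0
print(f"mean bias for angles < 25 deg: {bias[sel].mean():.2f}")
print(f"standard deviation of bias:    {bias[sel].std():.2f}")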
Anisotropy bias calculated from the typical anisotropy tensor of plagioclase samples as a function of angle between two remanence vectors (here assumed to be NRM and ARM0). An anisotropy bias larger than 1 indicates that the paleointensity is overestimated, and vice versa
Based on anisotropy measurements of plagioclase crystals separated from an Archean granitoid, Usui et al. (2015) demonstrated that (1) the geometric mean instead of the arithmetic mean should be used, and (2) tens of crystals would be needed to achieve reliable paleointensity estimates. In the present study, the geometric mean and the arithmetic mean agree within the standard deviation, so fewer crystals are required. This difference can be attributed to the variation of the anisotropy effect; the anisotropy tensor they used for their estimates was more anisotropic (corresponding to Pj = 6.21 and Tj = 0.34) than that used in the present study (corresponding to Pj = 3.18 and Tj = 0.04). Since the shape and fabric of exsolved magnetite vary among samples, the anisotropy effect and how to remove it need to be studied carefully for each rock.
In addition to remanence anisotropy, nonlinear TRM acquisition is a major issue in exsolved-magnetite paleomagnetism. In the case of the studied sample, the NRM/IRM ratio was lower than the TRM/IRM ratio of previously studied plagioclase crystals (Usui et al. 2015) and rocks containing exsolved magnetite (Selkin et al. 2007). This implies that nonlinear TRM acquisition may be insignificant for the obtained paleointensity range (~ 60 μT). Also, there is a possibility that the NRM of exsolved magnetite is a thermochemical remanent magnetization (TCRM) rather than a TRM, since the formation temperature of exsolved magnetite in plagioclase is not clear (Feinberg et al. 2005). In that case, the obtained paleointensity could give a lower limit of the field strength at that age, as TCRM acquisition is less efficient than TRM acquisition (Stacey and Banerjee 1974; Usui and Nakamura 2009).
Comparison of magnetic carriers of plagioclase and whole-rock samples
Wakabayashi et al. (2006) and Tsunakawa et al. (2009) predicted that most of the stable remanence of the Iritono granite was carried by magnetite inclusions in plagioclase. However, detailed rock-magnetic experiments on the plagioclase grains, compared with the previously reported whole-rock studies, revealed that the distributions of blocking temperatures and grain sizes differ between the plagioclase crystals and the whole rock. The pTRM distributions do not show any concentration in a particular temperature interval below 550 °C for the plagioclase samples, while about 10% of the whole-rock TRM is carried by a low blocking temperature (300–500 °C) component. Because the Iritono granite contains magnetite and pyrrhotite ("Rock-magnetic properties of zircon, Rock-magnetic properties of quartz, Rock-magnetic properties of plagioclase" sections; Wakabayashi et al. 2006; Tsunakawa et al. 2009), the low blocking temperature component above 350 °C found in the whole-rock TRM can be attributed to coarse-grained PSD and MD magnetite. Hysteresis loop measurements also indicate that the magnetite in plagioclase has a narrow range of grain size compared to the whole rock (Fig. 6d). In addition, the results of the whole-rock experiments exhibit a bimodal distribution according to the different paleointensity methods, which suggests the influence of non-ideal magnetic minerals and alteration of such minerals that could not be detected or suppressed completely. On the other hand, the plagioclase samples mostly contain nearly pure, fine-grained magnetite as the magnetic carrier.
Thus, although the magnetic carriers of the plagioclase crystals and the whole rock differ in their grain-size and blocking-temperature distributions, the estimated paleointensities are consistent with each other. Therefore, we conclude that a reliable paleointensity was obtained successfully. Because of the more 'ideal' magnetic carrier, paleointensity experiments on single plagioclase crystals with exsolved magnetite inclusions can potentially give more reliable and informative paleointensity results than conventional whole-rock experiments.
Effect of cooling rate on paleointensity
The extremely slow cooling of granitic rocks compared to laboratory timescales may require a correction to the paleointensity estimate due to the time dependence of TRM acquisition, which varies with the size and aspect ratio of the magnetic grains (Halgedahl et al. 1980; Selkin et al. 2000; Yu 2011). Results of the magnetic hysteresis measurements of plagioclase grains plot in the PSD region of the Day plot (Fig. 6d), which could be interpreted as a mixture of a range of grain sizes and aspect ratios. The stable remanence that is involved in paleointensity measurements is carried by SD to PSD magnetite. Based on SD theory (Halgedahl et al. 1980; Selkin et al. 2000) and the estimated cooling time of the Iritono granite body, Tsunakawa et al. (2009) argued that the ratio of TRM acquired in nature to TRM acquired in the laboratory would be 1.5 for the SD components. On the other hand, PSD grains have an insignificant cooling rate dependence of TRM acquisition (Yu 2011). The cooling-rate-corrected paleointensity value of 38.2 ± 7.9 μT assuming SD magnetite gives the lower limit of the paleointensity, since PSD magnetite should give a higher paleointensity value. Therefore, the corresponding cooling-rate-corrected VDM value of 8.9 ± 1.8 × 10²² Am², obtained using the paleointensity value of the plagioclase crystals and the inclination of the H component of the whole rock in Wakabayashi et al. (2006), imposes a constraint on the lower limit of the paleointensity at the age of 115 Ma.
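The conversion from a cooling-rate-corrected paleointensity to a VDM can be sketched with the standard dipole formulas as follows (a minimal illustrative reimplementation, not code from this study). The inclination used here is an assumed placeholder of about 30°, roughly the value implied by the whole-rock paleointensity and VDM quoted earlier; the actual H-component inclination is reported in Wakabayashi et al. (2006).

import numpy as np

MU0 = 4.0e-7 * np.pi     # vacuum permeability, T m/A
R_EARTH = 6.371e6        # Earth radius, m

def virtual_dipole_moment(b_tesla, inclination_deg):
    # VDM from paleointensity B and inclination I, using tan(I) = 2 tan(lambda)
    # and B = (mu0 m / 4 pi r^3) * sqrt(1 + 3 sin^2(lambda))
    lam = np.arctan(0.5 * np.tan(np.radians(inclination_deg)))   # magnetic latitude
    return 4.0 * np.pi * R_EARTH**3 * b_tesla / (MU0 * np.sqrt(1.0 + 3.0 * np.sin(lam)**2))

# cooling-rate-corrected paleointensity of 38.2 uT with an assumed inclination of ~30 deg
print(f"VDM = {virtual_dipole_moment(38.2e-6, 30.0):.2e} Am^2")   # on the order of 8.9e+22 Am^2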
Significance of Shaw-type paleointensity methods on single crystals
This is the first report applying the Tsunakawa–Shaw paleointensity method to single-grain samples. Considering that several results were rejected because of severe alteration during laboratory heating, a Shaw-type method, in which the number of laboratory heatings is minimized, seems to be more appropriate than a Thellier-type method. Furthermore, the ThD curves of the plagioclase crystals (Fig. 6f) show a very narrow distribution of blocking temperatures below the Curie temperature of magnetite (530–580 °C), while the AFD curve (top-right diagram in Fig. 8) shows a broad distribution of coercivity (50–150 mT). This emphasizes the advantage of estimating a paleointensity not in blocking-temperature space (by a Thellier-type method) but in coercivity space (by a Shaw-type method), especially for magnetically weak samples such as single crystals. Thus, the Tsunakawa–Shaw method could be more suitable than the Thellier–Thellier method in the case of plagioclase samples containing exsolved magnetite, although the two methods should be compared in future paleointensity studies using appropriate samples.
Paleointensity during middle CNS
Considering the possible TCRM origin of the NRM and the contribution of PSD grains to the cooling rate correction, the VDM value of 8.9 ± 1.8 × 10²² Am² gives the lower limit of the time-averaged field strength during the middle of the CNS. The average field strength of periods of frequent reversals has been estimated as the VDM value for the past 5 million years from the Society Islands volcanic rocks (3.6 × 10²² Am²; Yamamoto and Tsunakawa 2005) and as the virtual axial dipole moment (VADM) value for 0–160 Ma, excluding the CNS period, from submarine basaltic glass samples (4.8 × 10²² Am²; Tauxe 2006). The present result suggests that the time-averaged field strength during the middle CNS was several times larger than that of non-superchron periods, supporting the predictions of dynamo models and simulations (e.g. Larson and Olson 1991; Glatzmaier et al. 1999; Kutzner and Christensen 2002; Christensen and Aubert 2006; Olson and Christensen 2006; Courtillot and Olson 2007; Takahashi et al. 2008; Olson et al. 2010). By applying the present paleointensity method to various granitic rocks of different ages, we may be able to improve our understanding of the long-term behavior of the geomagnetic field in relation to the mantle convection process, without the complications of non-ideal magnetic minerals that often compromise such work.
We have evaluated the utility of using single silicate crystals separated from granitic rocks in exploring the long-term evolution of the intensity of the geomagnetic field. We studied the rock-magnetic properties of zircon, quartz and plagioclase separated from the Iritono granite, whose paleointensity was already well constrained by past studies using whole-rock samples. In our samples we found that plagioclase was the most suitable mineral phase to study, being more reliably and stably magnetic than other minerals such as zircon or quartz. We conducted paleointensity experiments on 17 plagioclase grains using the Tsunakawa–Shaw method. Nine samples were successful and gave a mean paleointensity of 57.4 ± 11.8 μT. This value is consistent with the previously reported whole-rock paleointensity, suggesting that an assembly of single plagioclase crystals separated from a granitic rock has the ability to yield accurate paleointensity data. Considering the unknown formation temperature of exsolved magnetite and the cooling rate effect on TRM acquisition, the time-averaged VDM is estimated to be higher than 8.9 ± 1.8 × 10²² Am² at the age of 115 Ma, suggesting a high dipole strength during the middle of the CNS.
Biggin AJ, Strik GH, Langereis CG (2009) The intensity of the geomagnetic field in the late-Archaean: new measurements and an analysis of the updated IAGA palaeointensity database. Earth Planets Space 61(1):9–22. https://doi.org/10.1186/BF03352881
Biggin AJ, Steinberger B, Aubert J, Suttie N, Holme R, Torsvik TH, van der Merr DG, Van Hinsbergen DJJ (2012) Possible links between long-term geomagnetic variations and whole-mantle convection processes. Nat Geosci 5(8):526–533. https://doi.org/10.1038/ngeo1521
Carter-Stiglitz B, Moskowitz B, Jackson M (2001) Unmixing magnetic assemblages and the magnetic behavior of bimodal mixtures. J Geophys Res Solid Earth 106(B11):26397–26411. https://doi.org/10.1029/2001JB000417
Carter-Stiglitz B, Jackson M, Moskowitz B (2002) Low-temperature remanence in stable single domain magnetite. Geophys Res Lett 29(7):33-1. https://doi.org/10.1029/2001GL014197
Channell JET, McCabe C (1994) Comparison of magnetic hysteresis parameters of unremagnetized and remagnetized limestones. J Geophys Res Solid Earth 99(B3):4613–4623. https://doi.org/10.1029/93JB02578
Christensen UR, Aubert J (2006) Scaling properties of convection driven dynamos in rotating spherical shells and application to planetary magnetic fields. Geophys J Int 166:97–114. https://doi.org/10.1111/j.1365-246X.2006.03009.x
Coe RS (1967) Determination of paleo-intensities of the Earth's magnetic field with emphasis on mechanisms which could cause non-ideal behaviour in Thellier's method. J Geomagn Geoelectr 19:157–179
Cottrell RD, Tarduno JA (1999) Geomagnetic paleointensity derived from single plagioclase crystals. Earth Planet Sci Lett 169(1):1–5. https://doi.org/10.1016/S0012-821X(99)00068-0
Cottrell RD, Tarduno JA, Roberts J (2008) The Kiaman Reversed Polarity Superchron at Kiama: toward a field strength estimate based on single silicate crystals. Phys Earth Planet Inter 169(1):49–58. https://doi.org/10.1016/j.pepi.2008.07.041
Courtillot V, Olson P (2007) Mantle plumes link magnetic superchrons to Phanerozoic mass depletion events. Earth Planet Sci Lett 260(3):495–504. https://doi.org/10.1016/j.epsl.2007.06.003
Day R, Fuller M, Schmidt VA (1977) Hysteresis properties of titanomagnetites: grain-size and compositional dependence. Phys Earth Planet Int 13(4):260–267. https://doi.org/10.1016/0031-9201(77)90108-X
Dunlop DJ, Özdemir Ö (1997) Rock magnetism—fundamentals and frontiers. Cambridge University Press, Cambridge
Feinberg JM, Scott GR, Renne PR, Wenk HR (2005) Exsolved magnetite inclusions in silicates: features determining their remanence behavior. Geology 33(6):513–516. https://doi.org/10.1130/G21290.1
Fu RR, Weiss BP, Lima EA, Kehayias P, Araujo JFDF, Glenn DR, Gelb J, Einsle JF, Bauer AM, Harrison RJ, Ali GAH, Walsworth RL (2017) Evaluating the paleomagnetic potential of single zircon crystals using the Bishop Tuff. Earth Planet Sci Lett 458:1–13. https://doi.org/10.1016/j.epsl.2016.09.038
Glatzmaier GA, Coe RS, Hongre L, Roberts PH (1999) The role of the Earth's mantle in controlling the frequency of geomagnetic reversals. Nature 401(6756):885–890. https://doi.org/10.1038/44776
Halgedahl SL, Day R, Fuller M (1980) The effect of cooling rate on the intensity of weak-field TRM in single-domain magnetite. J Geophys Res Solid Earth 85(B7):3690–3698. https://doi.org/10.1029/JB085iB07p03690
Heider F, Dunlop DJ, Soffel HC (1992) Low-temperature and alternating field demagnetization of saturation remanence and thermoremanence in magnetite grains (0.037 μm to 5 mm). J Geophys Res Solid Earth 97(B6):9371–9381. https://doi.org/10.1029/91jb03097
Jelinek V (1981) Characterization of the magnetic fabric of rocks. Tectonophysics 79(3–4):T63–T67. https://doi.org/10.1016/0040-1951(81)90110-4
Kosterov A (2003) Low-temperature magnetization and AC susceptibility of magnetite: effect of thermomagnetic history. Geophys J Int 154(1):58–71. https://doi.org/10.1046/j.1365-246X.2003.01938.x
Kutzner C, Christensen U (2002) From stable dipolar towards reversing numerical dynamos. Phys Earth Planet Int 121:29–45. https://doi.org/10.1016/S0031-9201(02)00016-X
Larson RL, Olson P (1991) Mantle plumes control magnetic reversal frequency. Earth Planet Sci Lett 107(3–4):437–447. https://doi.org/10.1016/0012-821X(91)90091-U
Mochizuki N, Tsunakawa H, Oishi Y, Wakai S, Wakabayashi KI, Yamamoto Y (2004) Palaeointensity study of the Oshima 1986 lava in Japan: implications for the reliability of the Thellier and LTD-DHT Shaw methods. Phys Earth Planet Inter 146(3):395–416. https://doi.org/10.1016/j.pepi.2004.02.007
Moskowitz BM, Frankel RB, Bazylinski DA (1993) Rock magnetic criteria for the detection of biogenic magnetite. Earth Planet Sci Lett 120(3–4):283–300. https://doi.org/10.1016/0012-821X(93)90245-5
Moskowitz BM, Jackson M, Kissel C (1998) Low-temperature magnetic behavior of titanomagnetites. Earth Planet Sci Lett 157:141–149. https://doi.org/10.1016/S0012-821X(98)00033-8
Muxworthy AR, Evans ME (2012) Micromagnetics and magnetomineralogy of ultrafine magnetite inclusions in the Modipe Gabbro. Geochem Geophys Geosyst 14(4):921–928. https://doi.org/10.1029/2012GC004445
Olson P, Christensen UR (2006) Dipole moment scaling for convection-driven planetary dynamos. Earth Planet Sci Lett 250:561–571. https://doi.org/10.1016/j.epsl.2006.08.008
Olson PL, Coe RS, Driscoll PE, Glatzmaier GA, Roberts PH (2010) Geodynamo reversal frequency and heterogeneous core–mantle boundary heat flow. Phys Earth Planet Int 180(1–2):66–79. https://doi.org/10.1016/j.pepi.2010.02.010
Özdemir Ö, Dunlop DJ, Moskowitz BM (1993) The effect of oxidation on the Verwey transition in magnetite. Geophys Res Lett 20(16):1671–1674. https://doi.org/10.1029/93GL01483
Parry LG (1982) Magnetization of immobilized particle dispersions with two distinct particle sizes. Phys Earth Planet Int 28(3):230–241. https://doi.org/10.1016/0031-9201(82)90004-8
Paterson GA (2013) The effects of anisotropic and non-linear thermoremanent magnetizations on Thellier-type paleointensity data. Geophys J Int 193(2):694–710. https://doi.org/10.1093/gji/ggt033
Sato M, Yamamoto S, Yamamoto Y, Okada Y, Ohno M, Tsunakawa H, Maruyama S (2015) Rock-magnetic properties of single zircon crystals sampled from the Tanzawa tonalitic pluton, central Japan. Earth Planets Space 67(1):150. https://doi.org/10.1186/s40623-015-0317-9
Selkin PA, Gee JS, Tauxe L, Meurer WP, Newell AJ (2000) The effect of remanence anisotropy on paleointensity estimates: a case study from the Archean Stillwater Complex. Earth Planet Sci Lett 183(3):403–416. https://doi.org/10.1016/S0012-821X(00)00292-2
Selkin PA, Gee JS, Tauxe L (2007) Nonlinear thermoremanence acquisition and implications for paleointensity data. Earth Planet Sci Lett 256(1):81–89. https://doi.org/10.1016/j.epsl.2007.01.017
Selkin PA, Gee JS, Meurer WP, Hemming SR (2008) Paleointensity record from the 2.7 Ga Stillwater Complex, Montana. Geochem Geophys Geosyst. https://doi.org/10.1029/2008gc001950
Shcherbakova VV, Bakhmutov VG, Shcherbakov VP, Zhidkov GV, Shpyra VV (2012) Palaeointensity and palaeomagnetic study of Cretaceous and Palaeocene rocks from Western Antarctica. Geophys J Int 189(1):204–228. https://doi.org/10.1111/j.1365-246X.2012.05357.x
Stacey FD, Banerjee SK (1974) The physical principles of rock magnetism. Elsevier, New York
Takahashi F, Tsunakawa H, Matsushima M, Mochizuki N, Honkura Y (2008) Effects of thermally heterogeneous structure in the lowermost mantle on the geomagnetic field strength. Earth Planet Sci Lett 272(3):738–746. https://doi.org/10.1016/j.epsl.2008.06.017
Tanaka H, Kono M (2002) Paleointensities from a Cretaceous basalt platform in Inner Mongolia, northeastern China. Phys Earth Planet Int 133(1):147–157. https://doi.org/10.1016/S0031-9201(02)00091-2
Tarduno JA, Cottrell RD, Smirnov AV (2001) High geomagnetic intensity during the mid-Cretaceous from Thellier analyses of single plagioclase crystals. Science 291(5509):1779–1783. https://doi.org/10.1126/science.1057519
Tarduno JA, Cottrell RD, Smirnov AV (2002) The Cretaceous superchron geodynamo: observations near the tangent cylinder. Proc Natl Acad Sci 99:14020–14025. https://doi.org/10.1073/pnas.222373499
Tarduno JA, Cottrell RD, Smirnov AV (2006) The paleomagnetism of single silicate crystals: recording geomagnetic field strength during mixed polarity intervals, superchrons, and inner core growth. Rev Geophys. https://doi.org/10.1029/2005rg000189
Tarduno JA, Cottrell RD, Watkeys MK, Bauch D (2007) Geomagnetic field strength 3.2 billion years ago recorded by single silicate crystals. Nature 446(7136):657–660. https://doi.org/10.1038/nature05667
Tarduno JA, Cottrell RD, Watkeys MK, Hofmann A, Doubrovine PV, Mamajek EE, Liu D, Sibeck DG, Neukirch LP, Usui Y (2010) Geodynamo, solar wind, and magnetopause 3.4 to 3.45 billion years ago. Science 327(5970):1238–1240. https://doi.org/10.1126/science.1183445
Tarduno JA, Blackman EG, Mamajek EE (2014) Detecting the oldest geodynamo and attendant shielding from the solar wind: Implications for habitability. Phys Earth Planet Int 233:68–87. https://doi.org/10.1016/j.pepi.2014.05.007
Tarduno JA, Cottrell RD, Davis WJ, Nimmo F, Bono RK (2015) A Hadean to Paleoarchean geodynamo recorded by single zircon crystals. Science 349(6247):521–524. https://doi.org/10.1126/science.aaa9114
Tauxe L (2006) Long-term trends in paleointensity: the contribution of DSDP/ODP submarine basaltic glass collections. Phys Earth Planet Inter 156(3):223–241. https://doi.org/10.1016/j.pepi.2005.03.022
Thellier E, Thellier O (1959) Sur I'intensite du champ magnetique terrestre dans le passe historique et geologique. Ann Geophys 15:285–376
Tsunakawa H, Shaw J (1994) The Shaw method of palaeointensity determinations and its application to recent volcanic rocks. Geophys J Int 118(3):781–787. https://doi.org/10.1111/j.1365-246X.1994.tb03999.x
Tsunakawa H, Wakabayashi KI, Mochizuki N, Yamamoto Y, Ishizaka K, Hirata T, Takahashi F, Seita K (2009) Paleointensity study of the middle Cretaceous Iritono granite in northeast Japan: implication for high field intensity of the Cretaceous normal superchron. Phys Earth Planet Int 176(3):235–242. https://doi.org/10.1016/j.pepi.2009.07.001
Usui Y (2013) Paleointensity estimates from oceanic gabbros: effects of hydrothermal alteration and cooling rate. Earth Planets Space 65(9):985–996. https://doi.org/10.5047/eps.2013.03.015
Usui Y, Nakamura N (2009) Nonlinear thermoremanence corrections for Thellier paleointensity experiments on single plagioclase crystals with exsolved magnetites: a case study for the Cretaceous Normal Superchron. Earth Planets Space 61(12):1327–1337. https://doi.org/10.1186/BF03352985
Usui Y, Shibuya T, Sawaki Y, Komiya T (2015) Rock magnetism of tiny exsolved magnetite in plagioclase from a Paleoarchean granitoid in the Pilbara craton. Geochem Geophys Geosyst 16(1):112–125. https://doi.org/10.1002/2014GC005508
Wakabayashi KI, Tsunakawa H, Mochizuki N, Yamamoto Y, Takigami Y (2006) Paleomagnetism of the middle Cretaceous Iritono granite in the Abukuma region, northeast Japan. Tectonophysics 421(1):161–171. https://doi.org/10.1016/j.tecto.2006.04.013
Weiss BP, Maloof AC, Tailby N, Ramezani J, Fu RR, Hanus V, Trail D, Watson EB, Harrison TM, Bowring SA, Kirschvink JL, Swanson-Hysell NL, Coe RS (2015) Pervasive remagnetization of detrital zircon host rocks in the Jack Hills, Western Australia and implications for records of the early geodynamo. Earth Planet Sci Lett 430:115–128. https://doi.org/10.1016/j.epsl.2015.07.067
Wenk HR, Chen K, Smith R (2011) Morphology and microstructure of magnetite and ilmenite inclusions in plagioclase from Adirondack anorthositic gneiss. Am Min 96(8–9):1316–1324. https://doi.org/10.2138/am.2011.3760
Tarduno JA, Cottrell RD (2005) Dipole strength and variation of the time-averaged reversing and nonreversing geodynamo based on Thellier analyses of single plagioclase crystals. J Geophys Res Solid Earth. https://doi.org/10.1029/2005jb003970
Yamamoto Y, Tsunakawa H (2005) Geomagnetic field intensity during the last 5 Myr: lTD-DHT Shaw palaeointensities from volcanic rocks of the Society Islands, French Polynesia. Geophys J Int 162(1):79–114. https://doi.org/10.1111/j.1365-246X.2005.02651.x
Yamamoto Y, Tsunakawa H, Shibuya H (2003) Palaeointensity study of the Hawaiian 1960 lava: implications for possible causes of erroneously high intensities. Geophys J Int 153(1):263–276. https://doi.org/10.1046/j.1365-246X.2003.01909.x
Yamamoto Y, Torii M, Natsuhara N (2015) Archeointensity study on baked clay samples taken from the reconstructed ancient kiln: implication for validity of the Tsunakawa–Shaw paleointensity method. Earth Planets Space 67(1):63. https://doi.org/10.1186/s40623-015-0229-8
Yu Y (2010) Paleointensity determination using anhysteretic remanence and saturation isothermal remanence. Geochem Geophys Geosyst. https://doi.org/10.1029/2009gc002804
Yu Y (2011) Importance of cooling rate dependence of thermoremanence in paleointensity determination. J Geophys Res Solid Earth. https://doi.org/10.1029/2011jb008388
Yu Y, Tauxe L, Genevey A (2004) Toward an optimal geomagnetic field intensity determination technique. Geochem Geophys Geosyst. https://doi.org/10.1029/2003gc000630
Zhang N, Zhong S (2011) Heat fluxes at the Earth's surface and core–mantle boundary since Pangea formation and their implications for the geomagnetic superchrons. Earth Planet Sci Lett 306(3–4):205–216. https://doi.org/10.1016/j.epsl.2011.04.001
YY and HT collected the samples. CK conducted the magnetic measurements. All contributed to discussion and writing the manuscript. All authors read and approved the final manuscript.
We thank Shinji Yamamoto for petrological discussions. The microscopic photograph of the plagioclase sample (Fig. 7) was taken by Yujiro Tamura. We thank lead guest editor John Tarduno and two anonymous reviewers for their constructive comments. Rock- and paleomagnetic measurements were performed under the cooperative research program of the Center for Advanced Marine Core Research (CMCR), Kochi University (Accept Nos. 16A009, 16B009, 17A028 and 17B028). This work was supported by the Japan Society for the Promotion of Science (JSPS) Research Fellowship for Young Scientists (DC1) No. 15J11812.
The data and materials used in this study are available upon request from the corresponding author, Chie Kato ([email protected]).
Department of Environmental Changes, Faculty of Social and Cultural Studies, Kyushu University, Fukuoka, Japan
Chie Kato
Department of Earth and Planetary Sciences, Tokyo Institute of Technology, Tokyo, Japan
Chie Kato & Hideo Tsunakawa
Department of Earth and Planetary Science, University of Tokyo, Tokyo, Japan
Masahiko Sato
Center for Advanced Marine Core Research, Kochi University, Kochi, Japan
Yuhji Yamamoto
Division of Geological and Planetary Sciences, California Institute of Technology, Pasadena, USA
Joseph L. Kirschvink
Earth-Life Science Institute, Tokyo Institute of Technology, Tokyo, Japan
Hideo Tsunakawa
Correspondence to Chie Kato.
Kato, C., Sato, M., Yamamoto, Y. et al. Paleomagnetic studies on single crystals separated from the middle Cretaceous Iritono granite. Earth Planets Space 70, 176 (2018). https://doi.org/10.1186/s40623-018-0945-y
Received: 03 July 2018
Keywords: Paleointensity; Single crystals; Geomagnetism
Special issue: Recent Advances in Geo-, Paleo- and Rock-Magnetism (Frontier Letters)
A "1-expression" is a formula in which you add ($+$) or multiply ($\times$) the number 1 any number of times to create a natural number. Parentheses are allowed.
$1 + 1 + ((1 + 1 + 1 + 1) \times (1 + 1 + 1 + 1 + 1)) = 22$.
This 1-expression uses the number 1 eleven times. But it is not optimal:
$1 + ((1 + 1 + 1) \times (1 + ((1 + 1) \times (1 + 1 + 1)))) = 22$.
This is a 1-expression with "1" only used ten times. Therefore, 10 is the minimum "1-value" of 22—that is, there is no 1-expression with which you can make 22 where you use a 1 less than 10 times.
Your task is to determine the minimum 1-value of 73.
$((1 + 1) \times (1 + 1) \times (1 + 1) \times (1 + 1 + 1) \times (1 + 1 + 1)) + 1$, for a total of 13 ones.
This was accomplished by multiplying together the prime factors of 72, and adding one.
Hugh and Keelhaul have already given correct solutions to the specific problem posed. If anyone wants to experiment with this, here's some fairly dumb Python code to find optimal solutions by brute force.
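The original snippet is not reproduced in this copy of the post, so the following is a minimal stand-in of my own rather than the author's exact code: a simple dynamic-programming search over every way of splitting a number into a sum or a product.

```python
def one_values(limit):
    """Return a list c where c[n] is the minimum number of 1s needed
    to write n using only +, * and parentheses (the "1-value" of n)."""
    c = [0] * (limit + 1)
    for n in range(1, limit + 1):
        best = n  # writing n as 1 + 1 + ... + 1 always works
        for a in range(1, n // 2 + 1):        # every sum split n = a + (n - a)
            best = min(best, c[a] + c[n - a])
        for a in range(2, int(n**0.5) + 1):   # every product split n = a * (n // a)
            if n % a == 0:
                best = min(best, c[a] + c[n // a])
        c[n] = best
    return c

c = one_values(100)
print(c[22], c[46], c[73])  # -> 10 12 13
```

Running it reproduces the values quoted in this thread: a 1-value of 10 for 22, 12 for 46, and 13 for 73.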
A few empirical observations: when n is composite the best solution is usually the product of solutions for some factors of n. For n=10, n=22, n=25, n=28, n=33 there are equally good solutions using factorizations of n-1 instead. I think n=46 is the first time it's strictly better not to factorize n: 2*(2+3*(1+2*3)) costs 13, while 1+3*3*5 costs only 12. I bet there are n for which the best solution is of form a*b+c*d, but after a small amount of experimenting I haven't found one yet.
The operation $\dagger$ is defined as $\frac{m}{n}\dagger\frac{p}{q} = (m)(p)(\frac{q}{n}).$ What is the simplified value of $\frac{7}{12}\dagger\frac{8}{3}$?
We have $\frac{7}{12}\dagger\frac{8}{3}=(7)(8)\left(\frac{3}{12}\right)=(7)(2)=\boxed{14}$.
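As a quick sanity check of the arithmetic (not part of the original solution), the operation can be reproduced with Python's fractions module; the function name dagger below is just a stand-in for $\dagger$:

```python
from fractions import Fraction

def dagger(a, b):
    # (m/n) dagger (p/q) = m * p * (q/n); assumes the fractions are
    # given in lowest terms, as they are here.
    return a.numerator * b.numerator * Fraction(b.denominator, a.denominator)

print(dagger(Fraction(7, 12), Fraction(8, 3)))  # -> 14
```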
Highly Frustrated Magnetism (wHFM21)
All times in the program are Central European Time (GMT/UTC +1).
WHFM21 preliminary program
scientific coordinator welcome
Chair: Gang Chen
08:00 - 09:20 Elsa Lhotel (CNRS - Institut Néel)
Tutorial talk: Thermodynamics measurements and slow dynamics in frustrated magnets
Chair: Laura Messio
09:30 - 09:55 Quentin Barthélemy (Université Paris-Saclay)
Gapless ground state in herbertsmithite and field-induced instability from \(^{17}O\) NMR
The kagome Heisenberg antiferromagnet decorated with quantum spins is a fascinating model in which to look for exotic quantum states, including the elusive spin liquids. Despite the apparent simplicity of this model, there is still no consensus on the exact nature of the ground state and the excitation spectrum. Nevertheless, the current availability of quantum kagome materials, each with its own deviation from the pure Heisenberg model, allows one to confront a fair number of experimental results with theoretical predictions and to study the effect of perturbations inherent to real materials. I will focus on herbertsmithite ZnCu$_3$(OH)$_6$Cl$_2$. This is the emblematic compound in this family of materials because it is the closest realization of the pure Heisenberg model so far ($J\sim 180$ K), with a nonmagnetic dynamical ground state. I will present recent $^{17}$O NMR measurements on a high-quality single crystal of the local static spin susceptibility and spin dynamics ($T_1$ relaxation) at several field values [1]. At variance with a previous study [2], we used contrast methods to single out the contributions of oxygen sites close to and far from interlayer copper defects. This allowed us to show unambiguously that the excitation spectrum is gapless, restoring some convergence with the Dirac cone model now advocated in most numerical works [3, 4]. While herbertsmithite displays gapless spin liquid-like behavior in zero field, we confirmed the presence of a field-induced instability toward a spin-solid phase with largely suppressed moments $\lesssim 0.1\,\mu_\mathrm{B}$ and gapped excitations at sub-kelvin temperatures, which had already been observed on a polycrystalline sample [5]. References: [1] Khuntia et al., Nat. Phys. 16, 469-474 (2020); [2] Fu et al., Science 350, 655-658 (2015); [3] Ran et al., Phys. Rev. Lett. 98, 117205 (2007); [4] He et al., Phys. Rev. X 7, 031020 (2017); [5] Jeong et al., Phys. Rev. Lett. 107, 237201 (2011).
09:55 - 10:20 Jia-Wei Mei (Southern University of Science and Technology)
Dynamic fingerprint of fractionalized excitations in single-crystalline \(Cu_3Zn(OH)_6FBr \)
Beyond the absence of long-range magnetic orders, the most prominent feature of the elusive quantum spin liquid (QSL) state is the existence of fractionalized spin excitations. To study the spin dynamics, we perform Raman scattering on single crystals of the kagome QSL candidate Cu3Zn(OH)6FBr and the control compound EuCu3(OH)6Cl3 with an antiferromagnetic order. In Cu3Zn(OH)6FBr, the ideal kagome structure is confirmed down to low temperatures without any lattice distortion, and we observe a remarkable $E_{2g}$ Raman continuum in good agreement with the theoretical prediction for the kagome QSL. We identify a unique one spinon-antispinon pair continuum in Cu3Zn(OH)6FBr, in contrast to a sharp magnon peak in EuCu3(OH)6Cl3. The magnon peak emerges from the one-pair continuum and can be regarded as the spinon-antispinon bound state. The comparative studies demonstrate that the magnetic Raman continuum, in particular, the one-pair component is strong evidence for fractional spinon excitations in Cu3Zn(OH)6FBr.
10:20 - 10:45 Bernard Field (Monash University)
Kondo effect in a 2D kagome metal-organic framework on a metal
Kagome lattices can host frustrated magnetism and topological insulator phases. Two-dimensional (2D) metal-organic frameworks (MOFs) allow for the bottom-up synthesis of such lattices and fine-tuning their properties. We report the synthesis of a 2D MOF with copper atoms and dicyanoanthracene molecules on a silver Ag(111) surface. We observe the Kondo effect using scanning tunnelling spectroscopy, providing evidence of local magnetic moments in this system. From density functional theory and a mean-field Hubbard model, we find that electron-electron interactions drive magnetic ordering in this system rather than any intrinsic magnetic moments, providing evidence of strong electronic correlations. We also model the effects of other substrates on the magnetic and electronic properties of the MOF.
10:45 - 11:10 Pranay Patil (Laboratoire de Physique Theorique Toulouse)
Quantum half-orphans in kagome antiferromagnets
We numerically study the effects of non-magnetic impurities (vacancies) in the spin-S Heisenberg antiferromagnet on the kagomé lattice. For a range of low but nonzero temperatures, and spin values that extend down to S=2, we find that the magnetization response to an external magnetic field is consistent with the response of emergent "half-orphan" degrees of freedom that are expected to dominate the response of the corresponding classical magnet in a similar temperature range whenever there are two vacancies on the same triangle. Specifically, for all spin values we have considered (from S=1/2 to S=4), there is a large enhancement of the local susceptibility of the lone spin on such a triangle with two vacancies; in the presence of a uniform magnetic field h, this lone-spin behaves effectively as an almost free spin S in an effective field h/2. Quite remarkably, in the zero temperature limit, the ground state in the presence of a half-orphan has a non-zero total spin value SGS that shows a trend similar to S/2 when S≥2. These qualitative aspects of the response differ strikingly from the more conventional response of diluted samples without such half-orphan degrees of freedom. We discuss how these findings could be checked experimentally. arXiv:2009.01711
11:10 - 11:35 Ryutaro Okuma (Oxford University)
Magnon crystallization in kagome antiferromagnets
Geometrical frustration and a high magnetic field are keys to exploring unconventional quantum states. In the spin-1/2 kagome antiferromagnet, several magnetization plateaus are predicted to occur. Interestingly, these plateaus can be described as crystals of "emergent magnons", i.e. magnons localized inside a hexagon. To observe crystallization of the emergent magnons, we performed Faraday rotation measurements of two kagome antiferromagnets up to 190 T. First, in a newly synthesized Cd-kapellasite, we found an unprecedented series of magnetization plateaus, which can be interpreted as either close or loose packing of the emergent magnons. Second, in herbertsmithite, we observed a robust 1/3 plateau despite the presence of significant chemical disorder. These results hint at stabilization of the emergent magnons in real kagome antiferromagnets.
11:35 - 12:00 Flavien Museur (Institut Néel & Université Grenoble Alpes)
Quantum aspects of fragmentation in kagome ice
The apparent fragmentation of magnetic moments in spin ice systems into coexisting longitudinal, transverse and harmonic parts is a direct consequence of emergent electromagnetism in these systems. At the microscopic level, moments of fixed length act as elements of this emergent lattice field, from which topological defects (magnetic monopoles) can be excited. By construction, the leftover is the sum of the divergence-free (transverse) and harmonic parts. In quantum spin ice systems the emergence is even more striking, as small quantum fluctuations give to the transverse part the dynamics of a compact U(1) gauge theory reminiscent of quantum electromagnetism. The low-energy emergent photon excitation is a signature of a U(1) spin liquid. In this talk we concentrate on a two-dimensional analogue, the partially ordered topological phase of kagome spin ice, stabilised either through long-range interactions or by placing an external field along the [111] direction in pyrochlore spin ice. In the classical case the transverse part remains disordered, corresponding to a Coulomb phase with ferromagnetic, dipolar correlations, and maps directly to a problem of close-packed, hard-core dimers. Using degenerate perturbation theory, the transverse fragment of the spins of an XXZ Hamiltonian projects onto a quantum dimer model on the dual honeycomb lattice. We discuss the stability of this phase in the presence of small quantum fluctuations and the consequences for magnetic correlations. We show that the onset of the predicted long-range dimer order is explicitly reflected in the transverse part of the emergent field, accompanied by enhanced quantum fluctuations. As a consequence, the dimer ordering can be directly observed by neutron scattering, and we compare the structure factor of this order to existing QMC simulations of quantum kagome ice.
poster flash talks I
poster session I
Chair: Mike Zhitomirsky
14:00 - 15:20 Kate Ross (Colorado State University)
Tutorial talk: Neutron Scattering and Frustrated Magnetism
Chair: Paula Mellado
15:30 - 15:55 Natalia Chepiga (Kavli Institute of Nanoscience)
Floating, critical and dimerized phases in a frustrated spin-3/2 chain
I will discuss spontaneous dimerization and emergent criticality in a spin-3/2 chain with antiferromagnetic nearest-neighbor J1, next-nearest-neighbor J2 and three-site J3 interactions. In the absence of the three-site interaction J3, I will provide evidence that the model undergoes a remarkable sequence of three phase transitions as a function of J2/J1, going successively through a critical commensurate phase, a partially dimerized gapped phase, and a critical floating phase with quasi-long-range incommensurate order, to end up in a fully dimerized phase at very large J2/J1. In the field theory language, this implies that the coupling constant of the marginal operator responsible for dimerization changes sign three times. For large enough J3, the fully dimerized phase is stabilized for all J2, and the phase transitions between the critical and floating phases and the dimerized phase are both Wess-Zumino-Witten (WZW) SU(2)_3 along part of the boundary and turn first order at some point due to the presence of a marginal operator in the WZW SU(2)_3 model. By contrast, the transition between the two dimerized phases is always first order, and the phase transitions between the partially dimerized phase and the critical phases are Kosterlitz-Thouless. Finally, I will discuss the intriguing spin-1/2 edge states that emerge in the partially dimerized phase for even chains. Unlike their counterparts in the spin-1 chain, they are not confined and disappear upon increasing J2 in favour of a reorganization of the dimerization pattern.
15:55 - 16:20 Michele Fava (Oxford University)
Glide symmetry breaking and Ising criticality in the quasi-1D magnet \(CoNb_2O_6\)
The quasi-1D magnetic insulator CoNb2O6 hosts a paradigmatic example of a quantum phase transition in the Ising universality class. In addition, inelastic neutron scattering investigations of the spin dynamics at low temperature uncovered a rich phenomenology both in the ferromagnetic and in the paramagnetic phases. However, the understanding of experimental data in these regimes is limited by incomplete knowledge of the spin-exchange Hamiltonian of the material. Analysing the spatial symmetry of the material, I will present a microscopic Hamiltonian which reproduces the entirety of the experimental phenomenology observed to date. The symmetry analysis recalls that a chain in the material is buckled, with two sites in each unit cell related by a glide symmetry. In particular, the two-site unit cell allows for the presence of staggered couplings, which are fundamental in reproducing the experimental data. Furthermore, the glide symmetry also plays a major role in the phase transition itself. In fact, the material lacks an on-site Ising symmetry, and the Ising phase transition can be described as a spontaneous breaking of the glide symmetry. In relation to this I will discuss how the glide-symmetry breaking can be observed in the inelastic neutron scattering data. Finally, I will explain how kinematical considerations related to the glide symmetry can be used to interpret the phenomenon of quasi-particle breakdown in the paramagnetic phase. [1] Fava, Coldea, Parameswaran, Proc. Nat. Acad. Sci. USA 117, 25219 (2020)
16:20 - 16:45 Shiyu Deng (University of Cambridge)
Evolution of Structural, Magnetic and Electronic Properties with Pressure in TMPX3 van-der-Waals Compounds
The van-der-Waals compounds TMPX3 (e.g. FePS3) have proven to be ideal examples of two-dimensional antiferromagnets on a honeycomb lattice. At ambient pressure, FePS3 features zigzag ferromagnetic chains that are coupled antiferromagnetically to their neighbours. For a long time, the frustrated magnetic configuration was understood in terms of a spin Hamiltonian with equal exchange parameters between equivalent neighbours. A recent study [1], however, introduced a biquadratic exchange term to resolve the discrepancies between experimental neutron diffraction data and theoretical modelling. In addition, the TMPX3 compounds are Mott or charge-transfer insulators. Our recent studies [2-4] have reported pressure-induced insulator-to-metal transitions, together with novel magnetic and crystalline phases. There are also reports of superconductivity in a related member of this family of compounds [5]. To further understand the magnetic frustration at ambient pressure, we performed first-principles calculations and DFT+U studies to elucidate the magnetoelastic coupling between the lattice and the magnetic frustration. A random structure search is also performed to understand the evolution of the structure and physical properties with pressure. Our computational explorations are expected to guide the discovery of structural and magnetic phases in the vdW TMPX3 compounds. [1] A. R. Wildes, et al., J. Appl. Phys. 127, 223903 (2020). [2] C. R. S. Haines, et al., Phys. Rev. Lett. 121, 266801 (2018). [3] M. J. Coak, et al., J. Phys. Condens. Matter 32, 124003 (2020). [4] M. J. Coak, et al., Phys. Rev. X, Oct (2020) Accepted [5] Y. Wang, et al., Nat. Commun. 9, 1914 (2018).
16:45 - 17:10 Ruben Verresen (Harvard University)
Prediction of Toric Code Topological Order from Rydberg Blockade
The physical realization of $Z_2$ topological order as encountered in the paradigmatic toric code has proven to be an elusive goal. In this talk, I will show that this phase of matter can be created in a two-dimensional array of strongly interacting Rydberg atoms on a ruby lattice. We will first consider a Rydberg blockade model: this effectively realizes a monomer-dimer model on the kagome lattice with a single-site kinetic term, for which we observe a $Z_2$ spin liquid using the numerical density matrix renormalization group method. Next, this phase is shown to persist upon including realistic, algebraically-decaying van der Waals interactions. Moreover, one can directly access the topological loop operators of this model, which can be measured experimentally using a dynamic protocol, providing a "smoking gun" experimental signature of the topological phase. Time permitting, I will show how to trap an emergent anyon and realize different topological boundary conditions, and more broadly discuss the implications for exploring fault-tolerant quantum memories. This talk is based on work with Mikhail Lukin and Ashvin Vishwanath (arxiv:2011.12310).
17:10 - 17:35 Shiyu Zhou (Boston University)
Experimental Realization of Spin Liquids in a Programmable Quantum Device
We build and probe a $\mathbb{Z}_2$ spin liquid in a programmable quantum device, the D-Wave DW-2000Q. To realize this state of matter, we design a Hamiltonian with combinatorial gauge symmetry using only pairwise-qubit interactions and a transverse field, i.e., interactions which are accessible in this quantum device. The combinatorial gauge symmetry remains exact along the full quantum annealing path, landing the system onto the classical 8-vertex model at the endpoint of the path. The output configurations from the device allows us to directly observe the loop structure of the model. Moreover, we deform the Hamiltonian so as to vary the weights of the 8 vertices and show that we can selectively attain the 6-vertex (ice model), or drive the system into a ferromagnetic state. We present studies of the phase diagram of the system as function of the 8-vertex deformations and effective temperature, which we control by varying the relative strengths of the programmable couplings, and we show that the experimental results are consistent with theoretical analysis. Finally, we identify additional capabilities that, if added to these quantum devices, would allow us to realize $\mathbb{Z}_2$ quantum spin liquids on which to build topological qubits.
17:35 - 18:00 Michael Flynn (University of California, Davis)
On two phases inside the Bose condensation dome of \(Yb_2Si_2O_7\)
In the context of quantum magnetism, there are a broad range of materials which are accurately described by models of weakly coupled spin dimers. Such models are well-understood theoretically, and exhibit magnetic field-induced phase transitions which are naturally described in terms of equivalent Bose-Einstein condensation (BEC) problems. Recently, experimental data on a system of weakly coupled dimers, Yb$_2$Si$_2$O$_7$, revealed an asymmetric BEC dome. To study this behavior, we examine modifications to the Heisenberg model on a breathing honeycomb lattice, showing that this physics can be explained by competing anisotropic perturbations. We employ a gamut of analytical and numerical techniques to show that the anisotropy yields a field driven phase transition from a state with broken Ising symmetry to a phase which breaks no symmetries and crosses over to the polarized limit. From the BEC perspective, this yields insights into the behavior of bosonized spin models in which an external magnetic field couples to a non-conserved quantity.
Chair: Oksana Zaharko
18:00 - 19:20 SungBin Lee (KAIST)
Tutorial talk: Frustrated magnetism and microscopic models
PYROCHLORE
Chair: Natalia Perkins
19:30 - 19:50 Danielle Yahne (Colorado State University)
Understanding Reentrance in Frustrated Magnets: the Case of the \(Er_2Sn_2O_7\) Pyrochlore
In frustrated magnets, most of the focus has been devoted to the physics near zero temperature. However, even in the presence of very strong frustration, the majority of frustrated magnetic materials ultimately develop long-range order or display spin-glass freezing at a nonzero critical temperature $T_c$. It therefore seems natural to ask what behavior near $T_c$ may inform on the zero-temperature ground state physics. We consider such a situation in Er$_2$Sn$_2$O$_7$, a pyrochlore antiferromagnet known to be strongly frustrated and residing at the boundary of two classical phases. We specifically study a recurrent aspect of frustrated magnetic systems observed at nonzero temperature: reentrance. Reentrance refers to when a system, after developing an ordered phase, returns to (reenters) its original, less ordered, phase as some external parameter is continuously tuned. Reentrance has been found in a variety of systems, from spin glasses to black hole thermodynamics, but its presence is often unexpected, and the microscopic mechanisms behind it have not been studied in detail. We use a combination of experimental and theoretical techniques to study a single crystal of the frustrated pyrochlore magnet Er$_2$Sn$_2$O$_7$, taking advantage of the recent advance in synthesis which enables the production of rare-earth stannate single crystals. We show Er$_2$Sn$_2$O$_7$ has multiple instances of reentrance in its field vs temperature phase diagram for fields along the three high symmetry directions ([100], [110], and [111]). Through classical Monte Carlo simulations, mean field theory, and classical linear spin-wave expansion, we propose that the origins of reentrance are linked to $T=0$ multi-phase competition induced by either the applied field or the inherent proximity to a competing zero field phase. The ground state phase competition enhances thermal fluctuations that entropically stabilize the ordered phase, leading to increased transition temperatures for certain field values. Our work represents a detailed look into the mechanisms responsible for reentrance in frustrated magnets, which may serve as a guide for other reentrant phenomena in physics.
19:50 - 20:10 Allen Scheie (Oak Ridge National Laboratory)
Zero-Field Spin Waves and Phase Coexistence in \(Yb_2Ti_2O_7\)
We characterize the ground state magnetism of pyrochlore Yb$_2$Ti$_2$O$_7$ in a [111] magnetic field using inelastic neutron scattering from the CNCS spectrometer at SNS. Our measurements reveal broadened but coherent zero-field excitations and sharp high-field spin waves modes. Comparison to linear spin wave theory allows us to distinguish between different proposed Hamiltonians for Yb$_2$Ti$_2$O$_7$ and reveals features of antiferromagnetism in the low-field spectrum. High-field [111] spin wave fits show that Yb$_2$Ti$_2$O$_7$ is extremely close to an antiferromagnetic phase boundary. This proximity allows for stable regions of antiferromagnetism, which may be responsible for the unusual magnetic behavior of this compound.
Chair: Philippe Mendels
08:00 - 09:20 Sarah Dunsiger (Simon Fraser University)
Tutorial talk: Spin Liquid, Spin Ice, Spin Glass - an Introduction to Muon Spin Relaxation
Chair: Sylvain Petit
09:30 - 09:50 Victor Porée (Paul Scherrer Institut)
Disorder-induced correlated phases in non-Kramers rare-earth pyrochlores
09:50 - 10:10 Kota Mitsumoto (Toyota Physical and Chemical Research Institute)
Spin-Orbital Glass Transition in a Model of a Frustrated Pyrochlore Magnet without Quenched Disorder
10:10 - 10:30 Attila Szabó (University of Oxford)
Monopole density and antiferromagnetic domain control in spin-ice iridates
Magnetic pyrochlore oxides have attracted significant interest due to their geometrically frustrated lattice, which acts to suppress long range magnetic order and leads to a variety of unusual magnetic ground states and exotic excitations. Some of the most studied pyrochlores are the spin ices Ho2Ti2O7 and Dy2Ti2O7, where the ferromagnetically frustrated rare earth ions result in magnetic monopole quasiparticles. Despite extensive work on spin ices, a reliable experimental indicator of the density of magnetic monopoles in spin-ice systems is yet to be found. In this talk, we present magnetisation and resistivity measurements on new single crystals of the pyrochlore iridate Ho2Ir2O7, where the low-temperature holmium spin ice is coupled to antiferromagnetically ordered itinerant iridium moments. Our experiments, in combination with dipolar Monte Carlo simulations of the rare earth moments using an effective representation of the Ir moments as local fields [1], show that the magnetoresistance is highly sensitive to the density of monopoles in a way that holds promise to quantitatively measure it in experiments. We argue that this is due to iridium conduction electrons scattering off the Coulombic magnetic and dipolar electric [2] field of the monopoles. Our result provides a powerful and versatile new tool for the study of spin-ice systems. We further show that, for certain orientations, an applied magnetic field couples strongly to the antiferromagnetically ordered iridium ions via the large holmium moments. This allows us to manipulate the antiferromagnetic domains via an external field, which is evidenced by the appearance of a strong and correlated hysteresis in both the magnetisation and magnetoresistance. The precise control of antiferromagnetic domain walls is a key goal for the design of next-generation spintronic devices [3]. Our results provide a way forward to how this target may be achieved using externally applied magnetic fields and a peculiar interplay between frustrated localised moments and itinerant antiferromagnetism. [1] E. Lefrançois, V. Cathelin, E. Lhotel, J. Robert, P. Lejay, C. V. Colin, B. Canals, F. Damay, J. Ollivier, B. Fåk, L. C. Chapon, R. Ballou, and V. Simonet, Fragmentation in spin ice from magnetic charge injection. Nat. Commun. 8, 209 (2017). [2] D. I. Khomskii, Electric dipoles on magnetic monopoles in spin ice. Nat. Commun. 3, 904 (2012). [3] V. Baltz, A. Manchon, M. Tsoi, T. Moriyama, T. Ono, and Y. Tserkovnyak, Antiferromagnetic spintronics. Rev. Mod. Phys. 90, 015005 (2018).
10:30 - 10:50 Nikita Astrakhantsev (University of Zurich)
Broken-Symmetry Ground States of Quantum Magnetism on the Pyrochlore Lattice
The spin-1/2 Heisenberg model on the pyrochlore lattice is an iconic frustrated three-dimensional spin system with a rich phase diagram. Besides hosting several ordered phases, the case with only nearest-neighbor antiferromagnetic interactions is debated for potentially realizing a spin-liquid ground state. Here, we contest this hypothesis with an extensive numerical investigation using both exact diagonalization and several complementary variational techniques. Specifically, we employ a Pfaffian-type many-variable Monte Carlo ansatz and convolutional neural network quantum states for calculations with up to $3\times 4^3$ and $3 \times 3^3$ spins, respectively. We demonstrate that these techniques yield consistent results, allowing for reliable extrapolations to the thermodynamic limit. Our main results are (1) a determination of the phase transition between the putative spin liquid phase and the neighboring magnetically ordered phase and (2) a careful characterization of the ground state in terms of symmetry breaking tendencies. We find clear indications for spontaneously broken inversion and rotational symmetry, calling the quantum spin-liquid scenario into question. Our work showcases how many-variable variational techniques can be used to make progress in answering challenging questions about three-dimensional frustrated quantum magnets.
10:50 - 11:05 Robin Schäfer (MPI-PKS Dresden)
The pyrochlore S=1/2 Heisenberg antiferromagnet at finite temperature
We use a combination of three computational methods to investigate the notoriously difficult frustrated three dimensional pyrochlore $S=1/2$ quantum antiferromagnet, at finite temperature, $T$: canonical typicality for a finite cluster of $2\times 2 \times 2$ unit cells (i.e. $32$ sites), a finite-$T$ matrix product state method on a larger cluster with $48$ sites, and the numerical linked cluster expansion (NLCE) using clusters up to $25$ lattice sites, including non-trivial hexagonal and octagonal loops. We calculate thermodynamic properties (energy, specific heat capacity, entropy, susceptibility, magnetization) and the static structure factor. We find a pronounced maximum in the specific heat at $T = 0.57 J$, which is stable across finite size clusters and converged in the series expansion. At $T\approx 0.25J$ (the limit of convergence of our method), the residual entropy per spin is $0.47 k_B \ln2$, which is relatively large compared to other frustrated models at this temperature. We also observe a non-monotonic dependence on $T$ of the magnetization at low magnetic fields, reflecting the dominantly non-magnetic character of the low-energy states. A detailed comparison of our results to measurements for the $S=1$ material NaCaNi$_2$F$_7$ yields a rough agreement of the functional form of the specific heat maximum, which in turn differs from the sharper maximum of the heat capacity of the spin ice material Dy$_2$Ti$_2$O$_7$.
11:05 - 11:20 Imre Hagymási (MPI-PKS Dresden)
Possible inversion symmetry breaking in the S=1/2 pyrochlore Heisenberg magnet
We address the ground state properties of the long-standing and much-studied three dimensional quantum spin liquid candidate, the $S=\frac 1 2$ pyrochlore Heisenberg antiferromagnet. By using $SU(2)$ DMRG, we are able to access cluster sizes of up to 108 spins. Our most striking finding is a robust spontaneous inversion symmetry breaking, reflected in an energy density difference between the two sublattices of tetrahedra, familiar as a starting point of earlier perturbative treatments. We also determine the ground state energy, $E_0/N_\text{sites} = -0.488(12) J$, by combining extrapolations of DMRG with those of a numerical linked cluster expansion. These findings suggest a scenario in which a finite-temperature spin liquid regime gives way to a symmetry-broken state at low temperatures.
11:20 - 11:40 Siddhardh Morampudi (Massachusetts Institute of Technology)
Dynamics of Coulomb quantum spin liquids
Quantum spin liquids are phases of matter with fractionalized excitations and emergent gauge fields. However, distinct and unambiguous signatures of spin liquids are hard to come by, especially in the experimentally relevant scenario of non-zero temperatures. We show that the dynamics of spinon production in Coulomb spin liquids, as measured by the dynamic structure factor, contain such distinct signatures due to dramatic interaction effects and unusual energy scales. These include Sommerfeld enhancement of the pair production cross section due to the emergent Coulomb interaction and Cerenkov radiation due to the presence of slow photons. Varying the temperature provides characteristic changes in these effects due to the bath of thermal photons and magnetic monopoles. Our results are consistent with recent numerics in lattice models and point to the important role of interactions in understanding dynamics of spin liquids.
11:40 - 12:00 Salvatore Pace (University of Cambridge)
The emergent fine structure constant in quantum spin ice
Condensed matter systems act as mini-universes with emergent low-energy properties drastically different from those of the standard model. One prominent example is quantum spin ice, which hosts an emergent quantum electrodynamics (QED), one whose magnetic monopoles set it apart from the familiar QED of the world we live in. In addition to new exotic excitations, the emergent QED's parameters such as the speed of light and fine-structure constant can be vastly different from those in vacuum QED. Here, I discuss recent work where we calculate the emergent fine-structure constant in the canonical quantum spin ice model using exact diagonalization techniques. We find that it is more than an order of magnitude greater than that in vacuum QED. Furthermore, by engineering the microscopic Hamiltonian, we find that the fine-structure constant, the emergent speed of light, and all other parameters of the emergent QED are tunable. In particular, the fine-structure constant can be tuned all the way from zero up to what is believed to be the strongest possible coupling beyond which QED confines. This points to quantum spin ice as an ideal candidate to explore new regimes of QED in magnetic materials and synthetic realizations.
poster flash talks II
poster session II
Chair: Masafumi Udagawa
08:00 - 09:20 Frank Pollmann (Technical University Munich)
Dynamical properties of quantum spin systems from DMRG simulations
CLASSIFICATION, NEW SL, TOPO MAGNONS
Chair: Paul McClarty
09:30 - 09:55 Han Yan (Okinawa Institute of Science and Technology)
Rank-2 U(1) gauge theory and fracton excitations on the breathing pyrochlore lattice
H Yan, O Benton, LDC Jaubert, N Shannon On a lattice without Lorentz symmetry of spacetime, many exotic versions of gauge theory other than the conventional electrodynamics can emerge. Among them, the symmetric rank-2 U(1) gauge theories are particularly interesting [1]. With electric and gauge fields being symmetric tensors, these theories enforce not only charge but also dipole or other multipole conservation laws. As a consequence, they may host charge excitations, dubbed "fractons", that are entirely immobile in the system. Fracton physics has sparked tremendous interest across quantum information, beyond-topological-order quantum phases of matter, and high energy theory [2]. But relatively little is known about how to realise them in experiment, due to the complexity of the prototype fracton models. Here we report a surprisingly simple frustrated magnetic system that realizes a classical symmetric tensor gauge theory [3]. The model is a breathing pyrochlore lattice with dominant Heisenberg interactions between nearest spins, and Dzyaloshinskii–Moriya interactions on half the tetrahedra. We discover a classical spin liquid phase, with its low-energy effective theory described by a symmetric rank-2 electric field constrained by the Gauss's laws of a vector charge. Its vector charge excitations are fractons. We also make explicit experimental predictions of such a spin liquid. More specifically, the four-fold pinch point will be observed in neutron scattering. Given the common ingredients needed in the model, we hope it will guide future material synthesis in search of fracton matter. In particular, Yb-based breathing pyrochlores are identified to be promising candidates. [1] M. Pretko, Subdimensional Particle Structure of Higher Rank U(1) Spin Liquids, Phys. Rev. B 95, 115139 (2017) [2] M. Pretko, X. Chen, Y. You, Fracton phases of matter, Int. J. of Mod. Phys. A 35, 06, 2030003 (2020) [3] H. Yan, O. Benton, L. D. C. Jaubert, and Nic Shannon, Rank-2 U(1) Spin Liquid on the Breathing Pyrochlore Lattice, Phys. Rev. Lett. 124, 127203 (2020)
09:55 - 10:20 Yizhi You (Princeton University)
Emergent fractons and Elusive Bose Metal from frustrated magnetism
In this talk, I will show that the defects of the valence plaquette solid (VPS) order parameter, in addition to possessing non-trivial quantum numbers, have fracton mobility constraints in the VPS phase. The spinon inside a single vortex cannot move freely in any direction, while a dipolar pair of vortices with spinon pairs can only move perpendicular to its dipole moment. These mobility constraints, while they persist near the QCP, can potentially inhibit the condensation of spinons and preclude a continuous transition from the VPS to the Néel antiferromagnet. Instead, the VPS melting transition can be driven by the proliferation of spinon dipoles. In particular, we argue that a 2d VPS can melt into a stable gapless phase in the form of an algebraic bond liquid with algebraic correlations and long-range entanglement. Such a bond liquid phase yields a concrete example of the elusive 2d Bose metal with symmetry fractionalization and UV-IR mixing.
10:20 - 10:45 Owen Benton (MPI-PKS Dresden)
Topological Classification of Classical Spin Liquids
Classical spin liquids (CSLs) are high-entropy, low-temperature phases of matter formed in classical spin systems subject to a local constraint. They show up in some of the most celebrated problems in frustrated magnetism, most famously spin ice. CSLs can be considered as classical analogues of quantum spin liquids (QSLs), and are in some sense simpler than their quantum counterparts, being without quantum coherence. Strangely, however, while the classification of QSLs has a well-developed machinery via projective symmetry group analysis, CSLs have no such systematic classification scheme. Here, we propose an approach to classifying classical spin liquids using topological invariants. By this means we are able to distinguish both algebraic and short-range correlated CSLs from trivial paramagnetic states and to establish the possibility of topological phase transitions between classical spin liquids. Examples will be given of models in two and three dimensions which exhibit a series of such phase transitions. We also see that the concept of "symmetry protected topological phases" has relevance to CSLs, just as it does to quantum systems. This work thus establishes a new perspective on an important class of frustrated magnets.
10:45 - 11:10 Xue-Yang Song (Harvard University)
Dirac spin liquids on the square and triangular lattices: Unified theory for 2D quantum magnets
Quantum magnets provide the simplest example of strongly interacting quantum matter, yet they continue to resist a comprehensive understanding above one spatial dimension. We explore the Dirac spin liquid (DSL) on 2D lattices, a version of quantum electrodynamics (QED3) with four flavors of Dirac fermions coupled to photons. Importantly, its excitations include magnetic monopoles that drive confinement, and the symmetry actions on monopoles contain crucial information about the DSL states. The underlying band topology of the spinon insulators (e.g., a Wannier insulator protected by rotation symmetry) determines the elusive Berry phase of the monopoles. The stability of the DSL is enhanced on triangular lattices compared to square (bipartite) lattices. We obtain the universal signatures of the DSL on triangular and kagome lattices, including those of monopole excitations, as a guide to numerics and experiments on existing materials. Even when unstable, the DSL helps unify and organize the plethora of competing orders in correlated two-dimensional materials. Time permitting, I will describe recent results on chiral spin liquid states on the triangular lattice and the superconducting/metallic phases that emerge on lightly doping them.
11:10 - 11:35 Hwanbeom Cho (Oxford University)
Pressure-induced transition from Jeff=1/2 to S=1/2 states in \(CuAl_2O_4\)
The spin-orbit entangled (SOE) Jeff-state has been a fertile ground to study novel quantum phenomena. Contrary to the conventional weakly correlated Jeff=1/2 state of 4d and 5d transition metal compounds, the ground state of CuAl2O4 hosts a Jeff=1/2 state with a strong Coulomb correlation U. Here, we report that, surprisingly, the Cu2+ ions of CuAl2O4 overcome the otherwise usually strong Jahn-Teller distortion and instead stabilize the SOE state, although the cuprate has a relatively small spin-orbit coupling. From the x-ray absorption spectroscopy and high-pressure x-ray diffraction studies, we obtained definite evidence of the Jeff=1/2 state with a cubic lattice at ambient pressure. We also found a pressure-induced structural transition to a compressed tetragonal lattice hosting the spin-only S=1/2 state for pressures above Pc=8 GPa. This phase transition from the Mott-insulating Jeff=1/2 state to the S=1/2 state is a unique phenomenon and has not been reported before. Our study offers a rare example of the SOE Jeff-state under strong electron correlation and its pressure-induced transition to the S=1/2 state.
11:35 - 12:00 Miska Elliot (Oxford University)
Visualization of Isospin Momentum Texture of Dirac Magnons and Excitons in a Honeycomb Quantum Magnet
Band topology in electronic systems is known to have profound consequences on various observable properties in semi-metals and certain insulators. Insights from this field have reached into many areas of physics and in this talk we describe certain universal signatures of band topology in propagating bosonic quasi-particles. We show that the Dirac magnon material CoTiO3 provides an experimental demonstration of a universal winding of the inelastic neutron scattering intensity around linear touching points of magnetic excitations - magnons and spin-orbit excitons - that originates from the isospin texture of the quasiparticle wavefunction in momentum space. https://arxiv.org/abs/2007.04199
poster flash talks III
poster session III
Chair: Bella Lake
14:00 - 15:20 Jeffrey G. Rau (University of Windsor)
Tutorial talk: The physics of the Kitaev model and its realization in Kitaev materials
KITAEV
Chair: Ioannis Rousochatzakis
15:30 - 15:50 Matthias Gohlke (Okinawa Institute of Science and Technology)
Field-induced pseudo-Goldstone mode and nematic states in Kitaev magnets
The appearance of nontrivial phases in Kitaev materials exposed to an external magnetic field has recently been a subject of intensive studies. Here, we elucidate the relation between the field-induced ground states of the classical and quantum spin models proposed for such materials, by using infinite density matrix renormalization group and linear spin wave theory. We consider an extended Kitaev model with additional symmetric off-diagonal spin exchanges, $\Gamma$ and $\Gamma'$. Focusing on the magnetic field along the $[111]$ direction, we explain the origin of a nematic paramagnet, which breaks the lattice-rotational symmetry and exists in an extended window of magnetic field. We show, that this phenomenon can be understood as the effect of quantum order-by-disorder in the frustrated ferromagnet phase with a continuous manifold of degenerate ground states discovered in the corresponding classical model. We present dynamical spin structure factors to enable comparison with inelastic neutron scattering experiments on Kitaev materials in an external magnetic field along the $[111]$ direction. In particular, the nematic paramagnet exhibits a characteristic pseudo-Goldstone mode which results from the lifting of a continuous degeneracy via quantum fluctuations.
15:50 - 16:10 Jiucai Wang (Tsinghua University)
Multinode quantum spin liquids on the honeycomb lattice
Recently it was realized that the zigzag magnetic order in Kitaev materials can be stabilized by small negative off-diagonal interactions called the $\Gamma'$ terms. To fully understand the effect of the $\Gamma'$ interactions, we investigate the quantum $K$-$\Gamma$-$\Gamma'$ model on the honeycomb lattice using the variational Monte Carlo method. Two multinode Z$_2$ quantum spin liquids (QSLs) are found at $\Gamma'>0$, one of which is the previously found proximate Kitaev spin liquid called the PKSL14 state, which shares the same projective symmetry group (PSG) with the Kitaev spin liquid. A remarkable result is that a $\pi$-flux state with a distinct PSG appears at larger $\Gamma'$. The $\pi$-flux state is characterized by an enhanced periodic structure in the spinon dispersion in the original Brillouin zone (BZ), which is experimentally observable. Interestingly, two PKSL8 states are competing with the $\pi$-flux state and one of them can be stabilized by six-spin ring-exchange interactions. The physical properties of these nodal QSLs are studied by applying magnetic fields, and the results depend on the number of cones. Our study suggests that there exists a family of zero-flux QSLs that contain $6n+2, n\in\mathbb Z$ Majorana cones and a family of $\pi$-flux QSLs containing $4(6n+2)$ cones in the original BZ. It provides guidelines for the experimental realization of non-Kitaev QSLs in relevant materials.
16:10 - 16:30 Adhip Agarwala (Tata Institute of Fundamental Research, Bengaluru)
Gapless state of interacting Majorana fermions in a strain-induced Landau level
Mechanical strain can generate a pseudo-magnetic field, and hence Landau levels (LL), for low-energy excitations of quantum matter in two dimensions. We study the collective state of the fractionalised Majorana fermions arising from residual generic spin interactions in the central LL, where the projected Hamiltonian reflects the spin symmetries in intricate ways: emergent U(1) and particle-hole symmetries forbid any bilinear couplings, leading to an intrinsically strongly interacting system; also, they allow the definition of a filling fraction, which is fixed at 1/2. We argue that the resulting many-body state is gapless within our numerical accuracy, implying ultra-short-ranged spin correlations, while chirality correlators decay algebraically. This amounts to a Kitaev 'non-Fermi' spin liquid, and shows that interacting Majorana fermions can exhibit intricate behaviour akin to fractional quantum Hall physics in an insulating magnet.
16:30 - 16:50 Sambuddha Sanyal (IISER Tirupati)
Emergent moments and random singlet physics in a Majorana spin liquid
We exhibit an exactly solvable example of an SU(2)-symmetric Majorana spin liquid phase, in which quenched disorder leads to random-singlet phenomenology. More precisely, we argue that a strong-disorder fixed point controls the low-temperature susceptibility $\chi(T)$ of an exactly solvable $S=1/2$ model on the decorated honeycomb lattice with quenched bond disorder and/or vacancies, leading to $\chi(T) = {\mathcal C}/T+ {\mathcal D} T^{\alpha(T) - 1}$ where $\alpha(T) \rightarrow 0$ as $T \rightarrow 0$. The first term is a Curie tail that represents the emergent response of vacancy-induced spin textures spread over many unit cells: it is an intrinsic feature of the site-diluted system, rather than an extraneous effect arising from isolated free spins. The second term, common to both vacancy and bond disorder (with different $\alpha(T)$ in the two cases), is the response of a random singlet phase, familiar from random antiferromagnetic spin chains and the analogous regime in phosphorus-doped silicon (Si:P).
16:50 - 17:10 Wen-Han Kao (University of Minnesota)
Vacancy-induced Low-energy Density of States in the Kitaev Spin Liquid
Since 2006, the Kitaev honeycomb model has attracted significant attention due to the exactly solvable spin-liquid ground state with fractionalized Majorana excitations [1] and the possible materialization in magnetic Mott insulators with strong spin-orbit couplings [2]. Recently, the 5d-electron compound H$_{3}$LiIr$_{2}$O$_{6}$ has been shown to be a strong candidate for Kitaev physics, given the absence of a long-range ordered magnetic state [3]. In this work [4], we demonstrate that a finite density of random vacancies gives rise to a remarkable pile-up of low-energy states and is possibly related to the observed low-temperature upturn in the specific heat measurement of H$_{3}$LiIr$_{2}$O$_{6}$. We study both the free-flux and the vacancy-induced bound-flux background and their responses to an additional time-reversal symmetry-breaking term, which imitates the magnetic field in real experiments. [1] A. Kitaev, Ann. Phys. 321, 2 (2006). [2] G. Jackeli and G. Khaliullin, Phys. Rev. Lett. 102, 017205 (2009). [3] K. Kitagawa, T. Takayama, Y. Matsumoto, A. Kato, R. Takano, Y. Kishimoto, S. Bette, R. Dinnebier, G. Jackeli, and H. Takagi, Nature 554, 341 (2018). [4] W.-H. Kao, J. Knolle, G. B. Halász, R. Moessner, N. B. Perkins, arXiv:2007.11637 (2020).
17:10 - 17:30 Joshuah Heath (Boston College)
Evidence of a weakly-correlated Majorana liquid in the Kitaev magnet \(Ag_3LiIr_2O_6\)
Kitaev magnets have gained a huge amount of interest in recent years due to the possibility of such materials hosting a highly-entangled quantum spin liquid (QSL) ground state. Such materials are characterized by a honeycomb lattice structure and can be analytically solved by mapping the localized spins into itinerant and localized Majorana excitations. Interestingly, recent experiments on Ag$_3$LiIr$_2$O$_6$ have shown that spiral incommensurate order coexists with a metal-like phase characterized by a finite Sommerfeld coefficient. In this work, we describe this exotic phase by considering the effects of interaction on the statistical energy of Landau-Fermi quasiparticles near the Majorana Fermi surface. The resulting specific heat is found to be dominated by a quadratic-temperature dependence with a strong non-analytic contribution, in agreement with present experiments on the silver-lithium iridate at zero and finite external magnetic field. Co-Authors: Faranak Bahrami, Roman Movshovich, Xiao Chen, Kevin Bedell, & Fazel Tafti. Work at BC was supported by the John H. Rourke Endowment Fund and the NSF award number DMR–1708929. Work at Los Alamos was conducted under the auspices of the U.S. Department of Energy.
17:30 - 17:50 Kira Riedl (Goethe University)
Magnetoelastic coupling and effects of uniaxial strain in \(\alpha-RuCl_3\) from first principles
Kitaev materials are prime examples where the orbital and spin degrees of freedom cannot be understood separately, and instead are formulated jointly through so-called "pseudospins". In contrast to conventional spin-lattice coupling, the spin-orbital nature of the pseudospins foreshadows a much more delicate coupling to the lattice. Using large-scale first-principles simulations we obtain a magnetoelastic Hamiltonian of $\alpha$-RuCl$_3$ that reveals a highly nontrivial interplay of different magnetic interactions with the lattice [1]. We reproduce and explain recently measured magnetostriction [2], using exact diagonalization on our magnetoelastic model, disentangling contributions related to different anisotropic interactions and $g$ factors. Uniaxial strain perpendicular to the honeycomb planes is predicted to reorganize the relative coupling strengths, strongly enhancing the Kitaev interaction while simultaneously weakening the other anisotropic exchanges under compression. Uniaxial strain may therefore offer a fruitful route to experimentally tune $\alpha$-RuCl$_3$ nearer to the Kitaev limit. [1] D. A. S. Kaib, S. Biswas, K. Riedl, S. M. Winter, R. Valenti, arXiv:2008.08616 (2020). [2] S. Gass et al., PRB 101, 245158 (2020).
poster flash talks IV
poster session IV | CommonCrawl |
\begin{document}
\thispagestyle{plain}
\title{Commutators from a hyperplane of matrices}
\begin{abstract} Denote by $\operatorname{M}_n(\mathbb{K})$ the algebra of $n$ by $n$ matrices with entries in the field $\mathbb{K}$. A theorem of Albert and Muckenhoupt states that every trace zero matrix of $\operatorname{M}_n(\mathbb{K})$ can be expressed as $AB-BA$ for some pair $(A,B)\in \operatorname{M}_n(\mathbb{K})^2$. Assuming that $n>2$ and that $\mathbb{K}$ has more than $3$ elements, we prove that the matrices $A$ and $B$ can be required to belong to an arbitrary given hyperplane of $\operatorname{M}_n(\mathbb{K})$. \end{abstract}
\vskip 2mm \noindent \emph{AMS Classification:} 15A24, 15A30
\vskip 2mm \noindent \emph{Keywords:} commutator; trace; hyperplane; matrices
\vskip 4mm
\section{Introduction}
\subsection{The problem}
In this article, we let $\mathbb{K}$ be an arbitrary field. We denote by $\operatorname{M}_n(\mathbb{K})$ the algebra of square matrices with $n$ rows and entries in $\mathbb{K}$, and by $\frak{sl}_n(\mathbb{K})$ its hyperplane of trace zero matrices. The trace of a matrix $M \in \operatorname{M}_n(\mathbb{K})$ is denoted by $\operatorname{tr} M$. Given two matrices $A$ and $B$ of $\operatorname{M}_n(\mathbb{K})$, one sets $$[A,B]:=AB-BA,$$ known as the commutator, or Lie bracket, of $A$ and $B$. Obviously, $[A,B]$ belongs to $\frak{sl}_n(\mathbb{K})$. Although it is easy to see that the linear subspace spanned by the commutators is $\frak{sl}_n(\mathbb{K})$, it is more difficult to prove that every trace zero matrix is actually a commutator, a theorem which was first proved by Shoda \cite{Shoda} for fields of characteristic $0$, and later generalized to all fields by Albert and Muckenhoupt \cite{AlbertMuck}. Recently, exciting new developments on this topic have appeared: most notably, the long-standing conjecture that the result holds for all principal ideal domains has just been solved by Stasinski \cite{Stasinski} (the case of integers had been worked out earlier by Laffey and Reams \cite{LaffeyReams}).
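For a concrete illustration of the Albert-Muckenhoupt theorem in the smallest non-trivial case, note for instance that, in $\operatorname{M}_2(\mathbb{K})$, $$\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}=\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}-\begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix},$$ so that this particular trace zero matrix is indeed a commutator, as a direct computation confirms.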
Here, we shall consider the following variation of the above problem: \begin{center}Given a (linear) hyperplane $\mathcal{H}$ of $\operatorname{M}_n(\mathbb{K})$, is it true that every trace zero matrix is the commutator of two matrices of $\mathcal{H}$? \end{center}
Our first motivation is that this constitutes a natural generalization of the following result of Thompson:
\begin{theo}[Thompson, Theorem 5 of \cite{Thompson}]\label{hypercan} Assume that $n \geq 3$. Then, $[\frak{sl}_n(\mathbb{K}),\frak{sl}_n(\mathbb{K})]=\frak{sl}_n(\mathbb{K})$. \end{theo}
Another motivation stems from the following known theorem:
\begin{theo}[Proposition 4 of \cite{dSPlargedimprod}] Let $\mathcal{V}$ be a linear subspace of $\operatorname{M}_n(\mathbb{K})$ with $\operatorname{codim} \mathcal{V}<n-1$. Then, $\frak{sl}_n(\mathbb{K})=\operatorname{span} \bigl\{[A,B] \mid (A,B) \in \mathcal{V}^2\bigr\}$. \end{theo}
Thus, a natural question to ask is whether, in the above situation, every trace zero matrix is a commutator of two matrices of $\mathcal{V}$. Studying the case of hyperplanes is an obvious first step in that direction (and a rather non-trivial one, as we shall see).
An additional motivation is the corresponding result for products (instead of commutators) that we have obtained in \cite{dSPlargedimprod}:
\begin{theo}[Theorem 3 of \cite{dSPlargedimprod}] Let $\mathcal{H}$ be a (linear) hyperplane of $\operatorname{M}_n(\mathbb{K})$, with $n>2$. Then, every matrix of $\operatorname{M}_n(\mathbb{K})$ splits up as $AB$ for some $(A,B) \in \mathcal{H}^2$. \end{theo}
\subsection{Main result}
In the present paper, we shall prove the following theorem:
\begin{theo}\label{dSPcrochet} Assume that $\# \mathbb{K}>3$ and $n>2$. Let $\mathcal{H}$ be an arbitrary hyperplane of $\operatorname{M}_n(\mathbb{K})$. Then, every trace zero matrix of $\operatorname{M}_n(\mathbb{K})$ splits up as $AB-BA$ for some $(A,B)\in \mathcal{H}^2$. \end{theo}
Let us immediately discard an easy case. Assume that $\mathcal{H}$ does not contain the identity matrix $I_n$. Then, given $(A,B)\in \operatorname{M}_n(\mathbb{K})^2$, we have $$[\lambda I_n+A,\mu I_n+B]=[A,B]$$ for all $(\lambda,\mu)\in \mathbb{K}^2$, and obviously there is a unique pair $(\lambda,\mu)\in \mathbb{K}^2$ such that $\lambda I_n+A$ and $\mu I_n+B$ belong to $\mathcal{H}$. In that case, it follows from the Albert-Muckenhoupt theorem that every matrix of $\frak{sl}_n(\mathbb{K})$ is a commutator of matrices of $\mathcal{H}$. Thus, the only case left to consider is the one when $I_n \in \mathcal{H}$. As we shall see, this is a highly non-trivial problem. Our proof will broadly consist in refining Albert and Muckenhoupt's method.
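To make the pair $(\lambda,\mu)$ from the easy case above explicit, one can write $\mathcal{H}=\operatorname{Ker} \varphi$ for some non-zero linear form $\varphi$ on $\operatorname{M}_n(\mathbb{K})$; the assumption $I_n \not\in \mathcal{H}$ means precisely that $\varphi(I_n) \neq 0$, and the conditions $\lambda I_n+A \in \mathcal{H}$ and $\mu I_n+B \in \mathcal{H}$ then amount to $$\lambda=-\frac{\varphi(A)}{\varphi(I_n)} \quad \text{and} \quad \mu=-\frac{\varphi(B)}{\varphi(I_n)},$$ which determines the pair $(\lambda,\mu)$ uniquely.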
\paragraph{} The case $n=2$ can be easily described over any field:
\begin{prop} Let $\mathcal{H}$ be a hyperplane of $\operatorname{M}_2(\mathbb{K})$. \begin{enumerate}[(a)] \item If $\mathcal{H}$ contains $I_2$, then $[\mathcal{H},\mathcal{H}]$ is a $1$-dimensional linear subspace of $\operatorname{M}_2(\mathbb{K})$. \item If $\mathcal{H}$ does not contain $I_2$, then $[\mathcal{H},\mathcal{H}]=\frak{sl}_2(\mathbb{K})$. \end{enumerate} \end{prop}
\begin{proof} Point (b) has just been explained. Assume now that $I_2 \in \mathcal{H}$. Then, there are matrices $A$ and $B$ such that $(I_2,A,B)$ is a basis of $\mathcal{H}$. For all $(a,b,c,a',b',c')\in \mathbb{K}^6$, one finds $$[aI_2+bA+cB\,, \,a'I_2+b'A+c'B]=(bc'-b'c)[A,B].$$ Moreover, as $A$ is a $2 \times 2$ matrix and not a scalar multiple of the identity, it is similar to a companion matrix, whence the space of all matrices which commute with $A$ is $\operatorname{span}(I_2,A)$. This yields $[A,B] \neq 0$. As obviously $\mathbb{K}=\bigl\{bc'-b'c\mid (b,c,b',c')\in \mathbb{K}^4\bigr\}$, we deduce that $[\mathcal{H},\mathcal{H}]=\mathbb{K}\, [A,B]$ with $[A,B] \neq 0$. \end{proof}
\subsection{Additional definitions and notation}
\begin{itemize} \item Given a subset $\mathcal{X}$ of $\operatorname{M}_n(\mathbb{K})$, we set $$[\mathcal{X},\mathcal{X}]:=\bigl\{[A,B] \mid (A,B)\in \mathcal{X}^2\bigr\}.$$
\item The canonical basis of $\mathbb{K}^n$ is denoted by $(e_1,\dots,e_n)$.
\item Given a basis $\mathcal{B}$ of $\mathbb{K}^n$, the matrix of coordinates of $\mathcal{B}$ in the canonical basis of $\mathbb{K}^n$ is denoted by $P_\mathcal{B}$.
\item Given $i$ and $j$ in $\mathopen{[\![} 1,n\mathclose{]\!]}$, one denotes by $E_{i,j}$ the matrix of $\operatorname{M}_n(\mathbb{K})$ with all entries zero except the one at the $(i,j)$-spot, which equals $1$.
\item A matrix of $\operatorname{M}_n(\mathbb{K})$ is \textbf{cyclic} when its minimal polynomial has degree $n$ or, equivalently, when it is similar to a companion matrix.
\item The $n$ by $n$ nilpotent Jordan matrix is denoted by $$J_n=\begin{bmatrix} 0 & 1 & & (0) \\ & \ddots & \ddots & \\ & & \ddots & 1 \\ (0) & & & 0 \end{bmatrix}.$$
\item A Hessenberg matrix is a square matrix $A=(a_{i,j}) \in \operatorname{M}_n(\mathbb{K})$ in which $a_{i,j}=0$ whenever $i>j+1$. In that case, we set $$\ell(A):=\bigl\{j \in \mathopen{[\![} 1,n-1\mathclose{]\!]} : \; a_{j+1,j} \neq 0\bigr\}.$$
\item One equips $\operatorname{M}_n(\mathbb{K})$ with the non-degenerate symmetric bilinear form $$b : (M,N) \mapsto \operatorname{tr}(MN),$$ to which orthogonality refers in the rest of the article. \end{itemize} Given $A \in \operatorname{M}_n(\mathbb{K})$, one sets $$\operatorname{ad}_A : M \in \operatorname{M}_n(\mathbb{K}) \mapsto [A,M] \in \operatorname{M}_n(\mathbb{K}),$$ which is an endomorphism of the vector space $\operatorname{M}_n(\mathbb{K})$; its kernel is the centralizer $$\mathcal{C}(A):=\bigl\{M \in \operatorname{M}_n(\mathbb{K}) : AM=MA\bigr\}$$ of the matrix $A$. Recall the following nice description of the range of $\operatorname{ad}_A$, which follows from the rank theorem and the basic observation that $\operatorname{ad}_A$ is skew-symmetric for the bilinear form $(M,N) \mapsto \operatorname{tr}(MN)$:
\begin{lemma}\label{imad} Let $A \in \operatorname{M}_n(\mathbb{K})$. The range of $\operatorname{ad}_A$ is the orthogonal of $\mathcal{C}(A)$, that is the set of all $N \in \operatorname{M}_n(\mathbb{K})$ for which $$\forall B \in \mathcal{C}(A), \; \operatorname{tr}(B\,N)=0.$$ \end{lemma}
In particular, if $A$ is cyclic then its centralizer is $\mathbb{K}[A]=\operatorname{span}(I_n,A,\dots,A^{n-1})$, whence $\operatorname{Im} (\operatorname{ad}_A)$ is defined by a set of $n$ linear equations:
\begin{lemma}\label{imadcyclic} Let $A \in \operatorname{M}_n(\mathbb{K})$ be a cyclic matrix. The range of $\operatorname{ad}_A$ is the set of all $N \in \operatorname{M}_n(\mathbb{K})$ for which $$\forall k \in \mathopen{[\![} 0,n-1\mathclose{]\!]}, \; \operatorname{tr}(A^k\,N)=0.$$ \end{lemma}
\begin{Rem}\label{specialcasesremark} Interestingly, the two special cases below yield the strategy for Shoda's approach and Albert and Muckenhoupt's, respectively: \begin{enumerate}[(i)] \item Let $D$ be a diagonal matrix of $\operatorname{M}_n(\mathbb{K})$ with distinct diagonal entries. Then, the centralizer of $D$ is the space $\mathcal{D}_n(\mathbb{K})$ of all diagonal matrices, and hence $\operatorname{Im} \operatorname{ad}_D$ is the space of all matrices with diagonal zero. As every trace zero matrix that is not a scalar multiple of the identity is similar to a matrix with diagonal zero \cite{Fillmore}, Shoda's theorem of \cite{Shoda} follows easily.
\item Consider the case of the Jordan matrix $J_n$. As $J_n$ is cyclic, Lemma \ref{imadcyclic} yields that $\operatorname{Im} (\operatorname{ad}_{J_n})$ is the set of all matrices $A=(a_{i,j}) \in \operatorname{M}_n(\mathbb{K})$ for which $\underset{k=1}{\overset{n-\ell}{\sum}} a_{k+\ell,k}=0$ for all $\ell \in \mathopen{[\![} 0,n-1\mathclose{]\!]}$. In particular, if $A=(a_{i,j}) \in \operatorname{M}_n(\mathbb{K})$ is Hessenberg, then this condition is satisfied whenever $\ell>1$, and hence $A \in \operatorname{Im} (\operatorname{ad}_{J_n})$ if and only if $\operatorname{tr} A=0$ and $\underset{k=1}{\overset{n-1}{\sum}} a_{k+1,k}=0$. Albert and Muckenhoupt's proof is based upon the fact that, except for a few special cases, the similarity class of a matrix must contain a Hessenberg matrix $A$ that satisfies the extra equation $\underset{k=1}{\overset{n-1}{\sum}} a_{k+1,k}=0$. \end{enumerate} \end{Rem}
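For instance, for $n=3$, the conditions in point (ii) of the above remark are easily made explicit: since $J_3=E_{1,2}+E_{2,3}$ and $J_3^2=E_{1,3}$, Lemma \ref{imadcyclic} shows that a matrix $A=(a_{i,j}) \in \operatorname{M}_3(\mathbb{K})$ belongs to $\operatorname{Im}(\operatorname{ad}_{J_3})$ if and only if $$a_{1,1}+a_{2,2}+a_{3,3}=0, \quad a_{2,1}+a_{3,2}=0 \quad \text{and} \quad a_{3,1}=0.$$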
\section{Proof of the main theorem}\label{finalproofsection}
\subsection{Proof strategy}\label{proofstrategy}
Let $\mathcal{H}$ be a hyperplane of $\operatorname{M}_n(\mathbb{K})$. We already know that $[\mathcal{H},\mathcal{H}]=\mathfrak{sl}_n(\mathbb{K})$ if $I_n \not\in \mathcal{H}$. Thus, in the rest of the article, we will only consider the case when $I_n \in \mathcal{H}$.
Our proof will use three basic but potent principles:
\begin{enumerate}[(1)] \item Given $A \in \mathfrak{sl}_n(\mathbb{K})$, if some $A_1 \in \mathcal{H}$ satisfies $A \in \operatorname{Im} (\operatorname{ad}_{A_1})$ and $\mathcal{C}(A_1) \not\subset \mathcal{H}$, then $A \in [\mathcal{H},\mathcal{H}]$. Indeed, in that situation, we find $A_2 \in \operatorname{M}_n(\mathbb{K})$ such that $A=[A_1,A_2]$, together with some $A_3 \in \mathcal{C}(A_1)$ for which $A_3 \not\in \mathcal{H}$. Then, the affine line $A_2+\mathbb{K} A_3$ is included in the inverse image of $\{A\}$ by $\operatorname{ad}_{A_1}$ and it has exactly one common point with $\mathcal{H}$.
\item Let $(A,B)\in \mathfrak{sl}_n(\mathbb{K})^2$ and $\lambda \in \mathbb{K}$. If there are matrices $A_1$ and $A_2$ such that $A=[A_1,A_2]$ and $\operatorname{tr}(B\,A_1)=\operatorname{tr}(B\,A_2)=0$, then we also have $\operatorname{tr}((B-\lambda\,A)A_1)=\operatorname{tr}((B-\lambda\, A)A_2)=0$. \\ Indeed, equality $A=[A_1,A_2]$ ensures that $\operatorname{tr}(A\,A_1)=\operatorname{tr}(A\,A_2)=0$ (see Lemma \ref{imad}).
\vskip 2mm \item Let $(A,B)\in \operatorname{M}_n(\mathbb{K})^2$ and $P \in \operatorname{GL}_n(\mathbb{K})$. Setting $\mathcal{G}:=\{B\}^\bot$, we see that the assumption $A \in [\mathcal{G},\mathcal{G}]$ implies $PAP^{-1} \in [P\mathcal{G} P^{-1},P\mathcal{G} P^{-1}]$, while $P\mathcal{G} P^{-1}=\{PBP^{-1}\}^\bot$. \end{enumerate}
Now, let us give a rough idea of the proof strategy. One fixes $A \in \mathfrak{sl}_n(\mathbb{K})$ and aims at proving that $A \in [\mathcal{H},\mathcal{H}]$. We fix a non-zero matrix $B$ such that $\mathcal{H}=\{B\}^\bot$.
Our basic strategy is the Albert-Muckenhoupt method: we try to find a cyclic matrix $M$ in $\mathcal{H}$ such that $A \in \operatorname{Im}(\operatorname{ad}_M)$; if $A \not\in \operatorname{ad}_M(\mathcal{H})$, then we learn that $\mathcal{C}(M) \subset \mathcal{H}$ (see principle (1) above), which yields additional information on $B$. Most of the time, we will search for such a cyclic matrix $M$ among the nilpotent matrices with rank $n-1$. The most favorable situation is the one where $A$ is either upper-triangular or Hessenberg with enough non-zero sub-diagonal entries: in these cases, we search for a good matrix $M$ among the strictly upper-triangular matrices with rank $n-1$ (see Lemma \ref{prelimlemma}). If this method yields no solution, then we learn precious information on the simultaneous reduction of the endomorphisms $X \mapsto AX$ and $X \mapsto BX$. Using changes of bases, we shall see that either the above method delivers a solution for a pair $(A',B')$ that is simultaneously similar to $(A,B)$, in which case Principle (3) shows that we have a solution for $(A,B)$, or $(I_n,A,B)$ is locally linearly dependent (see the definition below), or else $n=3$ and $A$ is similar to $\lambda I_3+E_{2,3}$ for some $\lambda \in \mathbb{K}$. When $(I_n,A,B)$ is locally linearly dependent and $A$ is not of that special type, one uses the classification of locally linearly dependent triples to reduce the situation to the one where $B=I_n$, that is $\mathcal{H}=\mathfrak{sl}_n(\mathbb{K})$, and in that case the proof is completed by invoking Theorem \ref{hypercan}. Finally, the case when $A$ is similar to $\lambda I_3+E_{2,3}$ for some $\lambda \in \mathbb{K}$ will be dealt with independently (Section \ref{specialcasesection}) by applying Albert and Muckenhoupt's method for well-chosen companion matrices instead of a Jordan nilpotent matrix.
Let us finish these strategic considerations by recalling the notion of local linear dependence:
\begin{Def} Given vector spaces $U$ and $V$, linear maps $f_1,\dots,f_n$ from $U$ to $V$ are called locally linearly dependent (in abbreviated form: LLD) when the vectors $f_1(x),\dots,f_n(x)$ are linearly dependent for all $x \in U$. \end{Def}
We adopt a similar definition for matrices by referring to the linear maps that are canonically associated with these matrices.
\subsection{The basic lemma}
\begin{lemma}\label{prelimlemma} Let $(A,B) \in \frak{sl}_n(\mathbb{K})^2$ be with $B=(b_{i,j}) \neq 0$, and set $\mathcal{H}:=\{B\}^\bot$. In each one of the following cases, $A$ belongs to $[\mathcal{H},\mathcal{H}]$: \begin{enumerate}[(a)] \item $\# \mathbb{K}>2$, $A$ is upper-triangular and $B$ is not Hessenberg. \item $\# \mathbb{K}>3$, $A$ is Hessenberg and there exist $i \in \mathopen{[\![} 2,n-1\mathclose{]\!]}$ and $j \in \mathopen{[\![} 3,n\mathclose{]\!]} \smallsetminus \{i\}$ such that $\{1,i\} \subset \ell(A)$ and $b_{j,1} \neq 0$. \end{enumerate} \end{lemma}
\begin{proof} We use a \emph{reductio ad absurdum}, assuming that $A \not\in [\mathcal{H},\mathcal{H}]$. We write $A=(a_{i,j})$.
\begin{enumerate}[(a)] \item Assume that $\# \mathbb{K}>2$, that $A$ is upper-triangular and that $B$ is not Hessenberg. We choose a pair $(l,l')\in \mathopen{[\![} 1,n\mathclose{]\!]}^2$ such that $b_{l,l'}\neq 0$, with $l-l'$ maximal for such pairs. Thus, $l-l'>1$. Let $(x_1,\dots,x_{n-1}) \in (\mathbb{K}^*)^{n-1}$, and set $$\beta:=\frac{\underset{k=1}{\overset{n-1}{\sum}} b_{k+1,k}\,x_k}{b_{l,l'}} \quad \text{and} \quad M:=\underset{k=1}{\overset{n-1}{\sum}}x_k\,E_{k,k+1}-\beta\,E_{l',l.}$$ We see that $M$ is nilpotent of rank $n-1$, and hence it is cyclic. One notes that $M \in \mathcal{H}$. Moreover, $\operatorname{tr}(A M^k)=0$ for all $k \geq 1$, because $A$ is upper-triangular and $M$ is strictly upper-triangular, whereas $\operatorname{tr}(A)=0$ by assumption. Thus, $A \in \operatorname{Im} (\operatorname{ad}_M)$. As it is assumed that $A \not\in \operatorname{ad}_M(\mathcal{H})$, one deduces from principle (1) in Section \ref{proofstrategy} that $\mathcal{C}(M) \subset \mathcal{H}$; in particular $\operatorname{tr}(M^{l-l'} B)=0$, which, as $b_{i,j}=0$ whenever $i-j>l-l'$, reads $$b_{l-l'+1,1}\,x_1x_2 \cdots x_{l-l'}+b_{l-l'+2,2}\,x_2x_3\cdots x_{l-l'+1}+\cdots + b_{n,n-l+l'}\,x_{n-l+l'}\cdots x_{n-1}=0.$$ Here, we have a polynomial with degree at most $1$ in each variable $x_i$, and this polynomial vanishes at every $(x_1,\dots,x_{n-1}) \in (\mathbb{K}^*)^{n-1}$, with $\# \mathbb{K}^* \geq 2$. It follows that $b_{i,j}=0$ for all $(i,j)\in \mathopen{[\![} 1,n\mathclose{]\!]}^2$ with $i-j=l-l'$, and the special case $(i,j)=(l,l')$ yields a contradiction.
\vskip 2mm \item Now, we assume that $\# \mathbb{K}>3$, that $A$ is Hessenberg and that there exist $i \in \mathopen{[\![} 2,n\mathclose{]\!]}$ and $j \in \mathopen{[\![} 3,n\mathclose{]\!]} \smallsetminus \{i\}$ such that $\{1,i\} \subset \ell(A)$ and $b_{j,1} \neq 0$. The proof strategy is similar to the one of case (a), with additional technicalities. One chooses a pair $(l,l')\in \mathopen{[\![} 1,n\mathclose{]\!]}^2$ such that $b_{l,l'} \neq 0$, with $l-l'$ maximal for such pairs (again, the assumptions yield $l-l'\geq j-1>1$). As $a_{2,1} \neq 0$, no generality is lost in assuming that $a_{2,1}=1$. We introduce the formal polynomial $$\mathbf{p}:=\underset{k=1}{\overset{n-2}{\sum}}a_{k+2,k+1}\,\mathbf{x}_k \in \mathbb{K}[\mathbf{x}_1,\mathbf{x}_2,\dots,\mathbf{x}_{n-2}].$$ Let $(x_1,\dots,x_{n-2}) \in (\mathbb{K}^*)^{n-2}$, and set $$\alpha:=\mathbf{p}(x_1,\dots,x_{n-2}) \quad \text{and} \quad \beta:=\frac{\alpha\,b_{2,1}- \underset{k=1}{\overset{n-2}{\sum}}x_k \, b_{k+2,k+1}}{b_{l,l'}}\cdot$$ Finally, set $$M:=-\alpha\,E_{1,2}+\underset{k=1}{\overset{n-2}{\sum}}x_k\,E_{k+1,k+2}+\beta\,E_{l',l}.$$ The definition of $M$ shows that $\operatorname{tr}(MA)=\operatorname{tr}(MB)=0$, and in particular $M \in \mathcal{H}$. Assume now that $\mathbf{p}(x_1,\dots,x_{n-2}) \neq 0$. Then, $M$ is cyclic as it is nilpotent with rank $n-1$. As $A$ is Hessenberg, we also see that $\operatorname{tr}(M^k\,A)=0$ for all $k \geq 2$. Thus, $\operatorname{tr}(M^kA)=0$ for every non-negative integer $k$, and hence Lemma \ref{imadcyclic} yields $A \in \operatorname{Im} (\operatorname{ad}_M)$. It ensues that $\mathcal{C}(M) \subset \mathcal{H}$, and in particular $\operatorname{tr}(M^{j-1} B)=0$. As $l-l'>1$, we see that, for all $(a,b)\in \mathopen{[\![} 1,n\mathclose{]\!]}^2$ with $b-a \leq l-l'$, and every integer $c>1$, the matrices $M^c$ and $\Bigl(-\alpha\,E_{1,2}+\underset{k=1}{\overset{n-2}{\sum}}x_k\,E_{k+1,k+2}\Bigr)^c$ have the same entry at the $(a,b)$-spot; in particular, for all $k \in \mathopen{[\![} 2,n-j+1\mathclose{]\!]}$, the entry of $M^{j-1}$ at the $(k,j+k-1)$-spot is $x_{k-1}x_k \cdots x_{k-3+j}$, and the entry of $M^{j-1}$ at the $(1,j)$-spot is $-\alpha\, x_1\cdots x_{j-2}$; moreover, for all $(a,b)\in \mathopen{[\![} 1,n\mathclose{]\!]}^2$ with $b-a \leq \ell-\ell'$ and $b-a \neq j-1$, the entry of $M^{j-1}$ at the $(a,b)$-spot is $0$. Therefore, equality $\operatorname{tr}(M^{j-1}B)=0$ yields $$-b_{j,1}\,\alpha\,x_1\cdots x_{j-2}+b_{j+1,2}\,x_1\cdots x_{j-1}+ b_{j+2,3}\,x_2\cdots x_j+\cdots+b_{n,n-j+1}\,x_{n-j}\cdots x_{n-2}=0.$$ We conclude that we have established the following identity: for the polynomial $$\mathbf{q}:=\mathbf{p}\times \Bigl(-b_{j,1}\,\mathbf{p}\,\mathbf{x}_1\cdots \mathbf{x}_{j-2}+b_{j+1,2}\,\mathbf{x}_1\cdots \mathbf{x}_{j-1}+ b_{j+2,3}\,\mathbf{x}_2\cdots \mathbf{x}_j+\cdots+b_{n,n-j+1}\,\mathbf{x}_{n-j}\cdots \mathbf{x}_{n-2}\Bigr),$$ we have $$\forall (x_1,\dots,x_{n-2}) \in (\mathbb{K}^*)^{n-2}, \quad \mathbf{q}(x_1,\dots,x_{n-2})=0.$$ Noting that $\mathbf{q}$ has degree at most $3$ in each variable, we split the discussion into two main cases.
\textbf{Case 1. $\# \mathbb{K}>4$.} \\ Then, $\# \mathbb{K}^*> 3$ and hence $\mathbf{q}=0$. As $\mathbf{p} \neq 0$ (remember that $a_{i+1,i}\neq 0$), it follows that $$-b_{j,1}\,\mathbf{p}\,\mathbf{x}_1\cdots \mathbf{x}_{j-2}+b_{j+1,2}\,\mathbf{x}_1\cdots \mathbf{x}_{j-1}+ b_{j+2,3}\,\mathbf{x}_2\cdots \mathbf{x}_j+\cdots+b_{n,n-j+1}\,\mathbf{x}_{n-j}\cdots \mathbf{x}_{n-2}=0.$$ As $b_{j,1} \neq 0$, identifying the coefficients of the monomials of type $\mathbf{x}_1\cdots \mathbf{x}_{j-2}\mathbf{x}_k$ with $k \in \mathopen{[\![} 1,n-2\mathclose{]\!]} \smallsetminus \{j-1\}$ leads to $a_{k+2,k+1}=0$ for all such $k$. This contradicts the assumption that $a_{i+1,i} \neq 0$.
\vskip 2mm \textbf{Case 2. $\# \mathbb{K}=4$.} \\ A polynomial of $\mathbb{K}[t]$ which vanishes at every non-zero element of $\mathbb{K}$ must be a multiple of $t^3-1$. In particular, if such a polynomial has degree at most $3$, we may write it as $\alpha_3\,t^3+\alpha_2\,t^2+\alpha_1\,t+\alpha_0$, and we obtain $\alpha_3=-\alpha_0$. From there, we split the discussion into two subcases.
\noindent \textbf{Subcase 2.1.} $i >j$. \\ Then, $\mathbf{q}$ has degree at most $2$ in $\mathbf{x}_{i-1}$. Thus, if we see $\mathbf{q}$ as a polynomial in the sole variable $\mathbf{x}_{i-1}$, the coefficients of this polynomial must vanish for every specialization of $\mathbf{x}_1,\dots,\mathbf{x}_{i-2},\mathbf{x}_i,\dots,\mathbf{x}_{n-2}$ in $\mathbb{K}^*$; extracting the coefficients of $(\mathbf{x}_{i-1})^2$ leads to the identity $$\forall (x_1,\dots,x_{i-2},x_i,\dots,x_{n-2})\in (\mathbb{K}^*)^{n-3}, \quad -b_{j,1} (a_{i+1,i})^2 \,x_1\cdots x_{j-2}+\mathbf{r}(x_1,\dots,x_{n-2})=0$$ where $\mathbf{r}=\underset{k=i-j+1}{\overset{n-j}{\sum}} a_{i+1,i}\,b_{j+k,k+1}\, \mathbf{x}_k \cdots \mathbf{x}_{i-2} \mathbf{x}_{i} \cdots \mathbf{x}_{j-2+k}$. Noting that the degree of $-b_{j,1} (a_{i+1,i})^2\, \mathbf{x}_1\cdots \mathbf{x}_{j-2}+\mathbf{r}$ is at most $1$ in each variable, we deduce that this polynomial is zero. This contradicts the fact that the coefficient of $\mathbf{x}_1\cdots \mathbf{x}_{j-2}$ is $-b_{j,1} (a_{i+1,i})^2$, which is non-zero according to our assumptions.
\noindent \textbf{Subcase 2.2.} $i<j$. \\ Let us fix $x_1,\dots,x_{i-2},x_i,\dots,x_{n-2}$ in $\mathbb{K}^*$. The coefficient of $\mathbf{q}(x_1,\dots,x_{i-2},\mathbf{x}_{i-1},x_i,\dots,x_{n-2})$ with respect to $(\mathbf{x}_{i-1})^3$ is $$-b_{j,1}(a_{i+1,i})^2\,x_1\cdots x_{i-2}x_i\cdots x_{j-2}.$$ On the other hand, with $$\mathbf{s}:=\underset{i \leq k \leq n-j}{\sum}b_{j+k,k+1} \underset{\ell=k}{\overset{j-2+k}{\prod}} \mathbf{x}_\ell,$$ the coefficient of $\mathbf{q}(x_1,\dots,x_{i-2},\mathbf{x}_{i-1},x_i,\dots,x_{n-2})$ with respect to $(\mathbf{x}_{i-1})^0$ is $$\mathbf{s}(x_1,\dots,x_{i-2},x_i,\dots,x_{n-2})\,\underset{k \in \mathopen{[\![} 1,n-2\mathclose{]\!]} \smallsetminus \{i-1\}}{\sum}\,a_{k+2,k+1}\, x_k.$$ Therefore, \begin{multline*} \forall (x_1,\dots,x_{n-2})\in (\mathbb{K}^*)^{n-2}, \quad \\ b_{j,1}(a_{i+1,i})^2\,x_1\cdots x_{i-2}x_i\cdots x_{j-2} =\mathbf{s}(x_1,\dots,x_{i-2},x_i,\dots,x_{n-2}) \\ \times \underset{k \in \mathopen{[\![} 1,n-2\mathclose{]\!]} \smallsetminus \{i-1\}}{\sum}\,a_{k+2,k+1}\, x_k. \end{multline*} On both sides of this equality, we have polynomials of degree at most $2$ in each variable. As $\# (\mathbb{K}^*) >2$, we deduce the identity $$b_{j,1}(a_{i+1,i})^2\,\mathbf{x}_1\dots \mathbf{x}_{i-2}\mathbf{x}_i\cdots \mathbf{x}_{j-2} =\mathbf{s}\times \underset{k \in \mathopen{[\![} 1,n-2\mathclose{]\!]} \smallsetminus \{i-1\}}{\sum}\,a_{k+2,k+1}\, \mathbf{x}_k.$$ However, on the left-hand side of this identity is a non-zero homogeneous polynomial of degree $j-3$, whereas its right-hand side is a homogeneous polynomial of degree $j$. There lies a final contradiction. \end{enumerate} \end{proof}
\subsection{Reduction to the case when $I_n,A,B$ are locally linearly dependent}
In this section, we use Lemma \ref{prelimlemma} to prove the following result:
\begin{lemma}\label{reductionlemma} Assume that $\# \mathbb{K} > 3$, let $(A,B) \in \frak{sl}_n(\mathbb{K})^2$ be such that $B \neq 0$, and set $\mathcal{H}:=\{B\}^\bot$. Then, either $A \in [\mathcal{H},\mathcal{H}]$, or $(I_n,A,B)$ is LLD, or $A$ is similar to $\lambda I_3+E_{2,3}$ for some $\lambda \in \mathbb{K}$. \end{lemma}
In order to prove Lemma \ref{reductionlemma}, one needs two preliminary results. The first one is a basic result in the theory of matrix spaces with rank bounded above.
\begin{lemma}[Lemma 2.4 of \cite{dSPLLD1}]\label{decompositionlemma} Let $m,n,p,q$ be positive integers, and $\mathcal{V}$ be a linear subspace of $\operatorname{M}_{m+p,n+q}(\mathbb{K})$ in which every matrix splits up as $$M=\begin{bmatrix} A(M) & [?]_{m \times q} \\ [0]_{p \times n} & B(M) \end{bmatrix}$$ where $A(M) \in \operatorname{M}_{m,n}(\mathbb{K})$ and $B(M) \in \operatorname{M}_{p,q}(\mathbb{K})$. Assume that there is an integer $r$ such that $\forall M \in \mathcal{V}, \; \operatorname{rk} M \leq r <\# \mathbb{K}$, and set $s:=\max \{\operatorname{rk} A(M) \mid M \in \mathcal{V}\}$ and $t:=\max \{\operatorname{rk} B(M) \mid M \in \mathcal{V}\}$. Then, $s+t \leq r$. \end{lemma}
\begin{lemma}\label{eigenvectorslemma} Assume that $\# \mathbb{K} \geq 3$. Let $V$ be a vector space over $\mathbb{K}$ and $u$ be an endomorphism of $V$ that is not a scalar multiple of the identity. Then, there are two linearly independent non-eigenvectors of $u$. \end{lemma}
\begin{proof}[Proof of Lemma \ref{eigenvectorslemma}] As $u$ is not a scalar multiple of the identity, some vector $x \in V \smallsetminus \{0\}$
is not an eigenvector of $u$. Then, the $2$-dimensional subspace $P:=\operatorname{span}(x,u(x))$ contains $x$. As $u_{|P}$ is not a scalar multiple of the identity, $u$ stabilizes at most two $1$-dimensional subspaces of $P$. As $\# \mathbb{K}>2$, there are at least four $1$-dimensional subspaces of $P$, whence at least two of them are not stable under $u$. This proves our claim. \end{proof}
Now, we are ready to prove Lemma \ref{reductionlemma}.
\begin{proof}[Proof of Lemma \ref{reductionlemma}] Throughout the proof, we assume that $A \not\in [\mathcal{H},\mathcal{H}]$ and that there is no scalar $\lambda$ such that $A$ is similar to $\lambda I_3+E_{2,3}$. Our aim is to show that $(I_n,A,B)$ is LLD.
Note that, for all $P \in \operatorname{GL}_n(\mathbb{K})$, no pair $(M,N)\in \operatorname{M}_n(\mathbb{K})^2$ satisfies both $[M,N]=P^{-1}AP$ and $\operatorname{tr}((P^{-1}BP) M)=\operatorname{tr}((P^{-1}BP) N)=0$.
Let us say that a vector $x \in \mathbb{K}^n$ has \textbf{order $3$} when $\operatorname{rk}(x,Ax,A^2x)=3$. Let $x \in \mathbb{K}^n$ be of order $3$. Then, $(x,Ax,A^2x)$ may be extended into a basis $\mathbf{B}=(x_1,x_2,x_3,x_4,\dots,x_n)$ of $\mathbb{K}^n$ such that $A':=P_\mathbf{B}^{-1}AP_\mathbf{B}$ is Hessenberg\footnote{One finds such a basis by induction as follows: one sets $(x_1,x_2,x_3):=(x,Ax,A^2x)$ and, given $k \in \mathopen{[\![} 4,n\mathclose{]\!]}$ such that $x_1,\dots,x_{k-1}$ are defined, one sets $x_k:=Ax_{k-1}$ if $Ax_{k-1}\not\in \operatorname{span}(x_1,\dots,x_{k-1})$, otherwise one chooses an arbitrary vector $x_k \in \mathbb{K}^n \smallsetminus \operatorname{span}(x_1,\dots,x_{k-1})$.}. Moreover, one sees that $\{1,2\} \subset \ell(A')$. Applying point (a) of Lemma \ref{prelimlemma}, one obtains that the entries in the first column of $P^{-1}_\mathbf{B}\,B\,P_\mathbf{B}$ are all zero starting from the third one, which means that $Bx \in \operatorname{span}(x,Ax)$.
Let now $x \in \mathbb{K}^n$ be a vector that is not of order $3$. If $x$ and $Ax$ are linearly dependent, then $x$, $Ax$, $Bx$ are linearly dependent. Thus, we may assume that $\operatorname{rk}(x,Ax)=2$ and $A^2 x \in \operatorname{span}(x,Ax)$. We split $\mathbb{K}^n=\operatorname{span}(x,Ax)\oplus F$ and we choose a basis $(f_3,\dots,f_n)$ of $F$. For $\mathbf{B}:=(x,Ax,f_3,\dots,f_n)$, we now have, for some $(\alpha,\beta)\in \mathbb{K}^2$ and some $N \in \operatorname{M}_{n-2}(\mathbb{K})$, $$P_{\mathbf{B}}^{-1}\,A\,P_\mathbf{B}=\begin{bmatrix} K & [?]_{2 \times (n-2)} \\ [0]_{(n-2) \times 2} & N \end{bmatrix} \quad \text{where $K=\begin{bmatrix} 0 & \alpha \\ 1 & \beta \end{bmatrix}$.}$$ From there, we split the discussion into several cases, depending on the form of $N$ and its relationship with $K$.
\noindent \textbf{Case 1.} $N \not\in \mathbb{K} I_{n-2}$. \\ Then, there is a vector $y \in \mathbb{K}^{n-2}$ for which $y$ and $Ny$ are linearly independent. Denoting by $z$ the vector of $F$ with coordinate list $y$ in $(f_3,\dots,f_n)$, one obtains $\operatorname{rk}(x,Ax,z,Az)=4$, and hence one may extend $(x,Ax,z,Az)$ into a basis $\mathbf{B}'$ of $\mathbb{K}^n$ such that $A':=P_{\mathbf{B}'}^{-1}AP_{\mathbf{B}'}$ is Hessenberg with $\{1,3\} \subset \ell(A')$. Point (b) of Lemma \ref{prelimlemma} shows that, in the first column of $P_{\mathbf{B}'}^{-1}BP_{\mathbf{B}'}$, all the entries must be zero starting from the fourth one, yielding $Bx \in \operatorname{span}(x,Ax,z)$. As $N \not\in \mathbb{K} I_{n-2}$, we know from Lemma \ref{eigenvectorslemma} that we may find another vector $z' \in F \smallsetminus \mathbb{K} z$ such that $\operatorname{rk}(x,Ax,z',Az')=4$, which yields $Bx \in \operatorname{span}(x,Ax,z')$. Thus, $Bx \in \operatorname{span}(x,Ax,z) \cap \operatorname{span}(x,Ax,z')=\operatorname{span}(x,Ax)$.
\vskip 3mm \noindent \textbf{Case 2.} $N=\lambda\,I_{n-2}$ for some $\lambda \in \mathbb{K}$. \\ \textbf{Subcase 2.1.} $\lambda$ is not an eigenvalue of $K$. \\ Then, $G:=\operatorname{Ker}(A-\lambda I_n)$ has dimension $n-2$. For $z \in \mathbb{K}^n$, denote by $p_z$ the monic generator of the ideal $\{q \in \mathbb{K}[t] : \; q(A)z=0\}$. Recall that, given $y$ and $z$ in $\mathbb{K}^n$ for which $p_y$ and $p_z$ are mutually prime, one has $p_{y+z}=p_y p_z$. In particular, as $p_x$ has degree $2$, $p_z$ has degree $3$ for every $z \in (\mathbb{K} x\oplus G) \smallsetminus (\mathbb{K} x \cup G)$, that is every $z$ in $(\mathbb{K} x\oplus G) \smallsetminus (\mathbb{K} x \cup G)$ has order $3$; thus, $\operatorname{rk}(z,Az,Bz) \leq 2$ for all such $z$. Moreover, it is obvious that $\operatorname{rk}(z,Az,Bz) \leq 2$ for all $z \in G$.
Let us choose a non-zero linear form $\varphi$ on $\mathbb{K} x\oplus G$ such that $\varphi(x)=0$. For every $z \in \mathbb{K} x\oplus G$, set $$M(z)=\begin{bmatrix} \varphi(z) & 0 & 0 & 0 \\ [0]_{n \times 1} & z & Az & Bz \end{bmatrix} \in \operatorname{M}_{n+1,4}(\mathbb{K}).$$ Then, with the above results, we know that $\operatorname{rk} M(z) \leq 3$ for all $z \in \mathbb{K} x\oplus G$. On the other hand, $\max \{\operatorname{rk} \varphi(z) \mid z \in (\mathbb{K} x\oplus G)\}=1$. Using Lemma \ref{decompositionlemma}, we deduce that $\operatorname{rk}(z,Az,Bz) \leq 2$ for all $z \in \mathbb{K} x\oplus G$. In particular, $\operatorname{rk} (x,Ax,Bx) \leq 2$.
\vskip 2mm \noindent \textbf{Subcase 2.2.} $\lambda$ is an eigenvalue of $K$ with multiplicity $1$. \\ Then, there are eigenvectors $y$ and $z$ of $A$, with distinct corresponding eigenvalues, such that $x=y+z$. Thus, $(y,z)$ may be extended into a basis $\mathbf{B}'$ of $\mathbb{K}^n$ such that $P_{\mathbf{B}'}^{-1} A P_{\mathbf{B}'}$ is upper-triangular. It follows from point (a) of Lemma \ref{prelimlemma} that $P_{\mathbf{B}'}^{-1} B P_{\mathbf{B}'}$ is Hessenberg, and in particular $By \in \operatorname{span}(y,z)$. Starting from $(z,y)$ instead of $(y,z)$, one finds $Bz \in \operatorname{span}(y,z)$. Therefore, all the vectors $y+z$, $A(y+z)$ and $B(y+z)$ belong to the $2$-dimensional space $\operatorname{span}(y,z)$, which yields $\operatorname{rk} (x,Ax,Bx) \leq 2$.
\vskip 2mm \noindent \textbf{Subcase 2.3.} $\lambda$ is an eigenvalue of $K$ with multiplicity $2$ . \\ Then, the characteristic polynomial of $A$ is $(t-\lambda)^n$. \begin{itemize} \item Assume that $n \geq 4$. One chooses an eigenvector $y$ of $A$ in $\operatorname{span}(x,Ax)$, so that $(y,x)$ is a basis of $\operatorname{span}(x,Ax)$. Then, one chooses an arbitrary non-zero vector $u \in F$, and one extends $(y,x,u)$ into a basis $\mathbf{B}'$ of $\mathbb{K}^n$ such that $P_{\mathbf{B}'}^{-1} A P_{\mathbf{B}'}$ is upper-triangular. Applying point (a) of Lemma \ref{prelimlemma} once more yields $Bx \in \operatorname{span}(y,x,u)=\operatorname{span}(x,Ax,u)$. As $n \geq 4$, we can choose another vector $v \in F \smallsetminus \mathbb{K} u$, and the above method yields $Bx \in \operatorname{span}(x,Ax,v)$, while $x,Ax,u,v$ are linearly independent. Therefore, $Bx \in \operatorname{span}(x,Ax,u) \cap \operatorname{span}(x,Ax,v)=\operatorname{span}(x,Ax)$.
\item Finally, assume that $n=3$. As $A$ is not similar to $\lambda I_3+E_{2,3}$, the only remaining option is that $\operatorname{rk}(A-\lambda I_3)=2$. Then, we can find a linear form $\varphi$ on $\mathbb{K}^3$ with kernel $\operatorname{Ker} (A-\lambda I_3)^2$. Every vector $z \in \mathbb{K}^3 \smallsetminus \operatorname{Ker} (A-\lambda I_3)^2$ has order $3$. Therefore, for every $z \in \mathbb{K}^3$, either $\varphi(z)=0$ or $\operatorname{rk} (z,Az,Bz)\leq 2$. With the same line of reasoning as in Subcase 2.1, we obtain $\operatorname{rk} (x,Ax,Bx)\leq 2$. This completes the proof. \end{itemize} \end{proof}
Thus, only two situations are left to consider: the one where $(I_n,A,B)$ is LLD, and the one where $A$ is similar to $\lambda I_3+E_{2,3}$ for some $\lambda \in \mathbb{K}$. They are dealt with separately in the next two sections.
\subsection{The case when $(I_n,A,B)$ is locally linearly dependent}
In order to analyze the situation where $(I_n,A,B)$ is LLD, we use the classification of LLD triples over fields with more than $2$ elements (this result is found in \cite{dSPLLD2}; prior to that, the result was known for infinite fields \cite{BresarSemrl} and for fields with more than $4$ elements \cite{ChebotarSemrl}).
\begin{theo}[Classification theorem for LLD triples]\label{LLDclass} Let $(f,g,h)$ be an LLD triple of linear operators from a vector space $U$ to a vector space $V$, where the underlying field has more than $2$ elements. Assume that $f,g,h$ are linearly independent and that $\operatorname{Ker} f \cap \operatorname{Ker} g \cap \operatorname{Ker} h=\{0\}$ and $\operatorname{Im} f+\operatorname{Im} g+\operatorname{Im} h=V$. Then: \begin{enumerate}[(a)] \item Either there is a $2$-dimensional subspace $\mathcal{P}$ of $\operatorname{span}(f,g,h)$ and a $1$-dimensional subspace $\mathcal{D}$ of $V$ such that $\operatorname{Im} u \subset \mathcal{D}$ for all $u \in \mathcal{P}$; \item Or $\dim V \leq 2$; \item Or $\dim U=\dim V=3$ and there are bases of $U$ and $V$ in which the operator space $\operatorname{span}(f,g,h)$ is represented by the space $\operatorname{A}_3(\mathbb{K})$ of all $3 \times 3$ alternating matrices. \end{enumerate} \end{theo}
\begin{cor}\label{LLDavecid} Assume that $\# \mathbb{K}>2$, and let $A$ and $B$ be matrices of $\operatorname{M}_n(\mathbb{K})$, with $n \geq 3$, such that $(I_n,A,B)$ is LLD. Then, either $I_n,A,B$ are linearly dependent, or there is a $1$-dimensional subspace $\mathcal{D}$ of $\mathbb{K}^n$ and scalars $\lambda$ and $\mu$ such that $\operatorname{Im}(A-\lambda I_n)=\mathcal{D}=\operatorname{Im}(B-\mu I_n)$. \end{cor}
\begin{proof} Assume that $I_n,A,B$ are linearly independent. As $\operatorname{Ker} I_n=\{0\}$ and $\operatorname{Im} I_n=\mathbb{K}^n$, we are in the position to use Theorem \ref{LLDclass}. Moreover, $\operatorname{rk} I_n>2$ discards Cases (b) and (c) altogether (as no $3 \times 3$ alternating matrix is invertible). Therefore, we have a $2$-dimensional subspace $\mathcal{P}$ of $\operatorname{span}(I_n,A,B)$ and a $1$-dimensional subspace $\mathcal{D}$ of $\mathbb{K}^n$ such that $\operatorname{Im} M \subset \mathcal{D}$ for all $M \in \mathcal{P}$. In particular $I_n \not\in \mathcal{P}$, whence $\operatorname{span}(I_n,A,B)=\mathbb{K} I_n \oplus \mathcal{P}$. This yields a pair $(\lambda,M_1)\in \mathbb{K} \times \mathcal{P}$ such that $A=\lambda I_n+M_1$, and hence $\operatorname{Im}(A-\lambda I_n) \subset \mathcal{D}$. As $A-\lambda I_n \neq 0$ (we have assumed that $I_n,A,B$ are linearly independent), we deduce that $\operatorname{Im}(A-\lambda I_n) =\mathcal{D}$. Similarly, one finds a scalar $\mu$ such that $\operatorname{Im}(B-\mu I_n) =\mathcal{D}$. \end{proof}
From there, we can prove the following result as a consequence of Theorem \ref{hypercan}:
\begin{lemma}\label{LLDcase} Assume that $\# \mathbb{K} >3$ and $n \geq 3$. Let $(A,B) \in \frak{sl}_n(\mathbb{K})^2$ be with $B \neq 0$, and set $\mathcal{H}:=\{B\}^\bot$. Assume that $(I_n,A,B)$ is LLD and that $A$ is not similar to $\lambda I_3+E_{2,3}$ for some $\lambda \in \mathbb{K}$. Then, $A \in [\mathcal{H},\mathcal{H}]$. \end{lemma}
\begin{proof} We use a \emph{reductio ad absurdum} by assuming that $A \not\in [\mathcal{H},\mathcal{H}]$. By Corollary \ref{LLDavecid}, we can split the discussion into two main cases.
\noindent \textbf{Case 1.} $I_n,A,B$ are linearly dependent. \\ Assume first that $A \in \mathbb{K} I_n$. Then, $P^{-1}AP$ is upper-triangular for every $P \in \operatorname{GL}_n(\mathbb{K})$, and hence Lemma \ref{prelimlemma} yields that $P^{-1}BP$ is Hessenberg for every such $P$. In particular, let $x \in \mathbb{K}^n \smallsetminus \{0\}$. For every $y \in \mathbb{K}^n \smallsetminus \mathbb{K} x$, we can extend $(x,y)$ into a basis $(x,y,y_3,\dots,y_n)$ of $\mathbb{K}^n$, and hence we learn that $Bx \in \operatorname{span}(x,y)$. Using the basis $(x,y_3,y,y_4,\dots,y_n)$, we also find $Bx \in \operatorname{span}(x,y_3)$, whence $Bx \in \mathbb{K} x$. Varying $x$, we deduce that $B \in \mathbb{K} I_n$, whence $\mathcal{H}=\frak{sl}_n(\mathbb{K})$. Theorem \ref{hypercan} then yields $A \in [\mathcal{H},\mathcal{H}]$, contradicting our assumptions.
Assume now that $A \not\in \mathbb{K} I_n$. Then, there are scalars $\lambda$ and $\mu$ such that $B=\lambda A+\mu I_n$. By Theorem \ref{hypercan}, there are trace zero matrices $M$ and $N$ such that $A=[M,N]$. Thus $\operatorname{tr}((B-\lambda A)M)=\operatorname{tr}((B-\lambda A)N)=0$. Using principle (2) of Section \ref{proofstrategy}, we deduce that $(M,N) \in \mathcal{H}^2$, whence $A \in [\mathcal{H},\mathcal{H}]$.
\vskip 2mm \noindent \textbf{Case 2.} $I_n,A,B$ are linearly independent. \\ By Corollary \ref{LLDavecid}, there are scalars $\lambda$ and $\mu$ together with a $1$-dimensional subspace $\mathcal{D}$ of $\mathbb{K}^n$ such that $\operatorname{Im}(A-\lambda I_n)=\operatorname{Im}(B-\mu I_n)=\mathcal{D}$. In particular, $A-\lambda I_n$ has rank $1$, and hence it is diagonalisable or nilpotent. In any case, $A$ is triangularizable; in the second case, the assumption that $A$ is not similar to $\lambda I_3+E_{2,3}$ leads to $n \geq 4$.
Let $x$ be an eigenvector of $A$. Then, we can extend $x$ into a triple $(x,y,z)$ of linearly independent eigenvectors of $A$ (this uses $n \geq 4$ in the case when $A-\lambda I_n$ is nilpotent). Then, we further extend this triple into a basis $(x,y,z,y_4,\dots,y_n)$ in which $v \mapsto Av$ is upper-triangular. Point (a) in Lemma \ref{prelimlemma} yields $Bx \in \operatorname{span}(x,y)$. With the same line of reasoning, $Bx \in \operatorname{span}(x,z)$, and hence $Bx \in \operatorname{span}(x,y)\cap \operatorname{span}(x,z)=\mathbb{K} x$. Thus, we have proved that every eigenvector of $A$ is an eigenvector of $B$. In particular, $\operatorname{Ker}(A-\lambda I_n)$ is stable under $v \mapsto Bv$, and the resulting endomorphism is a scalar multiple of the identity. This provides us with some $\alpha \in \mathbb{K}$ such that $(B-\alpha I_n)z=0$ for all $z \in \operatorname{Ker}(A-\lambda I_n)$. In particular, $\alpha$ is an eigenvalue of $B$ with multiplicity at least $n-1$, and since $\mu$ shares this property and $n<2(n-1)$, we deduce that $\alpha=\mu$. As $\operatorname{rk}(A-\lambda I_n)=\operatorname{rk}(B-\mu I_n)=1$, we deduce that $\operatorname{Ker}(A-\lambda I_n)=\operatorname{Ker}(B-\mu I_n)$. Thus, $A-\lambda I_n$ and $B-\mu I_n$ are two rank $1$ matrices with the same kernel and the same range, and hence they are linearly dependent. This contradicts the assumption that $I_n,A,B$ be linearly independent, thereby completing the proof. \end{proof}
\subsection{The case when $A=\lambda I_3+E_{2,3}$}\label{specialcasesection}
\begin{lemma}\label{finalcase} Assume that $\# \mathbb{K}>2$. Let $\lambda \in \mathbb{K}$. Assume that $A:=\lambda I_3+E_{2,3}$ has trace zero. Let $B \in \frak{sl}_3(\mathbb{K}) \smallsetminus \{0\}$, and set $\mathcal{H}:=\{B\}^\bot$. Then, $A \in [\mathcal{H},\mathcal{H}]$. \end{lemma}
\begin{proof} We assume that $A \not\in [\mathcal{H},\mathcal{H}]$ and search for a contradiction. By point (a) in Lemma \ref{prelimlemma}, for every basis $\mathbf{B}=(x,y,z)$ of $\mathbb{K}^3$ for which $P_\mathbf{B}^{-1}\,A\,P_\mathbf{B}$ is upper-triangular, we find $Bx \in \operatorname{span}(x,y)$. In particular, for every basis $(x,y)$ of $\operatorname{span}(e_1,e_2)$, the triple $(x,y,e_3)$ qualifies, whence $Bx \in \operatorname{span}(x,y)=\operatorname{span}(e_1,e_2)$. It follows that $\operatorname{span}(e_1,e_2)$ is stable under $B$. As $z \mapsto Az$ is also represented by an upper-triangular matrix in the basis $(e_2,e_3,e_1)$, one finds $Be_2 \in \operatorname{span}(e_2,e_3)$, whence $Be_2 \in \mathbb{K} e_2$. Thus, $B$ has the following shape: $$B=\begin{bmatrix} a & 0 & d \\ b & c & e \\ 0 & 0 & f \end{bmatrix}.$$ From there, we split the discussion into two main cases.
\noindent \textbf{Case 1.} $\lambda = 0$. \\ Using $(e_2,e_1,e_3)$ as our new basis, we are reduced to the case when $$A=\begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} \quad \text{and} \quad B=\begin{bmatrix} ? & ? & ? \\ 0 & ? & ? \\ 0 & 0 & ? \end{bmatrix}.$$ Then, one checks that $[J_2,E_{2,3}]=A$, and $\operatorname{tr}(J_2B)=0=\operatorname{tr}(E_{2,3}B)$. This yields $A \in [\mathcal{H},\mathcal{H}]$, contradicting our assumptions.
\vskip 2mm \noindent \textbf{Case 2.} $\lambda \neq 0$. \\ As we can replace $A$ with $\lambda^{-1}A$, which is similar to $I_3+E_{2,3}$, no generality is lost in assuming that $\lambda=1$. According to principle (2) of Section \ref{proofstrategy}, no further generality is lost in subtracting a scalar multiple of $A$ from $B$, to the effect that we may assume that $f=0$ and $B \neq 0$ (if $B$ is a scalar multiple of $A$, then the same principle combined with the Albert-Muckenhoupt theorem shows that $A \in [\mathcal{H},\mathcal{H}]$). As $\operatorname{tr} B=0$, we find that $$B=\begin{bmatrix} a & 0 & d \\ b & -a & e \\ 0 & 0 & 0 \end{bmatrix}.$$ Note finally that $\mathbb{K}$ must have characteristic $3$ since $\operatorname{tr} A=0$.
\noindent \textbf{Subcase 2.1.} $b \neq 0$. \\ As the problem is unchanged in multiplying $B$ with a non-zero scalar, we can assume that $b=1$. Assume furthermore that $d \neq 0$. Let $(\alpha,\beta)\in \mathbb{K}^2$, and set $$C:=\begin{bmatrix} 0 & 1 & 0 \\ \alpha & 0 & 1 \\ \beta & 0 & 0 \end{bmatrix}.$$ Note that $C$ is a cyclic matrix and $$C^2=\begin{bmatrix} \alpha & 0 & 1 \\ \beta & \alpha & 0 \\ 0 & \beta & 0 \end{bmatrix}.$$ Thus, $\operatorname{tr}(AC)=0$, $\operatorname{tr}(BC)=\beta d+1$, $\operatorname{tr}(AC^2)=2\alpha+\beta=\beta-\alpha$ and $\operatorname{tr}(BC^2)=e \beta$. As $d \neq 0$, we can set $\beta:=-d^{-1}$ and $\alpha:=\beta$, so that $\beta\neq 0$ and
$\operatorname{tr}(A)=\operatorname{tr}(AC)=\operatorname{tr}(AC^2)=0$. Thus, $A \in \operatorname{Im} (\operatorname{ad}_C)$ by Lemma \ref{imadcyclic}, and on the other hand $C \in \mathcal{H}$. As $A \not\in [\mathcal{H},\mathcal{H}]$, it follows that $\mathcal{C}(C) \subset \mathcal{H}$, and hence $\operatorname{tr}(BC^2)=0$. As $\beta \neq 0$, this yields $e=0$.
From there, we can find a non-zero scalar $t$ such that $d+t\,a \neq 0$ (because $\# \mathbb{K}>2$). In the basis $(e_1,e_2,e_3+t\,e_1)$, the respective matrices of $z \mapsto Az$ and $z \mapsto Bz$ are $I_3+E_{2,3}$ and $$\begin{bmatrix} a & 0 & d+t\,a \\ 1 & -a & t \\ 0 & 0 & 0 \end{bmatrix}.$$ As $d+t\,a \neq 0$ and $t \neq 0$, we find a contradiction with the above line of reasoning.
Therefore, $d=0$. Then, the matrices of $z \mapsto Az$ and $z \mapsto Bz$ in the basis $(e_1,e_2,e_3+e_1)$ are, respectively, $I_3+E_{2,3}$ and $\begin{bmatrix} a & 0 & a \\ 1 & -a & e+1 \\ 0 & 0 & 0 \end{bmatrix}$. Applying the above proof in that new situation yields $a=0$. Therefore, $$B=\begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & e \\ 0 & 0 & 0 \end{bmatrix}$$ With $(e_3-e\,e_1,e_1,e_2)$ as our new basis, we are finally left with the case when $$A=\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix} \quad \text{and} \quad B=\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}.$$ Set $$C:=\begin{bmatrix} 1 & 0 & 1 \\ 1 & 1 & 0 \\ 0 & 1 & 0 \end{bmatrix}$$ and note that $C$ is cyclic and $$C^2=\begin{bmatrix} 1 & 1 & 1 \\ -1 & 1 & 1 \\ 1 & 1 & 0 \end{bmatrix}.$$ One sees that $\operatorname{tr}(A)=\operatorname{tr}(AC)=\operatorname{tr}(AC^2)=0$, and hence $A \in \operatorname{Im} (\operatorname{ad}_C)$ by Lemma \ref{imadcyclic}. On the other hand, $\operatorname{tr}(BC)=0$. As $A \not\in [\mathcal{H},\mathcal{H}]$, one should find $\operatorname{tr}(BC^2)=0$, which is obviously false. Thus, we have a final contradiction in that case.
\noindent \textbf{Subcase 2.2.} $b=0$. \\ Assume furthermore that $a \neq 0$. Then, in the basis $(e_1+e_2,e_2,e_3)$, the respective matrices of $z \mapsto Az$ and $z \mapsto Bz$ are $I_3+E_{2,3}$ and $\begin{bmatrix} a & 0 & d \\ -2a & -a & e-d \\ 0 & 0 & 0 \end{bmatrix}$. This sends us back to Subcase 2.1, which leads to another contradiction. Therefore, $a=0$.
If $d=0$, then we see that $B \in \operatorname{span}(I_n,A)$, and hence principle (2) from Section \ref{proofstrategy} combined with Theorem \ref{hypercan} shows that $A \in [\mathcal{H},\mathcal{H}]$, contradicting our assumptions. Thus, $d \neq 0$. Replacing the basis $(e_1,e_2,e_3)$ with $(d\,e_1+e\,e_2,e_2,e_3)$, we are reduced to the case when $$A=\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix} \quad \text{and} \quad B=\begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}.$$ In that case, we set $$C:=\begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & -1 \end{bmatrix}$$ which is a cyclic matrix with $$C^2=\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 1 & -1 & 1 \end{bmatrix},$$ so that $\operatorname{tr}(A)=\operatorname{tr}(AC)=\operatorname{tr}(AC^2)=0$ and $\operatorname{tr}(BC)=0$. As $\operatorname{tr}(BC^2) \neq 0$, this contradicts again the assumption that $A \not\in [\mathcal{H},\mathcal{H}]$. This final contradiction shows that the initial assumption $A \not\in [\mathcal{H},\mathcal{H}]$ was wrong. \end{proof}
\subsection{Conclusion}
Let $A \in \operatorname{M}_n(\mathbb{K})$ and $B \in \operatorname{M}_n(\mathbb{K})\smallsetminus \{0\}$, where $n \geq 3$ and $\# \mathbb{K} \geq 4$. Set $\mathcal{H}:=\{B\}^\bot$ and assume that $\operatorname{tr}(A)=0$ and $\operatorname{tr}(B)=0$. If $A$ is similar to $\lambda I_3+E_{2,3}$, then we know from Lemma \ref{finalcase} and principle (3) of Section \ref{proofstrategy} that $A \in [\mathcal{H},\mathcal{H}]$. Otherwise, if $(I_n,A,B)$ is LLD then we know from Lemma \ref{LLDcase} that $A \in [\mathcal{H},\mathcal{H}]$. Using Lemma \ref{reductionlemma}, we conclude that $A \in [\mathcal{H},\mathcal{H}]$ in every possible situation. This completes the proof of Theorem \ref{dSPcrochet}.
\end{document} | arXiv |
Could a habitable satellite of a gas giant have a stable subsatellite?
I have set a science fiction story on a moon, orbiting a gas giant (which orbits its star at approximately the same orbit as the Earth around the Sun), and given this moon its own satellites. The moon is Earth-sized, and the moon's satellites are on the order of a large asteroid each (perhaps up to half the size of the Moon … Luna if there was any confusion with the number of times I said 'moon' in this post.)
I am a computer nerd, not an astronomer, so I'm wondering if this is something that is at least possible? Could the orbits of the gas giant around the star, the moon around the gas giant, and the subsatellites around the moon remain stable for the requisite billion or so years for life to develop on the moon? Could the moon be habitable?
orbit natural-satellites
Tritium21
I think that it would depend on the exact distances and sizes of the objects involved. – Donald.McLean♦ Jul 30 '14 at 13:29
This is a fun question, so I spend some time thinking about it. This is what I came up with.
Yes, this is certainly possible. Just consider the Solar System itself. We have a massive central body, the Sun, and several tiny subsystems (planets with their moons) orbiting it.
Is it stable?
Again, from the simple observation that the Solar System exists in its current form, we know that such a system can remain more or less stable over a significant portion of the stellar lifetime.
A more difficult problem, I think, is whether such a system can form in the first place.
Could it be habitable?
This is a tricky question, as it critically depends on the distances between the several orbiting bodies. Let's consider the orbital mechanics and reason from what we know of the Solar System.
In the following I've assumed the gas planet is Jupiter and the fictional planetary system consists of a Sun', Jupiter', Earth' and Moon' such that the apostrophe denotes the fake object. These fake objects are identical to the real equivalents, except for their orbital configuration and relative distances. If you want a different system you can follow the same line of reasoning, but plug in different numbers for the fake objects.
Now, we know that the Earth-Moon system exists in a stable orbit around the Sun, which means that at 1 AU the gravitational force of the Sun is not strong enough to destabilize the Moon's orbit around the Earth. From this we can get an upper limit on the gravitational pull that Jupiter' may exert on the Moon', i.e. a lower limit on the orbital radius of the Earth'-Moon' system around Jupiter', by setting the two forces equal: $$\begin{aligned} F_{J'M'} &= F_{SM} \\ \frac{ M_{J'} M_{M'}}{r_{J'M'}^2} &= \frac{ M_{S} M_{M}}{r_{SM}^2} \end{aligned}$$ (the gravitational constant cancels on both sides). Solving this equation we get $$\begin{aligned} r_{J'M'} &= r_{SM} \left( \frac{ M_{J'} M_{M'} }{ M_{S} M_{M} } \right)^{1/2} \\ &= r_{SM} \left( \frac{ M_{J'} }{ M_{S} } \right)^{1/2} \end{aligned}$$ which evaluates to about 0.03 AU. So the Earth'-Moon' system should have an orbital radius around Jupiter' of at least 0.03 AU.
Now the question is whether the Earth'-Moon' system can remain bound in its orbit around Jupiter'. To answer this we can safely ignore the Moon' and assume it remains tightly bound to the Earth'. The potentially destabilizing force on the Jupiter'-Earth' system is again the gravitational pull of the Sun', but now on the Earth'. We can do the same trick as before, but now we need to account for a different mass ratio. So let's require that the ratio of forces be equal in both cases: $$ \frac{F_{SM}}{F_{EM}} = \frac{F_{S'E'}}{F_{J'E'}} $$ With some algebra this gives $$ r_{S'J'} = r_{J'E'} \frac{r_{SM}}{r_{EM}} \left(\frac{M_E}{M_J}\right)^{1/2} $$ which for $r_{J'E'} = 0.03$ AU evaluates to a minimum orbital radius of about 0.7 AU.
This greatly surprised me as it means you can comfortably fit the entire Jupiter'-Earth'-Moon' system inside the habitable zone, meaning that in principle life should be possible.
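If you want to sanity-check the arithmetic, here is a small Python sketch of the two formulas above (the mass ratios and the Earth-Moon distance are the usual rounded values, so treat the outputs as rough checks rather than precise numbers):

```python
# Rough numerical check of the two minimum radii derived above.
# All distances are in AU; masses are expressed as fractions of the Sun's mass.

M_sun = 1.0
M_jup = 1.0 / 1047.0               # Jupiter/Sun mass ratio (~9.5e-4)
M_earth = 1.0 / 333000.0           # Earth/Sun mass ratio (~3.0e-6)

r_sun_moon = 1.0                   # Sun-Moon distance, ~1 AU
r_earth_moon = 384400.0 / 149.6e6  # Earth-Moon distance in AU (~0.00257)

# Minimum orbital radius of the Earth'-Moon' system around Jupiter':
# r_J'M' = r_SM * sqrt(M_J' / M_S)
r_jup_moon = r_sun_moon * (M_jup / M_sun) ** 0.5
print(f"Minimum Earth'-Moon' orbital radius around Jupiter': {r_jup_moon:.3f} AU")

# Minimum orbital radius of Jupiter' around the Sun':
# r_S'J' = r_J'E' * (r_SM / r_EM) * sqrt(M_E / M_J)
r_sun_jup = r_jup_moon * (r_sun_moon / r_earth_moon) * (M_earth / M_jup) ** 0.5
print(f"Minimum Jupiter' orbital radius around the Sun': {r_sun_jup:.2f} AU")
```

Running this reproduces the roughly 0.03 AU and 0.7 AU figures quoted above.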
Of course the seasons on such a planet would be rather extreme.
Michael B.
That is very interesting. Habitability is more complex, though, due in part from gravitational tides and heat from the gas giant. .03 Au is closer to Jupiter than Europa is, and we know those moons experience significant tidal heating. So we have to do a combination of pushing both systems further out, and whether we still end up at habitable distances is unclear. But as far as sci-fi goes, I'd say this puts the matter at "sufficiently plausible". – zibadawa timmy Aug 5 '14 at 14:34
@zibadawatimmy - Actually 0.03 AU is about 2.5 times further out than Callisto and nearly 7 times the orbital radius of Europa. By the very nature of this calculation the tidal effects caused by Jupiter' are -exactly- the same as those the Sun exerts on Earth. You're right about habitability though, although I think the main factor will be the highly variable irradiation received from both the Sun' and Jupiter'. – Michael B. Aug 5 '14 at 17:29
$\begingroup$ Perhaps my brain dropped a digit in the AU conversion. I'm doubtful that the tidal forces are the same across the entire surface, but they would at least be less prominent than I thought. $\endgroup$ – zibadawa timmy Aug 5 '14 at 17:35
List of NP-complete problems
This is a list of some of the more commonly known problems that are NP-complete when expressed as decision problems. As there are hundreds of such problems known, this list is in no way comprehensive. Many problems of this type can be found in Garey & Johnson (1979).
Graphs and hypergraphs
Graphs occur frequently in everyday applications. Examples include biological or social networks, which contain hundreds, thousands and even billions of nodes in some cases (e.g. Facebook or LinkedIn).
• 1-planarity[1]
• 3-dimensional matching[2][3]: SP1
• Bandwidth problem[3]: GT40
• Bipartite dimension[3]: GT18
• Capacitated minimum spanning tree[3]: ND5
• Route inspection problem (also called Chinese postman problem) for mixed graphs (having both directed and undirected edges). The problem is solvable in polynomial time if the graph has all undirected or all directed edges. Variants include the rural postman problem.[3]: ND25, ND27
• Clique cover problem[2][3]: GT17
• Clique problem[2][3]: GT19
• Complete coloring, a.k.a. achromatic number[3]: GT5
• Cycle rank
• Degree-constrained spanning tree[3]: ND1
• Domatic number[3]: GT3
• Dominating set, a.k.a. domination number[3]: GT2
NP-complete special cases include the edge dominating set problem, i.e., the dominating set problem in line graphs. NP-complete variants include the connected dominating set problem and the maximum leaf spanning tree problem.[3]: ND2
• Feedback vertex set[2][3]: GT7
• Feedback arc set[2][3]: GT8
• Graph coloring[2][3]: GT4
• Graph homomorphism problem[3]: GT52
• Graph partition into subgraphs of specific types (triangles, isomorphic subgraphs, Hamiltonian subgraphs, forests, perfect matchings) is known to be NP-complete. Partition into cliques is the same problem as coloring the complement of the given graph. A related problem is to find a partition that is optimal in terms of the number of edges between parts.[3]: GT11, GT12, GT13, GT14, GT15, GT16, ND14
• Hamiltonian completion[3]: GT34
• Hamiltonian path problem, directed and undirected.[2][3]: GT37, GT38, GT39
• Graph intersection number[3]: GT59
• Longest path problem[3]: ND29
• Maximum bipartite subgraph or (especially with weighted edges) maximum cut.[2][3]: GT25, ND16
• Maximum common subgraph isomorphism problem[3]: GT49
• Maximum independent set[3]: GT20
• Maximum Induced path[3]: GT23
• Minimum maximal independent set a.k.a. minimum independent dominating set[4]
NP-complete special cases include the minimum maximal matching problem,[3]: GT10 which is essentially equal to the edge dominating set problem (see above).
• Metric dimension of a graph[3]: GT61
• Metric k-center
• Minimum degree spanning tree
• Minimum k-cut
• Minimum k-spanning tree
• Steiner tree, or Minimum spanning tree for a subset of the vertices of a graph.[2] (The minimum spanning tree for an entire graph is solvable in polynomial time.)
• Modularity maximization[5]
• Monochromatic triangle[3]: GT6
• Pathwidth,[6] or, equivalently, interval thickness, and vertex separation number[7]
• Rank coloring
• k-Chinese postman
• Shortest total path length spanning tree[3]: ND3
• Slope number two testing[8]
• Recognizing string graphs[9]
• Subgraph isomorphism problem[3]: GT48
• Treewidth[6]
• Testing whether a tree may be represented as Euclidean minimum spanning tree
• Vertex cover[2][3]: GT1
Mathematical programming
• 3-partition problem[3]: SP15
• Bin packing problem[3]: SR1
• Bottleneck traveling salesman[3]: ND24
• Uncapacitated facility location problem
• Flow Shop Scheduling Problem
• Generalized assignment problem
• Integer programming. The variant where variables are required to be 0 or 1, called zero-one linear programming, and several other variants are also NP-complete[2][3]: MP1
• Some problems related to Job-shop scheduling
• Knapsack problem, quadratic knapsack problem, and several variants[2][3]: MP9
• Some problems related to Multiprocessor scheduling
• Numerical 3-dimensional matching[3]: SP16
• Open-shop scheduling
• Partition problem[2][3]: SP12
• Quadratic assignment problem[3]: ND43
• Quadratic programming (NP-hard in some cases, P if convex)
• Subset sum problem[3]: SP13
• Variations on the Traveling salesman problem. The problem for graphs is NP-complete if the edge lengths are assumed integers. The problem for points on the plane is NP-complete with the discretized Euclidean metric and rectilinear metric. The problem is known to be NP-hard with the (non-discretized) Euclidean metric.[3]: ND22, ND23
• Vehicle routing problem
Formal languages and string processing
• Closest string[10]
• Longest common subsequence problem over multiple sequences[3]: SR10
• The bounded variant of the Post correspondence problem[3]: SR11
• Shortest common supersequence over multiple sequences[3]: SR8
• Extension of the string-to-string correction problem[11][3]: SR8
Games and puzzles
• Bag (Corral)[12]
• Battleship
• Bulls and Cows, marketed as Master Mind: certain optimisation problems but not the game itself.
• Edge-matching puzzles
• Fillomino[13]
• (Generalized) FreeCell[14]
• Goishi Hiroi
• Hashiwokakero[15]
• Heyawake[16]
• (Generalized) Instant Insanity[3]: GP15
• Kakuro (Cross Sums)[17]
• Kingdomino[18]
• Kuromasu (also known as Kurodoko)[19]
• LaserTank[20]
• Lemmings (with a polynomial time limit)[21]
• Light Up[22]
• Masyu[23]
• Minesweeper Consistency Problem[24] (but see Scott, Stege, & van Rooij[25])
• Grundy Number of a directed graph.[3]: GT56
• Nonograms
• Numberlink
• Nurikabe[26]
• (Generalized) Pandemic[27]
• Peg solitaire
• n-Queens completion
• Optimal solution for the N×N×N Rubik's Cube[28]
• SameGame
• (Generalized) Set[29]
• Shakashaka
• Slither Link on a variety of grids[30][31][32]
• (Generalized) Sudoku[30][33]
• Tatamibari
• Tentai Show
• Problems related to Tetris[34]
• Verbal arithmetic
Other
• Berth allocation problem[35]
• Betweenness
• Assembling an optimal Bitcoin block.[36]
• Boolean satisfiability problem (SAT).[2][3]: LO1 There are many variations that are also NP-complete. An important variant is where each clause has exactly three literals (3SAT), since it is used in the proof of many other NP-completeness results.[3]: p. 48
• Circuit satisfiability problem
• Conjunctive Boolean query[3]: SR31
• Cyclic ordering[37]
• Exact cover problem. Remains NP-complete for 3-sets. Solvable in polynomial time for 2-sets (this is a matching).[2][3]: SP2
• Finding the global minimum solution of a Hartree-Fock problem[38]
• Upward planarity testing[8]
• Hospitals-and-residents problem with couples
• Knot genus[39]
• Latin square completion (the problem of determining if a partially filled square can be completed)
• Maximum 2-satisfiability[3]: LO5
• Maximum volume submatrix – Problem of selecting the best conditioned subset of a larger $m\times n$ matrix. This class of problem is associated with Rank revealing QR factorizations and D optimal experimental design.[40]
• Minimal addition chains for sequences.[41] The complexity of minimal addition chains for individual numbers is unknown.[42]
• Modal logic S5-Satisfiability
• Pancake sorting distance problem for strings[43]
• Solubility of two-variable quadratic polynomials over the integers.[44] Given positive integers $\textstyle A,B,C\geq 0$, decide existence of positive integers $x,y$ such that $Ax^{2}+By-C=0$
• By the same article[44] existence of bounded modular square roots with arbitrarily composite modulus. Given positive integers $\textstyle A,B,C\geq 0$, decide existence of an integer $x\in [0,C]$ such that $x^{2}\equiv A{\bmod {B}}$. The problem remains NP-complete even if a prime factorization of $B$ is provided.
• Second order instantiation
• Serializability of database histories[3]: SR33
• Set cover (also called minimum cover problem) This is equivalent, by transposing the incidence matrix, to the hitting set problem.[2][3]: SP5, SP8
• Set packing[2][3]: SP3
• Set splitting problem[3]: SP4
• Scheduling to minimize weighted completion time
• Block Sorting[45] (Sorting by Block Moves)
• Sparse approximation
• Variations of the Steiner tree problem. Specifically, with the discretized Euclidean metric and the rectilinear metric. The problem is known to be NP-hard with the (non-discretized) Euclidean metric.[3]: ND13
• Three-dimensional Ising model[46]
See also
• Existential theory of the reals#Complete problems
• Karp's 21 NP-complete problems
• List of PSPACE-complete problems
• Reduction (complexity)
Notes
1. Grigoriev & Bodlaender (2007).
2. Karp (1972)
3. Garey & Johnson (1979)
4. Minimum Independent Dominating Set
5. Brandes, Ulrik; Delling, Daniel; Gaertler, Marco; Görke, Robert; Hoefer, Martin; Nikoloski, Zoran; Wagner, Dorothea (2006), Maximizing Modularity is hard, arXiv:physics/0608255, Bibcode:2006physics...8255B
6. Arnborg, Corneil & Proskurowski (1987)
7. Kashiwabara & Fujisawa (1979); Ohtsuki et al. (1979); Lengauer (1981).
8. Garg, Ashim; Tamassia, Roberto (1995). "On the computational complexity of upward and rectilinear planarity testing". Lecture Notes in Computer Science. Vol. 894/1995. pp. 286–297. doi:10.1007/3-540-58950-3_384. ISBN 978-3-540-58950-1.
9. Schaefer, Marcus; Sedgwick, Eric; Štefankovič, Daniel (September 2003). "Recognizing string graphs in NP". Journal of Computer and System Sciences. 67 (2): 365–380. doi:10.1016/S0022-0000(03)00045-X.
10. Lanctot, J. Kevin; Li, Ming; Ma, Bin; Wang, Shaojiu; Zhang, Louxin (2003), "Distinguishing string selection problems", Information and Computation, 185 (1): 41–55, doi:10.1016/S0890-5401(03)00057-9, MR 1994748
11. Wagner, Robert A. (May 1975). "On the complexity of the Extended String-to-String Correction Problem". Proceedings of seventh annual ACM symposium on Theory of computing - STOC '75. pp. 218–223. doi:10.1145/800116.803771. ISBN 9781450374194. S2CID 18705107.
12. Friedman, Erich. "Corral Puzzles are NP-complete" (PDF). Retrieved 17 August 2021.
13. Yato, Takauki (2003). Complexity and Completeness of Finding Another Solution and its Application to Puzzles. CiteSeerX 10.1.1.103.8380.
14. Malte Helmert, Complexity results for standard benchmark domains in planning, Artificial Intelligence 143(2):219-262, 2003.
15. "HASHIWOKAKERO Is NP-Complete".
16. Holzer & Ruepp (2007)
17. Takahiro, Seta (5 February 2002). "The complexities of puzzles, cross sum and their another solution problems (ASP)" (PDF). Retrieved 18 November 2018.
18. Nguyen, Viet-Ha; Perrot, Kévin; Vallet, Mathieu (24 June 2020). "NP-completeness of the game KingdominoTM". Theoretical Computer Science. 822: 23–35. doi:10.1016/j.tcs.2020.04.007. ISSN 0304-3975. S2CID 218552723.
19. Kölker, Jonas (2012). "Kurodoko is NP-complete" (PDF). Journal of Information Processing. 20 (3): 694–706. doi:10.2197/ipsjjip.20.694. S2CID 46486962. Archived from the original (PDF) on 12 February 2020.
20. Alexandersson, Per; Restadh, Petter (2020). "LaserTank is NP-Complete". Mathematical Aspects of Computer and Information Sciences. Lecture Notes in Computer Science. Springer International Publishing. 11989: 333–338. arXiv:1908.05966. doi:10.1007/978-3-030-43120-4_26. ISBN 978-3-030-43119-8. S2CID 201058355.
21. Cormode, Graham (2004). The hardness of the lemmings game, or Oh no, more NP-completeness proofs (PDF).
22. Light Up is NP-Complete
23. Friedman, Erich (27 March 2012). "Pearl Puzzles are NP-complete". Archived from the original on 4 February 2012.
24. Kaye (2000)
25. Allan Scott, Ulrike Stege, Iris van Rooij, Minesweeper may not be NP-complete but is hard nonetheless, The Mathematical Intelligencer 33:4 (2011), pp. 5–17.
26. Holzer, Markus; Klein, Andreas; Kutrib, Martin (2004). "On The NP-Completeness of The NURIKABE Pencil Puzzle and Variants Thereof" (PDF). Proceedings of the 3rd International Conference on Fun with Algorithms. S2CID 16082806. Archived from the original (PDF) on 11 February 2020.
27. Nakai, Kenichiro; Takenaga, Yasuhiko (2012). "NP-Completeness of Pandemic". Journal of Information Processing. 20 (3): 723–726. doi:10.2197/ipsjjip.20.723. ISSN 1882-6652.
28. Demaine, Erik; Eisenstat, Sarah; Rudoy, Mikhail (2018). Solving the Rubik's Cube Optimally is NP-complete. 35th Symposium on Theoretical Aspects of Computer Science (STACS 2018). doi:10.4230/LIPIcs.STACS.2018.24.
29. Chaudhuri, Kamalika; Godfrey, Brighten; Ratajczak, David; Wee, Hoeteck (2003). "On the Complexity of the Game of Set" (PDF).
30. Sato, Takayuki; Seta, Takahiro (1987). Complexity and Completeness of Finding Another Solution and Its Application to Puzzles (PDF). International Symposium on Algorithms (SIGAL 1987).
31. Nukui; Uejima (March 2007). "ASP-Completeness of the Slither Link Puzzle on Several Grids". Ipsj Sig Notes. 2007 (23): 129–136.
32. Kölker, Jonas (2012). "Selected Slither Link Variants are NP-complete". Journal of Information Processing. 20 (3): 709–712. doi:10.2197/ipsjjip.20.709.
33. A SURVEY OF NP-COMPLETE PUZZLES, Section 23; Graham Kendall, Andrew Parkes, Kristian Spoerer; March 2008. (icga2008.pdf)
34. Demaine, Eric D.; Hohenberger, Susan; Liben-Nowell, David (25–28 July 2003). Tetris is Hard, Even to Approximate (PDF). Proceedings of the 9th International Computing and Combinatorics Conference (COCOON 2003). Big Sky, Montana.
35. Lim, Andrew (1998), "The berth planning problem", Operations Research Letters, 22 (2–3): 105–110, doi:10.1016/S0167-6377(98)00010-8, MR 1653377
36. J. Bonneau, "Bitcoin mining is NP-hard"
37. Galil, Zvi; Megiddo, Nimrod (October 1977). "Cyclic ordering is NP-complete". Theoretical Computer Science. 5 (2): 179–182. doi:10.1016/0304-3975(77)90005-6.
38. Whitfield, James Daniel; Love, Peter John; Aspuru-Guzik, Alán (2013). "Computational complexity in electronic structure". Phys. Chem. Chem. Phys. 15 (2): 397–411. arXiv:1208.3334. Bibcode:2013PCCP...15..397W. doi:10.1039/C2CP42695A. PMID 23172634. S2CID 12351374.
39. Agol, Ian; Hass, Joel; Thurston, William (19 May 2002). "3-manifold knot genus is NP-complete". Proceedings of the thiry-fourth annual ACM symposium on Theory of computing. STOC '02. New York, NY, USA: Association for Computing Machinery. pp. 761–766. arXiv:math/0205057. doi:10.1145/509907.510016. ISBN 978-1-58113-495-7. S2CID 10401375.
40. "Archived copy" (PDF). www.meliksah.edu.tr. Archived from the original (PDF) on 3 February 2015. Retrieved 12 January 2022.{{cite web}}: CS1 maint: archived copy as title (link)
41. Peter Downey, Benton Leong, and Ravi Sethi. "Computing Sequences with Addition Chains" SIAM J. Comput., 10(3), 638–646, 1981
42. D. J. Bernstein, "Pippinger's exponentiation algorithm" (draft)
43. Hurkens, C.; Iersel, L. V.; Keijsper, J.; Kelk, S.; Stougie, L.; Tromp, J. (2007). "Prefix reversals on binary and ternary strings". SIAM J. Discrete Math. 21 (3): 592–611. arXiv:math/0602456. doi:10.1137/060664252.
44. Manders, Kenneth; Adleman, Leonard (1976). "NP-complete decision problems for quadratic polynomials". Proceedings of the eighth annual ACM symposium on Theory of computing - STOC '76. pp. 23–29. doi:10.1145/800113.803627. ISBN 9781450374149. S2CID 18885088.
45. Bein, W. W.; Larmore, L. L.; Latifi, S.; Sudborough, I. H. (1 January 2002). "Block sorting is hard". Proceedings International Symposium on Parallel Architectures, Algorithms and Networks. I-SPAN'02. pp. 307–312. doi:10.1109/ISPAN.2002.1004305. ISBN 978-0-7695-1579-3. S2CID 32222403.
46. Barry Arthur Cipra, "The Ising Model Is NP-Complete", SIAM News, Vol 33, No 6.
References
General
• Garey, Michael R.; Johnson, David S. (1979), Computers and Intractability: A Guide to the Theory of NP-Completeness, W. H. Freeman, ISBN 0-7167-1045-5. This book is a classic, developing the theory, then cataloguing many NP-Complete problems.
• Cook, S.A. (1971). "The complexity of theorem proving procedures". Proceedings, Third Annual ACM Symposium on the Theory of Computing, ACM, New York. pp. 151–158. doi:10.1145/800157.805047.
• Karp, Richard M. (1972). "Reducibility among combinatorial problems". In Miller, Raymond E.; Thatcher, James W. (eds.). Complexity of Computer Computations. Plenum. pp. 85–103.
• Dunne, P.E. "An annotated list of selected NP-complete problems". COMP202, Dept. of Computer Science, University of Liverpool. Retrieved 21 June 2008.
• Crescenzi, P.; Kann, V.; Halldórsson, M.; Karpinski, M.; Woeginger, G. "A compendium of NP optimization problems". KTH NADA, Stockholm. Retrieved 21 June 2008.
• Dahlke, K. "NP-complete problems". Math Reference Project. Retrieved 21 June 2008.
Specific problems
• Friedman, E (2002). "Pearl puzzles are NP-complete". Stetson University, DeLand, Florida. Retrieved 21 June 2008.
• Grigoriev, A; Bodlaender, H L (2007). "Algorithms for graphs embeddable with few crossings per edge". Algorithmica. 49 (1): 1–11. CiteSeerX 10.1.1.61.3576. doi:10.1007/s00453-007-0010-x. MR 2344391. S2CID 8174422.
• Hartung, S; Nichterlein, A (2012). How the World Computes. Lecture Notes in Computer Science. Vol. 7318. Springer, Berlin, Heidelberg. pp. 283–292. CiteSeerX 10.1.1.377.2077. doi:10.1007/978-3-642-30870-3_29. ISBN 978-3-642-30869-7. S2CID 6112925.
• Holzer, Markus; Ruepp, Oliver (2007). "The Troubles of Interior Design–A Complexity Analysis of the Game Heyawake" (PDF). Proceedings, 4th International Conference on Fun with Algorithms, LNCS 4475. Springer, Berlin/Heidelberg. pp. 198–212. doi:10.1007/978-3-540-72914-3_18. ISBN 978-3-540-72913-6.
• Kaye, Richard (2000). "Minesweeper is NP-complete". Mathematical Intelligencer. 22 (2): 9–15. doi:10.1007/BF03025367. S2CID 122435790. Further information available online at Richard Kaye's Minesweeper pages.
• Kashiwabara, T.; Fujisawa, T. (1979). "NP-completeness of the problem of finding a minimum-clique-number interval graph containing a given graph as a subgraph". Proceedings. International Symposium on Circuits and Systems. pp. 657–660.
• Ohtsuki, Tatsuo; Mori, Hajimu; Kuh, Ernest S.; Kashiwabara, Toshinobu; Fujisawa, Toshio (1979). "One-dimensional logic gate assignment and interval graphs". IEEE Transactions on Circuits and Systems. 26 (9): 675–684. doi:10.1109/TCS.1979.1084695.
• Lengauer, Thomas (1981). "Black-white pebbles and graph separation". Acta Informatica. 16 (4): 465–475. doi:10.1007/BF00264496. S2CID 19415148.
• Arnborg, Stefan; Corneil, Derek G.; Proskurowski, Andrzej (1987). "Complexity of finding embeddings in a k-tree". SIAM Journal on Algebraic and Discrete Methods. 8 (2): 277–284. doi:10.1137/0608024.
• Cormode, Graham (2004). "The hardness of the lemmings game, or Oh no, more NP-completeness proofs". Proceedings of Third International Conference on Fun with Algorithms (FUN 2004). pp. 65–76.
External links
• A compendium of NP optimization problems
• Graph of NP-complete Problems
\begin{document}
\newcounter{num}
\setcounter{num}{0}
\newcommand\titleParagraph[1] {\ifodd\value{num}{\textit{#1}}\fi }
\newcommand{\jg}[1]{\textcolor{purple}{#1}}
\newcommand{\fm}[1]{{#1}} \newcommand{\mm}[1]{{#1}} \newcommand{\cb}[1]{\textcolor{green}{#1}}
\newcommand{\bra}[1]{\ensuremath{\langle#1|}}
\newcommand{\ket}[1]{\ensuremath{|#1\rangle}}
\newcommand{\ketbra}[2]{\ensuremath{|#1\rangle \langle #2|}}
\begin{abstract} \mm{Bosonic cat qubits stabilized by two-photon driven dissipation benefit from exponential suppression of bit-flip errors and an extensive set of gates preserving this protection. These properties make them promising building blocks of a hardware-efficient and fault-tolerant quantum processor. In this paper, we propose a performance optimization of the repetition cat code architecture using fast but noisy CNOT gates for stabilizer measurements. This optimization leads to high thresholds for the physical figure of merit, given as the ratio between intrinsic single-photon loss rate of the bosonic mode and the engineered two-photon loss rate, as well as an improved scaling below threshold of the required overhead, to reach an expected level of logical error rate. Relying on the specific error models for cat qubit operations, this optimization exploits fast parity measurements, using accelerated low-fidelity CNOT gates, combined with fast ancilla parity-check qubits. The significant enhancement in the performance is explained by: 1- the highly asymmetric error model of cat qubit CNOT gates with a major component on control (ancilla) qubits, and 2- the robustness of the repetition cat code error correction performance in presence of the leakage induced by fast operations. In order to demonstrate these performances, we develop a method to sample the repetition code under circuit-level noise that also takes into account cat qubit state leakage.}
\end{abstract}
\newcommand{High-performance repetition cat code using fast noisy operations}{High-performance repetition cat code using fast noisy operations}
\title{High-performance repetition cat code using fast noisy operations}
\begin{comment}
\author[1, 2]{Francois-Marie Le R\'egent} \orcid{0000-0002-5229-7155}
\author[2]{Camille Berdou}
\author[2]{Zaki Leghtas}
\author[1]{J\'er\'emie Guillaud} \orcid{0000-0001-6507-9344}
\author[2]{Mazyar Mirrahimi} \orcid{0000-0001-9471-6031}
\affil[1]{Alice\&Bob, 53 boulevard du Général Martial Valin, 75015 Paris}
\affil[2]{Laboratoire de Physique de l'École Normale Supérieure, École Normale Supérieure, Centre Automatique et Systèmes, Mines Paris, Université PSL, Sorbonne Université, CNRS, Inria, 75005 Paris} \end{comment}
\author{Francois-Marie Le R\'egent}
\affiliation{Alice\&Bob, 53 boulevard du Général Martial Valin, 75015 Paris} \affiliation{Laboratoire de Physique de l'Ecole Normale Supérieure, Ecole normale supérieure, MINES Paris, Université PSL, Sorbonne Université, CNRS, Inria, 75005 Paris}
\author{Camille Berdou} \affiliation{Laboratoire de Physique de l'Ecole Normale Supérieure, Ecole normale supérieure, MINES Paris, Université PSL, Sorbonne Université, CNRS, Inria, 75005 Paris}
\author{Zaki Leghtas} \affiliation{Laboratoire de Physique de l'Ecole Normale Supérieure, Ecole normale supérieure, MINES Paris, Université PSL, Sorbonne Université, CNRS, Inria, 75005 Paris}
\author{J\'er\'emie Guillaud}
\affiliation{Alice\&Bob, 53 boulevard du Général Martial Valin, 75015 Paris}
\author{Mazyar Mirrahimi}
\affiliation{Laboratoire de Physique de l'Ecole Normale Supérieure, Ecole normale supérieure, MINES Paris, Université PSL, Sorbonne Université, CNRS, Inria, 75005 Paris}
\maketitle
\section{Introduction}
\mm{Bosonic encoding of quantum information is} expected to lower the number of physical components required to perform quantum computations at scale~\cite{Joshi2021, Cai2021}. The crux of bosonic architectures is to leverage the infinite dimensional Hilbert space of a quantum harmonic oscillator (QHO) to implement some of the redundancy required for quantum error correction in a single physical component, an approach that has been coined ``hardware-efficient''~\cite{Mirrahimi2014}. Although these architectures are theoretically promising, operating such a concatenated ``bosonic code + discrete variable (DV) code'' below the threshold of the DV code is still experimentally challenging for current state-of-the-art superconducting platforms, thereby motivating subsequent research to improve the theoretical performance of these proposals.
In this work, we focus on the ``cat qubit + repetition code'' architecture~\cite{Guillaud_2019, Chamberland2022} with the objective of optimizing its error correcting capability. \mm{In this approach, the state of a QHO is confined through an engineered two-photon driven dissipative process to a two-dimensional subspace spanned by two coherent states $\ket{\pm\alpha}$, or equivalently by the coherent superpositions of those two states, the Schr\"odinger cat states: \begin{equation} \ket{\mathcal{C}_\alpha^\pm} := \mathcal{N}_\pm (\ket{\alpha} \pm \ket{-\alpha}) \label{eq:encoding} \end{equation}
where $\mathcal{N}_\pm = (2(1\pm \exp(-2 |\alpha|^2)))^{-1/2} $ are normalizing constants. The computational states of this so-called cat qubit are given by \begin{align*} \ket{0}_C &= (\ket{\mathcal{C}_\alpha^+} +
\ket{\mathcal{C}_\alpha^-})/\sqrt{2} = \ket{\alpha} +
\mathcal{O}(e^{-2|\alpha|^2}) \\ \ket{1}_C &= (\ket{\mathcal{C}_\alpha^+} -
\ket{\mathcal{C}_\alpha^-})/\sqrt{2} = \ket{-\alpha} +
\mathcal{O}(e^{-2|\alpha|^2}). \end{align*}
The engineered two-photon driven dissipative process can be effectively modelled by a Lindblad term of the form $L_{2\text{ph}}=\sqrt{\kappa_2}(\hat a^2-\alpha^2)$ (we refer to~\cite{Mirrahimi2014} for further details on how such a process can be engineered in a superconducting platform and how it stabilizes the cat qubit manifold). This engineered process will be in competition with other coherent and incoherent processes tending to leak the QHO out of the cat qubit manifold or causing a drift in the manifold. Among these processes, the dominant one is the undesired single-photon loss, modelled by a Lindblad term of the form $L_{1\text{ph}}=\sqrt{\kappa_1}\hat a$.
}
\mm{The cat qubit stabilized by the two-photon driven dissipative process benefits from an intrinsic protection against bit-flip errors where the rate of such errors is exponentially suppressed with the mean number of photons $|\alpha|^2$~\cite{Lescanne2020,Berdou2022}. Relying on this protection, and the fact that it is possible to perform an extensive set of quantum operations preserving such an error bias, two of us proposed in~\cite{Guillaud_2019} to concatenate such an encoding with a repetition code to conceive a fault-tolerant architecture with a universal set of logical gates. The elementary figure of merit that quantifies the performance of this architecture is given by $\eta=\kappa_1/\kappa_2$, where $\kappa_1$ is the rate of undesired single-photon loss, and $\kappa_2$ corresponds to the rate of the engineered two-photon loss mechanism, stabilizing the cat qubit. More precisely,} the performance of this architecture is quantified both in terms of \mm{a threshold for this figure of merit $\eta$ (called $\eta_{\text{th}}$ in this paper) below which the concatenation with the repetition code leads to an exponential suppression of phase-flip errors, and in terms of the physical resources required to operate the architecture at a given target error rate $\epsilon_{\mathrm{L}}$ for a fixed value of $\eta<\eta_{\text{th}}$.}
\mm{In order to optimize the operation of this architecture, we investigate the acceleration of the CNOT gates involved in stabilizer measurements. While the driven dissipative implementation of the CNOT gate for cat qubits, as detailed in~\cite{Guillaud_2019, Chamberland2022} and recalled in the next section, ensures the preservation of exponential bit-flip suppression, its acceleration can lead to a significant increase in phase-flip error probability, decreasing the gate fidelity, and also to some leakage out of the cat qubit subspace. Adding appropriate ``refreshing steps'' in the error correction logical circuit, countering the leakage of the cat qubit, we show that, despite the degraded gate fidelity, the performance of error correction is significantly enhanced due to a faster measurement cycle. This enhancement is explained by two facts: first, the phase-flip error probabilities on the control (measurement) and target (data) cat qubits are highly asymmetric, with the major contribution on the measurement qubits, and second, the residual leakage merely leads to short-range (both in time and space) measurement error correlations that only marginally affect the performance of error correction. Through careful Monte Carlo simulations, with a circuit-level error model taking into account the impact of leakage, we show that operating the code in this fast gate regime achieves close-to-optimal performance, and that an application-wise relevant error probability of $\epsilon_{\mathrm{L}} = 10^{-10}$ per error correction cycle may be achieved with a repetition code of distance $25$ under the realistic hardware assumption $\eta = 10^{-3}$.}
The paper is structured as follows. Section~\ref{sec:Motivation} summarizes the common principle exploited throughout this work. More specifically, we show how ``faster yet noisier'' gates are desirable in the context of error correction because of the high resilience of stabilizer codes to measurement errors (compared to errors damaging the encoded data); but lead in the context of dissipative cat qubits to important state leakage that needs to be carefully addressed. In Section~\ref{sec:fastgates}, \mm{we thoroughly analyze the impact of simple acceleration of CNOT gates on the error correction performance}. In Section~\ref{sec:Asymmetry}, \mm{we further investigate the idea of asymmetry in the measurement and data cat qubit errors, by separating the time scales associated to their dynamics. More precisely, the idea that we exploit here is to use fast measurement cat qubits with large single-photon and two-photon dissipation rates $\kappa_1$ and $\kappa_2$, and slow data cat qubits with smaller rates $\kappa_1$ and $\kappa_2$, but a similar ratio $\eta=\kappa_1/\kappa_2$. }
\section{\mm{Repetition cat qubit, error model and leakage}} \label{sec:Motivation}
\mm{Similarly to the case of surface code with conventional qubits (e.g. transmons)~\cite{Fowler2012}, the error-correction performance of the ``cat qubit+repetition code'' architecture is mainly determined via the error probabilities of the CNOT gates involved in stabilizer measurements. In~\cite{Guillaud_2019}, inspired by~\cite{Puri2020}, two of us proposed an adiabatic implementation of the CNOT gate for the cat qubits stabilized by two-photon dissipation. While this implementation preserves the exponential suppression of bit-flip errors, the phase-flip errors can occur both due to intrinsic loss mechanism of the QHO and also due to higher order corrections to the adiabatic process. More precisely, on the one hand, the implementation needs to be slow enough with respect to the two-photon dissipation rate $\kappa_2$, so that the non-adiabatic effects do not induce significant phase-flip errors. On the other hand, this slow implementation leads to further phase-flip errors due to intrinsic single-photon loss of the QHO at rate $\kappa_1$. This leads to stringent requirements for the figure of merit $\eta=\kappa_1/\kappa_2$ to ensure a high-fidelity operation. }
\mm{Over the past few years, a number of proposals have targeted this issue~\cite{Xu2021,Putterman2022,Gautier2022,Ruiz2022,Xu2022Squeezed}. By various alterations of the dissipative process or the addition of Hamiltonian confinement, these references aim at accelerating the operation of the bias-preserving gates, thus improving their fidelity. These modifications, however, usually come at the expense of more complex implementations, and sometimes also at the expense of losing some of the protection against bit-flip errors. As mentioned in the introduction, here we consider the possibility of using low-fidelity operations, obtained by accelerating the gates of the original implementation~\cite{Guillaud_2019}, and rather examine the impact of these fast low-fidelity gates at the logical level~\cite{Xu2021}. In this section, we start by recalling the proposal of~\cite{Guillaud_2019} for realizing the CNOT gate and the associated error models. We next discuss the expected performance of error correction at the repetition code level. Finally, we also provide details on how the leakage induced by the finite gate time could limit this performance. }
\subsection{\mm{CNOT gate and error model}\label{ssec:CNOTerrormodel}}
\mm{In this paper, we denote the control cat qubit of the CNOT gate by the index $a$ (standing for the ancilla qubits for the repetition code stabilizer measurements) and the target one by the index $d$ (standing for the data qubits).} As proposed in~\cite{Guillaud_2019}, the CNOT gate can be implemented with a time-varying dissipative mechanism modelled by the master equation \begin{equation}
\frac{d\hat \rho}{dt}=\kappa_2\mathcal{D}[\hat L_a]\hat\rho+\kappa_2\mathcal{D}[\hat L_d(t)]\hat\rho-i[\hat H,\hat\rho]. \end{equation} Here $\mathcal{D}[\hat L](\hat\rho)=\hat{L} \hat{\rho} \hat{L}^{\dagger}-\frac{1}{2}\left(\hat{L}^{\dagger} \hat{L} \hat{\rho}+\hat{\rho} \hat{L}^{\dagger} \hat{L}\right)$, and $\hat L_a=\hat a^2-\alpha^2$ corresponds to regular two-photon driven dissipation for the ancilla mode $\hat a$ pinning its state to the manifold of cat states Span$\{\ket{\mathcal{C}_\alpha^\pm}\}$, and $$ \hat L_d(t)=\hat{d}^{2}-\alpha^{2}+\frac{\alpha}{2}\left(e^{2 i \frac{\pi}{T} t}-1\right)\left(\hat{a}-\alpha\right) $$ with $T$ the duration of CNOT gate, ensures a $\pi$-rotation of the target data mode $\hat d$ conditioned on the state of ancilla being in $\ket{-\alpha}$. The last term $$ \hat{H} =\frac{\pi}{4 \alpha T}\left(\hat a+\hat{a}^{\dagger}-2 \alpha\right)\left(\hat{d}^{\dagger} \hat{d}-\alpha^{2}\right) $$ corresponds to a feed-forward Hamiltonian added to reduce the non-adiabatic errors induced by finite gate time.
The error model for such an implementation of the CNOT gate is detailed in~\cite{Guillaud_2019,Chamberland2022} and is briefly recalled here. This error model takes into account the non-adiabatic effects due to the finite gate time, as well as the errors induced by the undesired single photon decay of the ancilla and data modes. This undesired decay is modelled by the additional Lindbladian super-operators $\kappa_1\mathcal{D}[\hat a]$ and $\kappa_1\mathcal{D}[\hat d]$. As discussed in~\cite{Guillaud_2019, Chamberland2022}, the addition of other noise mechanisms such as thermal excitations or photon dephasing has little impact on these error models.
While the bit-flip type errors remain exponentially suppressed as $\exp(-2|\alpha|^2)$, the probabilities of the phase-flip errors are given by
\begin{equation}
p_{\mathrm{Z_a}} = |\alpha|^{2} \kappa_1 T + 0.159\frac{1}{|\alpha|^2 \kappa_{2} T}
\label{eq:error_model_symZ1}
\end{equation}
\begin{equation}
p_{\mathrm{Z_d}} = p_{\mathrm{Z_a Z_d}} = \frac{1}{2}|\alpha|^{2} \kappa_1 T.
\label{eq:error_model_symZ1Z2}
\end{equation}
On the one hand, the probability of ancilla phase-flip errors $p_{\mathrm{Z_a}}$ comprises two parts. The first term corresponds to the errors induced by single photon loss and is proportional to the mean photon number of the cat state $|\alpha|^2$ and the gate duration $T$, and the second term to the errors induced by non-adiabatic effects. As analyzed in~\cite{Chamberland2022}, the probability of these errors scales inversely with $|\alpha|^2$ and $T$. The proportionality coefficient $0.159$ is obtained via a numerical fit, close to the estimated analytical value of $\pi^2/64$ derived in~\cite{Chamberland2022}. On the other hand, the data phase-flip errors $\mathrm{Z_d}$, as well as simultaneous data and ancilla errors $\mathrm{Z_a Z_d}$, are only induced by the single photon loss and their probability is therefore simply proportional to $|\alpha|^2T$.
The gate time $T$ that minimizes the total phase-flip error probability of the CNOT gate $p_{\mathrm{CNOT}} = p_{\mathrm{Z_a}} + p_{\mathrm{Z_d}} + p_{\mathrm{Z_aZ_d}}$ is $T^{\star}=0.282/(|\alpha|^2\sqrt{\kappa_1\kappa_2})$, which corresponds to a CNOT error probability
\begin{equation}
p_{\mathrm{CNOT}}^{*}= 1.13\sqrt{\frac{\kappa_1}{\kappa_2}}.
\label{eq:pCNOT_opti}
\end{equation}
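As a consistency check, this optimum follows directly from the error model of Eqs.~\eqref{eq:error_model_symZ1} and~\eqref{eq:error_model_symZ1Z2}: the total phase-flip probability reads $p_{\mathrm{CNOT}}(T)=2|\alpha|^{2}\kappa_{1}T+0.159/(|\alpha|^{2}\kappa_{2}T)$, and setting $\mathrm{d}p_{\mathrm{CNOT}}/\mathrm{d}T=0$ yields
\begin{equation*}
T^{\star}=\sqrt{\frac{0.159}{2}}\,\frac{1}{|\alpha|^{2}\sqrt{\kappa_{1}\kappa_{2}}}\approx\frac{0.282}{|\alpha|^{2}\sqrt{\kappa_{1}\kappa_{2}}},\qquad
p_{\mathrm{CNOT}}(T^{\star})=2\sqrt{2\times 0.159}\,\sqrt{\frac{\kappa_{1}}{\kappa_{2}}}\approx 1.13\sqrt{\frac{\kappa_{1}}{\kappa_{2}}}.
\end{equation*}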
\subsection{Expected performance of error correction~\label{ssec:QEC_Perf}}
Here we study the CNOT gate from the perspective of its application in error syndrome measurements for phase-flip error correction. In this aim, the references~\cite{Guillaud_2021,Chamberland2022} perform Monte-Carlo simulations of the error correction logical circuit. These simulations are performed with a circuit-level error model in which all operations (gates, state preparations and measurements, and idling times) are noisy.
More precisely, the CNOT gate errors are given by Eqs. \ref{eq:error_model_symZ1} and \ref{eq:error_model_symZ1Z2} with $T=T^{\star}$. Furthermore, assuming that the ancilla preparation and measurement can be achieved in the same time $T^{\star}$, each ancilla preparation is accompanied by a phase-flip error probability of $|\alpha|^2\kappa_1T^{\star}=0.282\sqrt{\kappa_1/\kappa_2}$ and similarly, each ancilla measurement is faulty with probability $0.282\sqrt{\kappa_1/\kappa_2}$. Finally, the idle time during ancilla measurement or preparation is accompanied by a phase-flip error probability of $0.282\sqrt{\kappa_1/\kappa_2}$ in data qubits. The simulation results are summarized in Figs.~\ref{fig:PRA_fitted} and~\ref{fig:PRA_overhead}.
Denoting by $\eta := \kappa_1/\kappa_2$ the figure of merit for stabilized cat qubits, we roughly expect the scaling of the logical error probability to be $p_{\mathrm{Z_L}}\propto (\eta/\eta_{\text{th}})^{d/4}$, where $\eta_{\text{th}}$ refers to the fault-tolerance threshold and $d$ is the code distance. The power of $d/4$, instead of $d/2$, is explained by the fact that the physical error probabilities scale with $\sqrt{\eta}$. This expectation is confirmed by fitting the numerical results of Fig.~\ref{fig:PRA_fitted} to the ansatz \begin{equation} p_{\mathrm{Z_{L}}}=ad\left(\frac{\eta}{\eta_{\text{th}}}\right)^{cd} \label{eq:fitZ_PRA} \end{equation} where we obtain the prefactor $a=7.7 \times 10^{-2}$, the exponential scaling $c=0.258$ and the phase-flip threshold $\eta_{\text{th}} =7.61 \times 10^{-3}$. These numerical results are obtained by Monte Carlo simulations of the physical phase-flip errors and their propagation in the circuit, followed by a minimum-weight perfect matching (MWPM) decoder~\cite{Fowler_PRL_2012, pymatching}.
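For illustration, a minimal Monte Carlo sketch of such a simulation at the phenomenological level (independent data phase-flips with probability $p$ per round and syndrome-bit flips with probability $q$, without circuit-level error propagation, correlations or leakage) can be written with the PyMatching package. The snippet below follows the check-matrix interface of PyMatching~0.7 (argument names may differ in more recent versions) and is only meant as a sketch of the sampling-plus-MWPM workflow, not as the full circuit-level simulation used for the results of this paper.
\begin{verbatim}
import numpy as np
from pymatching import Matching

def logical_phase_flip_rate(d, p, q, rounds, trials, seed=0):
    rng = np.random.default_rng(seed)
    # Parity checks of the distance-d repetition code: s_i = e_i + e_{i+1} mod 2.
    H = np.zeros((d - 1, d), dtype=np.uint8)
    for i in range(d - 1):
        H[i, i] = H[i, i + 1] = 1
    matching = Matching(H,
                        spacelike_weights=np.log((1 - p) / p),
                        repetitions=rounds,
                        timelike_weights=np.log((1 - q) / q))
    failures = 0
    for _ in range(trials):
        new_errors = (rng.random((rounds, d)) < p).astype(np.uint8)
        cumulative = np.cumsum(new_errors, axis=0) % 2   # accumulated error per round
        total_error = cumulative[-1]
        syndrome = (H @ cumulative.T) % 2                # shape (d-1, rounds)
        flips = (rng.random((d - 1, rounds)) < q).astype(np.uint8)
        flips[:, -1] = 0                                 # final round read out perfectly
        noisy = (syndrome + flips) % 2
        noisy[:, 1:] = (noisy[:, 1:] + noisy[:, :-1]) % 2  # detection events
        correction = matching.decode(noisy)
        residual = (total_error + correction) % 2
        failures += int(residual.any())  # residual is either trivial or a logical flip
    return failures / trials

print(logical_phase_flip_rate(d=9, p=0.01, q=0.05, rounds=9, trials=5000))
\end{verbatim}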
Here, we discuss these expected error correction performances in two limits. First, in the limit of $\eta\rightarrow \eta_{\text{th}}\approx 7.61\times 10^{-3}$, we note that the operation times $T^{\star}\rightarrow 3.23/|\alpha|^2\kappa_2$. This means that various gates are performed in times of order $1/\kappa_2$ or much shorter. As it will become clear in the next subsection, such short gate times lead to non-negligible leakage out of the code space that could lead to new challenges such as time-dependent and correlated error models. In order to overcome this problem, in the next subsection, we propose to add a qubit refreshment process acting as a leakage reduction unit (LRU). This however comes at the expense of a deterioration of the threshold as it increases the total duration of the QEC cycle. Next, we note that in the limit of $\eta\rightarrow 0$, the operation time $T^{\star}$, scaling as $1/|\alpha|^2\kappa_2\sqrt{\eta}$, becomes long with respect to the typical entropy evacuation time of $1/\kappa_2$. This is mainly to ensure a balanced reduction of error probability between data and ancilla qubits. In Section~\ref{sec:fastgates}, we argue that relaxing this requirement of balanced error probability reduction can lead to significantly better error correction performance.
\subsection{Leakage and qubit refreshment step \label{ssec:LRU}}
The finite duration of the CNOT gates in the error correction circuit also leads to significant leakage out of the code space. \mm{Note that, contrary to the case of conventional qubits, and due to the continuous variable nature of encoding in cat qubits, the logical operations such as the CNOT gate perform rather well even in presence of leakage.} The main issue is with a coherent build-up of the leakage leading to different error models for the operations in the logical circuit from one step to the other. More importantly, such a leakage could also lead to correlated errors in time and space that could drastically limit the performance of the error correction.
This leakage out of the code space is quantified by the mean value of the projector $\hat{\mathrm{P}}^{\perp} = \hat{\mathrm{I}} - \hat{\mathrm{P}}$ with $\hat{\mathrm{P}} = \ketbra{\mathcal{C}_\alpha^+}{\mathcal{C}_\alpha^+} +
\ketbra{\mathcal{C}_\alpha^-}{\mathcal{C}_\alpha^-}$ and $\hat{\mathrm{I}}$ is the identity. For instance, the optimal gate time $T^{\star}$ for $\eta=10^{-3}$ and $|\alpha|^2=8$ is close to $1/\kappa_2$. As it can be seen in the simulations of Fig.~\ref{fig:leakage_cnot_reconvergence}, this leads to a leakage out of the code space as large as $7.1 \times 10^{-3}$.
A simple solution to handle leakage in the context of dissipative cat qubits consists in refocusing the cat qubit in the code space by letting it evolve under the action of two-photon driven dissipation. More precisely, each CNOT gate is followed by a qubit refreshing time during which the driven two-photon dissipation refocuses the leaked state to the code space. This simple process can be compared to more invasive LRUs considered for instance in the context of transmon qubits~\cite{Aliferis2007, Battistel2021, McEwen2021} that convert leakage into Pauli errors. One typical solution, implemented recently in an experiment~\cite{McEwen2021}, consists in adiabatically sweeping the qubit frequencies past a lossy resonator to swap excitations and go back to the $(\ket{g}, \ket{e})$ manifold in every round of the QEC circuit ~\cite{Chen2021}.
In Fig.~\ref{fig:leakage_vs_bitflip_changing_idle}, we compare the leakage rate after this qubit refreshing time with the bit-flip probability for different values of the cavity population $|\alpha|^2$. In these simulations, we consider a similar CNOT gate time and subsequent qubit refreshing time of $T$. By varying this duration $T$, we note that for $T\gtrsim 1/\kappa_2$, the leakage rate post refreshing time is below the bit-flip error probability and hence can be safely neglected. Therefore, we consider the duration $T=1/\kappa_2$ to be a lower bound on the CNOT gate time one can use in the QEC circuit without introducing spurious effects due to leakage. This means that the logical circuit simulations of Fig.~\ref{fig:PRA_fitted} and the overhead estimation of Fig.~\ref{fig:PRA_overhead} from \cite{Guillaud_2021} need to be revised close to the threshold value for $\eta$, for which the gate duration $T^{\star}$ becomes too short. Note that this lower bound could be reduced as $|\alpha|^2$ increases but we choose a conservative approach independent of the photon number.
\titleParagraph{Correction to PRA} \mm{Looking again at Figs.~\ref{fig:PRA_fitted} and~\ref{fig:PRA_overhead}, we need to consider the impact of adding these refreshing steps, therefore dealing with longer correction cycles. More precisely, we need to add a refreshing step of length $1/\kappa_2$ between the two CNOTs in one error correction cycle to avoid propagation and creation of correlated errors. Monte Carlo simulations indicate a marginal change in the overhead estimates with respect to Fig.~\ref{fig:PRA_overhead} for $\eta<10^{-3}$. But for $\eta$ larger than $10^{-3}$ the expected overhead can increase significantly. This can also be understood by looking at the color map on the curves of Fig.~\ref{fig:PRA_overhead}. For $\eta<10^{-3}$ the gate duration is larger than $1/\kappa_2$, which allows one to neglect the additional refreshing step. }
\section{\mm{Accelerating QEC cycle with fast CNOTs} \label{sec:fastgates}}
In the previous section, we noted that in the limit of $\eta\rightarrow 0$, a CNOT operation time $T^{\star}$ ensuring its optimal fidelity becomes very long compared to the entropy evacuation time of $1/\kappa_2$. We also argued that this long operation time mainly secures a balanced reduction of error probability between data and ancilla qubits. The idea that we pursue in this section is that QEC is much more resilient to ancilla errors than data ones. More precisely, the QEC tolerates finite measurement errors induced by ancilla phase-flips at the expense of a slightly degraded error threshold~\cite{Dennis2002}. This fact, further clarified in Subsection~\ref{ssec:pheno}, motivates the choice of accelerating the CNOT gates to a minimal gate time of $1/\kappa_2$ equivalent to the time needed for leakage removal. Relying on the asymmetric error model of the CNOT gate, discussed in Subsection~\ref{ssec:CNOTerrormodel}, such a reduction of the gate time leads to a reduced phase-flip error probability of data qubits due to single photon loss, at the expense of increasing ancilla qubits phase-flip error probability induced by non-adiabatic effects. In Subsection~\ref{ssec:overhead_reducedtime}, we show that this faster cycle time drastically improves the error correction performance scaling with the figure of merit $\eta$.
\subsection{Resilience of QEC to ancilla errors\label{ssec:pheno}}
Following the discussion of Subsection~\ref{ssec:CNOTerrormodel}, with the choice of $T^{\star}$ as the CNOT gate time, the error probability for the ancilla and data qubits (neglecting the correlation for the simultaneous data and ancilla errors) are given by $$ p_{\mathrm{Z_a}}=0.987\sqrt{\eta},\qquad p_{\mathrm{Z_d}}=0.282\sqrt{\eta}. $$ Taking into account the ancilla preparation and detection errors, as well as the idle time data errors, this gives rise to a phenomenological error model~\cite{Dennis2002} with data error probability per cycle given by $p=1.128\sqrt{\eta}$ and measurement error probability given by $q=2.538\sqrt{\eta}$. In particular, in the limit where $\eta\rightarrow 0$, both these error probabilities also tend to zero with $\sqrt{\eta}$. This explains the threshold curves provided in Fig.~\ref{fig:PRA_fitted} where $p_{\mathrm{Z_L}}$ scales with $\eta^{d/4}$ in the limit of small $\eta$.
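Explicitly, these per-cycle values correspond to counting the operations seen by each qubit during one error correction cycle of the circuit described in Subsection~\ref{ssec:QEC_Perf}: each bulk data qubit is the target of two CNOTs and idles during ancilla preparation and measurement, while each ancilla undergoes a preparation, two CNOTs and a measurement,
\begin{align*}
p &= 2\times 0.282\sqrt{\eta}+2\times 0.282\sqrt{\eta}=1.128\sqrt{\eta},\\
q &= 0.282\sqrt{\eta}+2\times 0.987\sqrt{\eta}+0.282\sqrt{\eta}=2.538\sqrt{\eta}.
\end{align*}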
As explained earlier, the idea that we pursue in this section is to reduce the operation times to $T=1/\kappa_2$ instead of $T^{\star}$. Following once again the discussion of Subsection~\ref{ssec:CNOTerrormodel}, the ancilla and data qubit error probabilities are now given by $$
p_{\mathrm{Z_a}}=0.159\frac{1}{|\alpha|^2}+1.5|\alpha|^2\eta,\qquad p_{\mathrm{Z_d}}=|\alpha|^2\eta. $$
We furthermore assume that the ancilla preparation and measurement, as well as data idle time error probabilities are given by $|\alpha|^2\kappa_1 T=|\alpha|^2\eta$. In order to avoid leakage induced errors, we further consider a qubit refreshing time (LRU) of $1/\kappa_2$ between two CNOT operations in one cycle (see the inset of Fig.~\ref{fig:QEC_sym_nbar8_fitted}). During this qubit refreshing time, we need to consider an additional error probability of $|\alpha|^2\eta$ for both ancilla and data qubits. Now, for any value of the mean photon number $|\alpha|^2$, as $\eta$ goes to zero, the data error probability $p=5|\alpha|^2\eta$ tends to zero proportionally to $\eta$ (to be compared to $\sqrt{\eta}$ in the previous case), but the ancilla error probability $q=0.318/|\alpha|^2+6|\alpha|^2\eta$ converges to a fixed non-zero value given by $0.318/|\alpha|^2$. As discussed in~\cite{Dennis2002}, for such a phenomenological error model where the measurement error probability is fixed, it is still possible to find a threshold for the data errors. In Fig~\ref{fig:Pheno_threshold_curve}, we plot the threshold $p_{\text{data, th}}$ for data error probability as a function of a fixed measurement error value $p_{\text{meas}}$. More precisely, for fixed values of $p_{\text{meas}}$ between 1 and $20\%$, we numerically calculate $p_{\text{data, th}}$ such that the logical error probability $p_{\mathrm{Z_L}}$ after a MWPM decoding scales as $(p_{\text{data}}/p_{\text{data, th}})^{cd}$, with $c\approx 0.5$.
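The per-cycle budgets quoted above decompose in the same way, now with two CNOTs, one refreshing step and two idling steps for each bulk data qubit, and a preparation, two CNOTs, one refreshing step and a measurement for each ancilla:
\begin{align*}
p &= 2\times|\alpha|^{2}\eta+|\alpha|^{2}\eta+2\times|\alpha|^{2}\eta=5|\alpha|^{2}\eta,\\
q &= |\alpha|^{2}\eta+2\times\left(\frac{0.159}{|\alpha|^{2}}+1.5|\alpha|^{2}\eta\right)+|\alpha|^{2}\eta+|\alpha|^{2}\eta=\frac{0.318}{|\alpha|^{2}}+6|\alpha|^{2}\eta.
\end{align*}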
We note that, with a significant measurement error probability of $10$ to $20\%$, we can still expect error thresholds of a few percents on data qubits. Furthermore, increasing the measurement error in this range, we only slightly decrease the data error threshold value. This threshold is thus quite resilient to measurement errors.
\begin{figure}\label{fig:Pheno_threshold_curve}
\end{figure}
\subsection{Logical circuit simulations and overhead estimates \label{ssec:overhead_reducedtime}}
Similarly to the Subsection~\ref{ssec:QEC_Perf}, here we numerically simulate the error correction circuit plotted in the inset of Fig.~\ref{fig:QEC_sym_nbar8_fitted}. In this logical circuit, at each QEC cycle, the ancillas are prepared in the state $\ket{+}_C$ over a time duration of $1/\kappa_2$ and are also measured along their $X$ axis over a similar duration. The CNOT gates are also performed over the same duration of $1/\kappa_2$, and if not followed by an ancilla preparation and measurement step, we consider an additional qubit refreshing step of duration $1/\kappa_2$ to refocus the qubit state on the cat manifold, avoiding leakage-induced problems. Similarly to the previous subsection, the phase-flip error probabilities for the qubit preparation, measurement and idling steps are given by $p_{\mathrm{Z_a}}=p_{\mathrm{Z_d}}=|\alpha|^2\eta$. Also the phase-flip error probabilities for the CNOT gates are given in~\eqref{eq:error_model_symZ1} and~\eqref{eq:error_model_symZ1Z2} with $T=1/\kappa_2$.
The results of these simulations for the particular choice of mean photon number $|\alpha|^2=8$ and several code distances $d$ are plotted in Fig.~\ref{fig:QEC_sym_nbar8_fitted}. These results are to be compared to the case of the choice of $T^{\star}$ as the duration of the operations, plotted in Fig.~\ref{fig:PRA_fitted} and discussed in Subsection~\ref{ssec:QEC_Perf}. We can see that for this value of $|\alpha|^2=8$, the threshold $\eta_{\text{th}}$ has decreased from $7.6 \times 10^{-3}$ to $2.3 \times 10^{-3}$. This can be simply explained with the arguments in the end of Subsection~\ref{ssec:LRU}. Indeed, this threshold is over-estimated for the case $T=T^{\star}$ as close to this choice, the CNOT gates could induce important leakage that is neglected in the simulations of Fig.~\ref{fig:PRA_fitted}. However, the important observation is that below the new threshold value $\eta_{\text{th}}\approx 2.3 \times 10^{-3}$, in the regime where we benefit from the exponential suppression of the logical phase-flip with the code distance, the coefficient in the exponent has nearly doubled going from $c=2.58 \times 10^{-1}$ (close to one quarter) to $c=4.4\times 10^{-1}$ (close to one half).
So far, we have only considered the logical phase-flip error. In order to estimate the required overhead to reach a certain logical error rate, the bit-flip errors need to be taken into account. In the logical circuit (inset of Fig.~\ref{fig:QEC_sym_nbar8_fitted}), all operations have nonzero bit-flip error probability. Nevertheless, by far, the most significant contribution to the bit-flip error is due to the CNOT gates and therefore we neglect the contribution of the other operations. For the CNOT gate, with the parameter $\eta$ in the typical range of values between $10^{-5}$ and $10^{-2}$ considered here, the probability of bit-flip type errors numerically fits the following ansatz $p_{\mathrm{X}}^{\mathrm{CNOT}}= 0.5 \times e^{-2 |\alpha|^2}$. Here $p_{\mathrm{X}}^{\mathrm{CNOT}}$ sums over all the possible error mechanisms leading to a bit-flip: $X_1$, $X_2$, $Y_1$, $Y_1X_2$, $X_1X_2$, $Z_1X_2$, $Y_2$, $X_1Y_2$, $X_1Z_2$, $Y_1Y_2$, $Y_1Z_2$ and $Z_1Y_2$. Also, while the first term corresponds to bit-flip errors induced by single photon loss at rate $\kappa_1$, the second term (independent of $\eta$) corresponds to non-adiabatic bit-flip errors. Looking at the QEC circuit plotted in the inset of Fig.~\ref{fig:QEC_sym_nbar8_fitted}, an upper bound for the total bit-flip error probability per QEC cycle is therefore given by $p_{\mathrm{X_L}} = 2(d-1)p_{\mathrm{X}}^{\mathrm{CNOT}}$.
Now, the overall logical error probability per error correction cycle $\epsilon_{\mathrm{L}}$ can be upper bounded by $p_{\mathrm{X_L}}+p_{\mathrm{Z_L}}$. In Fig.~\ref{fig:overhead_fixedCNOT}, we plot, as a function of the figure of merit $\eta$, the minimum values of the code distance $d$ and of the mean photon number $|\alpha|^2$ leading to a logical error rate $\epsilon_{\mathrm{L}}=10^{-5}$, $10^{-7}$ and $10^{-10}$. This is to be compared to the choice of $T=T^{\star}$ for the operations in the logical circuit, plotted in Fig.~\ref{fig:PRA_overhead}. Once again one should note that the estimated overhead in this latter case needs to be taken with precaution as for larger values of $\eta \gtrsim 10^{-3}$, strong leakage has been neglected in these simulations. The addition of a qubit refreshing time as in Subsection~\ref{ssec:LRU}, would bring the required overhead closer to what is estimated in Fig.~\ref{fig:overhead_fixedCNOT} for these values of $\eta \gtrsim 10^{-3}$. More interestingly, for smaller values of $\eta$ below $10^{-4}$, the required code distance is smaller because of the better scaling of $p_{\mathrm{Z_L}}$ with $\eta$, as explained above. For example, for $\eta = 10^{-5}$, a logical error rate of $\epsilon_{\mathrm{L}}=10^{-10}$ is achieved using a repetition of $9$ cat qubits of size $|\alpha|^2\approx14$ versus a repetition code distance of 15 cat qubits of size $|\alpha|^2\approx13$ with the prior choice $T=T^{\star}$. The higher photon number can be explained by a higher bit-flip error probability coming from non adiabaticity in the context of faster gates, as can be seen in the insets of Figs. \ref{fig:PRA_overhead} and \ref{fig:overhead_fixedCNOT}.
\section{\mm{Accelerating measurement cycle with fast ancilla qubits}} \label{sec:Asymmetry}
In the previous section, we showed that the error correction performance of the ``cat qubit+repetition code'' architecture could be improved by accelerating the CNOT gates. In this section, we explore a different idea to further accelerate the error syndrome measurements.
The key fact we exploit here is that, from an experimental point of view, the difficult quantity to minimize is the ratio between the undesired single-photon loss (or other decoherence channels of the harmonic oscillator) and the engineered two-photon dissipation rate, $\eta = \kappa_1 / \kappa_2$, while there is some flexibility to set the absolute values of these quantities ($\kappa_1$ and $\kappa_2$) by varying the specific circuit design of the cat qubit. The intuition behind this fact is that circuit designs leading to high-Q modes with very low $\kappa_1$ usually rely on a strict isolation of the mode, making it harder to get a strong non-linear coupling to this mode. As a consequence, it is harder to engineer a two-photon exchange between this high-Q mode and a low-Q buffer mode, ultimately limiting the strength of the engineered two-photon dissipation $\kappa_2$. Motivated by this observation, we now assume an asymmetry between the dissipative rates of the ancilla and data cat qubits, $\Theta:=\kappa_2^a/\kappa_2^d>1$, while keeping the value of $\eta=\kappa_1^a/\kappa_2^a=\kappa_1^d/\kappa_2^d$ fixed.
This section is structured as follows. First, we demonstrate how this extra freedom in the system parameters can be exploited to obtain a drastic improvement of error correction performance. The regime achieving this performance requires implementing CNOT gates that are fast with respect to the data cat qubit stabilization time $1/\kappa_2^d$, resulting in significant state leakage of the data qubits. Next, we investigate two spurious effects induced by this leakage: bit-flip errors induced by leakage accumulation, and correlated measurement errors for phase-flip error correction. Via thorough numerical simulations including these effects, we argue that they are not detrimental to the operation of the code in this regime. In Subsection~\ref{ssec:asym_correlation}, by using an appropriate basis for the Hilbert space of the data cat qubits, we show that one can use a classical model for measurement error correlations. This observation makes the full Monte Carlo simulations of the logical error correction circuit including measurement correlations numerically tractable. Building on such numerical simulations, in Subsection~\ref{ssec:overhead_asym}, we compute the error correction overheads for the implementation presented in this section.
\begin{figure}\label{fig:leakage_vs_asym}
\label{fig:CNOT_em_vs_asym}
\end{figure}
\subsection{Asymmetric phase-flip error model and state leakage\label{ssec:asym}}
As detailed in Subsection~\ref{ssec:CNOTerrormodel}, performing the CNOT gate in finite time creates non-adiabatic phase-flip errors on the ancilla qubit with probability $0.159 / (|\alpha|^2 \kappa_{2} T)$. When the ancilla and data cat qubits have different stabilization rates, $\Theta > 1$, one may wonder whether the gate should be adiabatic with respect to the slower of the two timescales $1/\kappa_2^d$, or if it suffices to be adiabatic with respect to the fast timescale $1/\kappa_2^a$. We check the latter by numerically simulating the following evolution implementing a CNOT gate (in the presence of single-photon loss)
\begin{equation} \label{eq:ME_init} \begin{split} \frac{d \hat{\rho}}{d t}\ =\ &
\kappa_{2}^a
\mathcal{D}[\hat{a}^{2}-|\alpha|^2]\hat{\rho}+\kappa_2^d
\mathcal{D}[\hat{L}_d(t)] \hat{\rho} \\
&+\kappa_1^a\mathcal{D}[\hat{a}]\hat{\rho}+\kappa_1^d
\mathcal{D}[\hat{d}] \hat{\rho}-i[\hat{H}, \hat{\rho}] \\ \end{split} \end{equation}
As discussed previously, we assume $\kappa_1^a/\kappa_2^a=\kappa_1^d/\kappa_2^d=\eta$, vary the asymmetry $\Theta=\kappa_2^a/\kappa_2^d$, and set the gate time to $T_{\mathrm{CNOT}}=1/\kappa_2^a$. The resulting phase-flip error probabilities are shown in Fig.~\ref{fig:CNOT_em_vs_asym}. The non-adiabatic phase-flip errors on the ancilla qubit only slightly increase with the asymmetry $\Theta$, which indicates that the gate time $T_{\mathrm{CNOT}}=1/\kappa_2^a$ is sufficiently slow with respect to the timescale of the ancilla qubit. On the data cat qubit, however, the phase-flips are only caused by single-photon loss, and their probability scales with the CNOT gate time as $p_{\mathrm{Z_d}} = |\alpha|^2 \kappa_1^d T_{\mathrm{CNOT}}/2 = |\alpha|^2 \eta / (2\Theta)$. Therefore, for a fixed value of $\eta$ and for a gate time $ T_{\mathrm{CNOT}}=1/\kappa_2^a$, increasing the asymmetry decreases the data phase-flip error probability.
Given the considerations of Subsection~\ref{ssec:pheno} on the resilience of QEC to measurement errors, it is expected that increasing the ratio $\Theta$ between the ancilla and data stabilization rates leads to an improvement of the error correction performance, as it results in a linear reduction of data phase-flip errors at the cost of a slight increase in measurement errors. At a heuristic level, this can be understood from the fact that the performance of QEC depends on the typical time at which syndrome information is extracted (here, $\propto 1/\kappa_2^a$) versus the typical time at which errors occur ($\propto 1/\kappa_1^d$) $$ \frac{\text{QEC cycle time}}{\text{quantum coherence time}} \propto \frac{\kappa_1^d}{\kappa_2^a} = \frac{\eta}{\Theta}. $$ Thus, by leveraging the system asymmetry $\Theta$, one can hope to achieve a higher threshold value for $\eta$.
While this approach seems promising, it creates an important issue that needs to be addressed. Indeed, by fixing the gate time according to the stabilization rate of the fast ancilla qubits, the data cat qubits experience dynamics much faster than their confinement rate, $T_{\mathrm{CNOT}} = 1 /(\Theta\kappa_2^d) \ll 1/\kappa_2^d$. As argued in Subsection~\ref{ssec:LRU}, such fast gates lead to a significant amount of leakage outside the code space. This is numerically investigated in Fig.~\ref{fig:CNOT_em_vs_asym}, and as expected, increasing the system asymmetry, while fixing the gate time to be the inverse of the ancilla qubit stabilization rate, results in a constant leakage on the ancilla qubit but leads to a significant amount of leakage on the data qubit.
As previously discussed in Subsection~\ref{ssec:LRU}, this can lead to two problems: leakage-induced bit-flips on data qubits, and correlations in the measurement errors compromising the functioning of the repetition code error correction. In the next subsection, we investigate these two effects.
\subsection{Numerical investigation of leakage-induced bit-flips and measurement error correlations\label{ssec:num_asym}}
In order to investigate the effect of data leakage as a function of increased system asymmetry, we perform numerical simulations of repeated logical $X$ measurements according to the circuit depicted in Fig.~\ref{fig:z_correlations_circuit}. In this simulation, we focus on the non-adiabatic effects by neglecting other noise sources (i.e. we assume $\kappa_1^a = \kappa_1^d = 0$). The measurement is repeated $\Theta$ times (using integer values of the asymmetry $\Theta$ for simplicity) as the system asymmetry is increased, such that the total simulated time $T = 1/\kappa_2^d$ is fixed.
To check the impact of leakage on the bit-flip errors, the data cat qubit is initialized in $|0\rangle_C=(\ket{\mathcal{C}_\alpha^+}+\ket{\mathcal{C}_\alpha^-})/\sqrt{2}$ and the ancilla in the mixed state $\rho_a = (|+\rangle_C\langle+| + |-\rangle_C\langle-|)/2=(\ket{\mathcal{C}_\alpha^+}\bra{\mathcal{C}_\alpha^+}+\ket{\mathcal{C}_\alpha^-}\bra{\mathcal{C}_\alpha^-})/2$. The particular choice of a fully mixed initial state for the ancilla qubit provides an average of the bit-flip error probabilities over all possible initial states. Moreover, the dynamics of the data cat qubit is symmetric, which justifies the choice of the initial state $\ket{0}$. For each value of $|\alpha|^2$ and asymmetry $\Theta$, the probability of data bit-flip errors is calculated after $\Theta$ rounds of measurement. This is done by calculating the mean value of the invariant $J_Z$ on the data cat qubit, \mm{defined in \cite{NotesHouches} as $J_Z=J_{+-}+J_{-+}$}, which is exponentially close to $\text{sign}(\hat x)$. As can be seen in Fig.~\ref{fig:data_bitflips}, even though the data cat qubit has a significant amount of leakage due to the fast gates, the bit-flip error probability remains exponentially suppressed with the mean number of photons $|\alpha|^2$. More precisely, increasing the asymmetry leads to an increase in the absolute values of the bit-flip probabilities but does not significantly impact their exponential suppression. This can be qualitatively understood from the fact that the distortion of the state induced by the fast gates is local in phase space, while creating bit-flips requires a transfer of population between the left and right half-planes of phase space.
The analysis of the impact of leakage on the measurement errors is more subtle. In this subsection, we investigate it with the same toy model simulation of the circuit in Fig.~\ref{fig:z_correlations_circuit}. Further analysis and simulations of the full QEC logical circuit are provided in the next subsection. Here, both ancilla and data qubits are initialized in the state $|+\rangle=\ket{\mathcal{C}_\alpha^+}$. In the absence of errors, all $\Theta$ measurements would produce the outcome $+1$. However, the phase-flips on the ancilla qubit during the operation of the gate lead to measurement errors (note that in the absence of other noise sources the data qubit does not undergo any phase-flip). More precisely, the CNOT gates, while inducing a leakage on the data cat qubits, do not change the photon-number parity, which encodes the logical $X$ operator. This leakage however compromises the functioning of the subsequent CNOT gates, leading to further phase-flip errors on the ancilla qubit. \mm{This can be seen in Fig.~\ref{fig:control_PhaseFlips}, where the probability of ancilla phase-flip errors increases after each round of circuit execution. The more subtle effect of the data leakage is, however, that since this leakage survives over many measurement rounds, it} leads to correlated measurement errors. The impact of this correlation can be observed through a majority vote, as shown in Fig.~\ref{fig:Xmeas_correlations}.
The probability of an incorrect majority vote (a majority of `$-1$' measurement outcomes) is plotted in Fig.~\ref{fig:Xmeas_correlations} (left-hand plot), with the label ``Quantum correlations'', as the master equation simulations correspond to a full quantum treatment. These simulations are to be compared to the right-hand plot in the same Figure~\ref{fig:Xmeas_correlations}, where the measurement error correlations are neglected by refocusing the data mode to the cat manifold after each measurement. The higher error probabilities in the left-hand plot reveal the impact of correlation: less information is extracted through each measurement.
The good news, however, is that even in the presence of these correlations, increasing the system asymmetry $\Theta$ leads to a significant improvement of the overall measurement fidelity. Even though a larger asymmetry increases the target state leakage, resulting both in lower individual CNOT gate fidelities and in correlations between measurement errors, being able to repeat more measurements within the same amount of time $1/\kappa_2^d$ allows more information to be extracted. For instance, for a mean photon number of $|\alpha|^2 = 10$, the measurement infidelity in the symmetric case ($\Theta = 1$, and therefore a single measurement) is about $1.8 \times 10^{-2}$, while $\Theta = 11$ (and thus repeating the measurement 11 times) yields an effective measurement infidelity of $2.9\times 10^{-5}$.
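To make the benefit of measurement repetition concrete, the short script below (a baseline sketch, not the simulation used in this work) computes the failure probability of a majority vote over $n$ repeated measurements under the simplifying assumption of \emph{independent} errors with single-shot error probability $p$; the gap between this independent-error estimate and the correlated value of $2.9\times 10^{-5}$ quoted above quantifies the penalty due to the leakage-induced correlations.
\begin{verbatim}
# Baseline estimate: majority vote over n repeated measurements,
# assuming *independent* errors (optimistic compared to the
# correlated quantum simulations discussed in the text).
from math import comb

def majority_vote_error(p, n):
    # probability that strictly more than half of n outcomes are wrong
    # (use odd n to avoid ties)
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

p_single = 1.8e-2      # single-shot infidelity quoted above (|alpha|^2 = 10)
for n in (1, 3, 11):   # n = Theta repetitions within one slow stabilization time
    print(n, majority_vote_error(p_single, n))
\end{verbatim}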
In conclusion, even though the leakage-induced correlations indeed reduce the global fidelity obtained from an increased measurement repetition rate, using an asymmetric ancilla-data system remains an efficient strategy to obtain a high-fidelity effective measurement. In the next subsection, we go further in this analysis and explain how to capture such correlation effects in full QEC circuit simulations.
\subsection{\mm{Tractable model for leakage-induced correlations}\label{ssec:asym_correlation}}
In this subsection, we develop a model to perform circuit-level simulations of a repetition code while including the effect of state leakage. In previous works~\cite{Guillaud_2021,Chamberland2022}, the circuit-level simulations of concatenated `cat qubit + repetition code' were done in two steps: first, an effective Pauli error model was derived for the cat qubits. This was achieved with an analytical model reduction or using a master equation simulation. The goal of this first step is to reduce the description of error channels acting on the full Hilbert space of the harmonic oscillator to a description on the two-dimensional cat qubit manifold. The second step then consists in performing (efficient) sampling of the repetition code logical circuit using these effective error models.
In the present work, however, we are interested in the regime where the states of the data cat qubits are highly deformed and thus cannot be treated as two-dimensional systems. Furthermore, we are specifically interested in investigating the effect of leakage-induced correlations on the logical error probability of the repetition code. Thus, it is crucial to use an enlarged (dimension $>2$) Hilbert space to capture the effect of state leakage.
The strategy used to perform such simulations is the following. First, we describe the system dynamics in a basis adapted to the cat qubit encoding, the so-called `Shifted Fock Basis' (SFB) introduced in~\cite{Chamberland2022}. We argue that the quantum coherence created between such basis states can be safely neglected for the purpose of capturing the correlation effects. This assumption is justified both by numerical evidence and by a model reduction (valid in the regime $\Theta \gg 1$) for which we show explicitly that the dynamics does not create significant coherence between these states. Under this assumption, the errors due to the CNOT process map a pure state to a classical mixture of such basis states. The circuit is then efficiently sampled by generating a random number to select one state of the classical mixture according to the corresponding probability distribution. We now detail these steps.
For a detailed introduction to the SFB, we refer the reader to Appendix C of~\cite{Chamberland2022} and only recall the basics here for completeness. The basis is built from two families of `cat-like' states of well-defined photon-number parity, based on displaced Fock states $$
|\phi_{\pm, n}\rangle := \tfrac{1}{\sqrt2}[\mathcal{D}(\alpha)\pm(-1)^n\mathcal{D}(-\alpha)]|n\rangle. $$
These states are not normalized (but their norm is exponentially close to 1 in the limit $|\alpha|^2 \gg 1$), and the cat qubit subspace is spanned by $|\phi_{\pm, 0}\rangle$. The index $\pm$ refers to the photon-number parity of the associated state, and the index $n$ refers to the excitation number out of the cat qubit subspace. Following the subsystem decomposition idea in~\cite{Pantaleoni_2020}, the basis states may be written as $|\phi_{\pm, n}\rangle = |\pm\rangle \otimes |n\rangle$, where the first state in the tensor product refers to the state of a logical encoded qubit and the second one refers to a gauge mode. Noting that the annihilation operator of the original mode $\hat{a}$ acts as $\hat{a} |\phi_{\pm, n}\rangle = \sqrt{n}|\phi_{\mp, n-1}\rangle + \alpha |\phi_{\mp, n}\rangle$, in the SFB it reads $\hat{a} = Z_a\otimes(\hat{b}_a + \alpha)$, where $Z_a$ represents the Pauli $Z$ operator on the encoded qubit, and $\hat b_a$ represents the annihilation operator of the virtual gauge mode. In this basis, assuming $\kappa_1^a=\kappa_1^d=0$, the undesired part of the dynamics associated with the CNOT process (obtained by going to an appropriate rotating frame~\cite{Chamberland2022} in~(\ref{eq:ME_init})) can be approximated by \begin{equation} \begin{aligned} \frac{d\rho}{dt}&=-i\frac{\pi}{4T}\left[Z_a\otimes(\hat b_a\hat b_d+\hat b_a^\dag\hat b_d^\dag),\rho\right]\\
&+4\kappa_2^a|\alpha|^2\mathcal{D}[\hat b_a]\rho\\
&+4\kappa_2^d|\alpha|^2\mathcal{D}[Z_a(2\pi t/T)\otimes \hat b_d]\rho, \end{aligned} \label{eq:ME_CNOT_SFB} \end{equation} where $Z_a(\theta)=\exp(i\theta Z_a/2)$. In this approximation, detailed in the Appendix D of~\cite{Chamberland2022} (equation (D26)), we consider at most one excitation in the gauge modes and weak couplings are also neglected. More precisely, the above approximate model only makes sense up to the first excitation of the gauge modes $\hat b_a$ and $\hat b_d$. Therefore the gauge modes $\hat b_a$ and $\hat b_d$ can be replaced by gauge qubits $\hat\sigma^g_{a,-}$ and $\hat\sigma^g_{d,-}$. Here, noting furthermore that $\kappa_2^a\gg \kappa_2^d$, we adiabatically eliminate the ancilla gauge qubit $\hat\sigma^g_{a,-}$, while keeping the data gauge qubit $\hat\sigma^g_{d,-}$.
\begin{figure*}\label{fig:QEC_circuit_asym}
\end{figure*}
This leads to the effective master equation \begin{equation}\begin{aligned}
\frac{d\rho}{dt}=&4\kappa_2^d|\alpha|^2\mathcal{D}[Z_a(2\pi t/T)\otimes \hat\sigma^g_{d,-}]\rho\\
&+\frac{\pi^2}{16|\alpha|^2\kappa_2^aT^2}\mathcal{D}[Z_a\otimes\hat\sigma^g_{d,+}]\rho. \end{aligned}\label{eq:Eff_ME_CNOT_SFB}\end{equation} We note that for an initial state of the form $$ \rho_{\text{in}}=\rho_{in,0}^a\otimes\ket{0}_g^d\bra{0}+\rho_{in,1}^a\otimes\ket{1}_g^d\bra{1} $$ the solution remains of the same form (i.e. diagonal with respect to the data gauge qubit), with $\rho^a_0$ and $\rho^a_1$ satisfying \begin{align} \frac{d}{dt}\rho^a_0&=r_1Z_a\left(\frac{2\pi t}{T}\right)\rho^a_1Z_a\left(-\frac{2\pi t}{T}\right)-r_2\rho^a_0\notag\\ \frac{d}{dt}\rho^a_1&=r_2 Z_a\rho^a_0 Z_a-r_1\rho^a_1, \end{align}
with $r_1=4\kappa_2^d|\alpha|^2$ and $r_2=\pi^2/(16|\alpha|^2\kappa_2^aT^2)$.
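As a consistency check, the rate $r_2$ can be recognized as the standard adiabatic-elimination rate $4g^2/\kappa$ associated with the coupling of strength $g=\pi/4T$ between the two gauge modes in~\eqref{eq:ME_CNOT_SFB} and the fast decay of the ancilla gauge qubit at rate $\kappa=4\kappa_2^a|\alpha|^2$:
\[
r_2=\frac{4(\pi/4T)^2}{4\kappa_2^a|\alpha|^2}=\frac{\pi^2}{16|\alpha|^2\kappa_2^aT^2}.
\]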
The above observation essentially means that the data gauge qubit can be treated as a classical memory bit. More precisely, during each CNOT operation, starting from any state 0 or 1, this classical bit can either stay in its initial state or switch to the other state, and this is accompanied by the application of an appropriate partial Kraus map on the ancilla qubit. We denote these Kraus maps as $\mathbb{K}_{0\rightarrow0}$, $\mathbb{K}_{0\rightarrow1}$, $\mathbb{K}_{1\rightarrow0}$, and $\mathbb{K}_{1\rightarrow1}$, and note that starting from the ancilla qubit state $\rho^a$ and data gauge bit 0, the data gauge bit remains in the state $0$ with probability $\text{tr}\left[\mathbb{K}_{0\rightarrow0}(\rho^a)\right]$ and switches from $0$ to $1$ with probability $\text{tr}\left[\mathbb{K}_{0\rightarrow1}(\rho^a)\right]$. The validity of these assertions is checked by the simulations of Fig.~\ref{fig:Xmeas_correlations}. In these simulations, labeled ``Classical correlations'', we simulate the circuit of Fig.~\ref{fig:z_correlations_circuit}, but this time by treating the data gauge mode as a classical bit. This is done by neglecting the off-diagonal elements of the density matrix in the evolution of the master equation. Such a classical treatment of the data gauge modes will become clearer in the following discussion of the QEC circuit simulations.
In each error correction round, the ancilla qubits, initialized in $\ket{+}\bra{+}$, undergo two such Kraus maps associated with the adjacent data gauge qubits, and are finally measured in the $X$ basis. More precisely, the state of the ancilla qubit $a$ adjacent to two data gauge bits $d$ and $d'$ undergoes the Kraus maps $\mathbb{K}_{i'\rightarrow j'}\circ \mathbb{K}_{i\rightarrow j}$, before being measured in the $X$ basis. Here $i$ and $j$ (resp. $i'$ and $j'$) are the initial and final states of the data gauge bit $d$ (resp. $d'$). To perform efficient circuit-level simulations of the repetition code while accounting for state leakage, one therefore only needs to estimate the values $$ p_{i'\rightarrow j',i\rightarrow j}=\bra{-}\mathbb{K}_{i'\rightarrow j'}\circ \mathbb{K}_{i\rightarrow j}(\ket{+}\bra{+})\ket{-}. $$ These values correspond to the probabilities of an erroneous measurement, conditioned on the classical gauge bits $d$ and $d'$ switching respectively from the states $i$ and $i'$ to the states $j$ and $j'$. In practice, we evaluate the above probabilities by simulating the master equation~\eqref{eq:ME_CNOT_SFB} associated with a CNOT gate twice, once with the data gauge mode $d$ and once with the data gauge mode $d'$, where the ancilla gauge mode is initialized in $\ket{0}\bra{0}$ and the ancilla qubit in $\ket{+}\bra{+}$. Furthermore, the data gauge modes $d$ and $d'$ are initialized in $\ket{i}$ and $\ket{i'}$ and we calculate the final population of $\ket{-}_a\otimes\ket{j}_d\otimes\ket{j'}_{d'}$.
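The following Python sketch illustrates how such a sampler can be organized (it is not the code used for the results of this paper): the data gauge bits are classical variables, each CNOT samples a gauge-bit transition, and the probability of an erroneous ancilla outcome is looked up conditioned on the two adjacent transitions. The transition probabilities and the conditional error table are placeholders standing in for the values $p_{i'\rightarrow j',i\rightarrow j}$ precomputed from the master-equation simulations described above; data phase-flip errors and the decoding step are omitted.
\begin{verbatim}
import random

# Placeholder tables: in the full simulation these are precomputed,
# for each (|alpha|^2, Theta), from the SFB master-equation simulations.
p_excite_to_1 = {0: 0.1, 1: 0.7}   # prob. a gauge bit ends in 1, given its start value
p_meas_err = {(i, j, ip, jp): 0.01  # stand-in for p_{i'->j', i->j}
              for i in (0, 1) for j in (0, 1)
              for ip in (0, 1) for jp in (0, 1)}

def sample_round(bits):
    """One fast round of stabilizer measurements; `bits` are the d data gauge bits."""
    syndromes = []
    for a in range(len(bits) - 1):        # ancilla a couples data qubits a and a+1
        i, ip = bits[a], bits[a + 1]      # gauge bits seen by the two CNOTs
        j = int(random.random() < p_excite_to_1[i])
        jp = int(random.random() < p_excite_to_1[ip])
        bits[a], bits[a + 1] = j, jp      # leakage persists into later CNOTs/rounds
        flipped = random.random() < p_meas_err[(i, j, ip, jp)]
        syndromes.append(int(flipped))    # 1 = erroneous X-basis outcome
    return syndromes
\end{verbatim}
Resetting the gauge bits to zero after each block of $\Theta$ such rounds models the refreshing time on the data cat qubits; the full simulation additionally samples data phase-flip errors and feeds the syndromes to a decoder.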
\subsection{Overhead estimates \label{ssec:overhead_asym}}
Using this efficient sampling, we perform Monte Carlo simulations of the repetition code to estimate the thresholds and logical phase-flip error rates for increasing system asymmetry. For a symmetric system ($\Theta=1$), the common practice is to repeat the stabilizer measurements $d$ times (followed by a final perfect measurement of the stabilizers to ensure projection onto the logical code space) to estimate the logical error probability, where $d$ is the code distance. For an asymmetric system ($\Theta>1$), the temporal correlations between measurement errors result in a decrease of the effective `time' distance of the code, such that estimating the threshold on a time window of $d$ rounds would be inaccurate. Instead, we replace each of the $d$ stabilizer measurement rounds by a block of $\Theta$ `fast' stabilizer measurements, such that the logical error probability is evaluated over a constant total time, even when the asymmetry is increased. After each block of $\Theta$ `fast' stabilizer measurements, a refreshing time of duration $1/\kappa_2^d$ is inserted on the data cat qubits to remove the leakage. The simulated circuit is summarized in Figure~\ref{fig:QEC_circuit_asym}.
\begin{figure*}\label{fig:thresholds}
\end{figure*}
The simulation results are fitted to the empirical formula \begin{multline}\label{eq:empirical}
p_{\mathrm{Z_L}}(d, \eta, |\alpha|^2, \Theta) \approx \\
a(|\alpha|^2, \Theta) \left(\frac{\eta}{\eta_{\text{th}}(|\alpha|^2, \Theta)}\right)^{c(|\alpha|^2, \Theta)(d+1)} \end{multline}
to estimate the phase-flip threshold $\eta_{\text{th}}$, the prefactor $a$, and the scaling coefficient $c$, for different system asymmetries $\Theta$ and different values of the mean photon number $|\alpha|^2$. These estimates are shown in Figure~\ref{fig:thresholds}. As expected, the threshold increases with the system asymmetry. Each block of $\Theta$ rounds of stabilizer measurements (replacing a single round in the symmetric case) implements an effective high-fidelity stabilizer measurement (as in the case of the $X$ measurement of the previous subsection, see Figure~\ref{fig:Xmeas_correlations}).
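As an illustration of this fitting procedure, the parameters $a$, $c$ and $\eta_{\text{th}}$ of the ansatz~\eqref{eq:empirical} can be extracted, at fixed $(|\alpha|^2,\Theta)$, by a least-squares fit in log-space; in the sketch below the ``data'' are synthetic points generated from the ansatz itself and only stand in for the actual Monte Carlo estimates.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
d_vals = np.repeat([3, 5, 7, 9], 4)
eta_vals = np.tile([5e-4, 1e-3, 2e-3, 4e-3], 4)

# Synthetic data generated from the ansatz (eta_th = 1e-2, a = 0.1, c = 0.5),
# standing in for the Monte Carlo estimates of p_ZL:
p_zl = 0.1 * (eta_vals / 1e-2) ** (0.5 * (d_vals + 1))
p_zl *= np.exp(0.05 * rng.standard_normal(p_zl.size))  # mimic statistical noise

def log_model(x, log_a, c, log_eta_th):
    d, log_eta = x
    return log_a + c * (d + 1) * (log_eta - log_eta_th)

popt, _ = curve_fit(log_model, (d_vals, np.log(eta_vals)), np.log(p_zl),
                    p0=(np.log(0.1), 0.5, np.log(1e-2)))
print("a =", np.exp(popt[0]), "c =", popt[1], "eta_th =", np.exp(popt[2]))
\end{verbatim}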
\begin{figure*}\label{fig:overhead_asym}
\end{figure*}
Finally, as in Subsection~\ref{ssec:overhead_reducedtime}, we estimate the overhead required to achieve a per-cycle logical error rate of $\epsilon_{\mathrm{L}} = 10^{-5}, 10^{-7}, 10^{-10}$.
\fm{More precisely, we first use the above fit~\eqref{eq:empirical} for the logical phase-flip probability, at fixed $(|\alpha|^2, \Theta)$, to extrapolate $p_{\mathrm{Z_L}}$ for larger code distances $d$ and smaller figures of merit $\eta$.}
We also estimate the per-cycle bit-flip error probability $p_{\mathrm{X_L}}(d, \Theta, |\alpha|^2) \propto d\exp(-2|\alpha|^2)$ (which we find numerically to be dominated by the non-adiabatic bit-flips during the CNOT gates). Finally, for each value of $\eta$, we numerically optimize the code distance $d$ and the mean photon number $|\alpha|^2$ to achieve $p_{\mathrm{X_L}} + p_{\mathrm{Z_L}} \leq \epsilon_{\mathrm{L}}$. The resulting overheads, for different values of the system asymmetry $\Theta$, are summarized in Figure~\ref{fig:overhead_asym}.
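The following sketch outlines this optimization loop; the fitted parameters of $p_{\mathrm{Z_L}}$, the bit-flip prefactor and the cost metric used to rank the solutions are illustrative placeholders (in the actual procedure, $a$, $c$ and $\eta_{\text{th}}$ are interpolated from the fits of Figure~\ref{fig:thresholds} for each $(|\alpha|^2,\Theta)$).
\begin{verbatim}
import numpy as np

def p_zl(d, eta, a=0.1, c=0.5, eta_th=1e-2):
    # placeholder fit parameters; in the paper they depend on (|alpha|^2, Theta)
    return a * (eta / eta_th) ** (c * (d + 1))

def p_xl(d, nbar, prefactor=0.5):
    # placeholder bit-flip model, p_XL ~ d * exp(-2 |alpha|^2)
    return 2 * (d - 1) * prefactor * np.exp(-2 * nbar)

def min_overhead(eta, eps_L, d_max=59):
    best = None
    for d in range(3, d_max + 1, 2):            # odd code distances
        for nbar in np.arange(4.0, 30.0, 0.5):  # mean photon number grid
            if p_zl(d, eta) + p_xl(d, nbar) <= eps_L:
                cost = d * nbar                 # illustrative cost: photons per logical qubit
                if best is None or cost < best[0]:
                    best = (cost, d, nbar)
                break   # smallest nbar meeting the budget for this distance
    return best

print(min_overhead(eta=1e-3, eps_L=1e-10))
\end{verbatim}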
As one could expect from the increase in the repetition code threshold (Figure~\ref{fig:thresholds}), increasing the system asymmetry drastically improves the performance of the repetition code. For instance, for a fixed value of $\eta = 10^{-3}$, a logical error probability of $10^{-10}$ cannot be achieved for a symmetric system, but can be attained with $d = 25$ data cat qubits of size $|\alpha|^2 \approx 16$ per logical qubit for a system asymmetry of $\Theta = 20$.
\section{Conclusions and further discussions} \label{sec:Conclusion}
\titleParagraph{ccl}
\mm{In this work, we proposed and analyzed the acceleration of the parity measurement cycle in repetition cat qubits as a means to drastically improve their error correction performance. This acceleration relies on two ingredients.}
\mm{ The first ingredient consists in accelerating the CNOT gate, which decreases the fidelity of the gate but, perhaps counter-intuitively, improves the overall performance of the code. We explain this improvement by the asymmetric (between control and target) error model of the CNOT gate for cat qubits, and by the fact that the repetition code is more robust to measurement errors than to errors damaging the encoded information. By accelerating the CNOT operations, one however needs to carefully consider the leakage of the cat qubit states outside the computational subspace. We have analyzed the effects of this state leakage and shown how it can be mitigated by adding appropriate qubit refreshing time steps in the logical circuit.}
The second ingredient relies on an asymmetric architecture, where we assume that the typical dissipative rates (both the stabilization and the decoherence rates) of the ancilla cat qubits can be made larger than those of the data cat qubits. To analyze the performance of the repetition code in this regime, where the data cat qubits suffer from significant state leakage, we introduce a new numerical method that allows one to efficiently sample the repetition code under a circuit-level noise model, while taking into account the leakage of the cat qubits. The crux of this method is to develop a classical model of correlations that faithfully captures the effect of the leakage-induced correlations in measurement outcomes. We find that this scheme achieves close-to-optimal performance of the repetition code, leading to high values of the phase-flip threshold (\textit{e.g.} $\eta_{\text{th}} \approx 1\%$ for $|\alpha|^2 = 8$ photons).
\mm{This proposal is very much inspired by the experimental observation that while the figure of merit $\eta=\kappa_1/\kappa_2$ is hard to decrease, there is some room for varying the absolute values of the rates $\kappa_1$ and $\kappa_2$. One can for instance think of an architecture where the data cat qubits are hosted in extremely high-Q 3D cavity modes, and where the ancilla ones are hosted in lower-Q 2D resonators. The performance observed in the Monte Carlo simulations of this paper is encouraging for such a concatenated, asymmetric architecture.}
Throughout this work, we have analyzed exclusively the logical performance of a quantum memory. One may legitimately wonder if the same conclusions still apply to the case of logical gate implementations for repetition cat qubits. In this architecture~\cite{Guillaud_2019,Guillaud_2021,Chamberland2022}, two types of gate implementations can be distinguished: transversal ones, such as the CNOT gate, and non-transversal ones, such as the Toffoli gate. For the transversal implementations, we expect a similar improvement in the logical performance with fast noisy gates. More precisely, while there is no interest in accelerating the CNOT operations between the data qubits in two code blocks, the parity-check CNOTs in each block can still benefit from the same acceleration. The CNOT gate errors between data qubits merely act as input errors of a memory. Since the code is resilient to these input errors, the logical fidelity of the transversal gate is mainly limited by the performance of the error correction circuit. The analysis for the non-transversal implementations is less straightforward and requires further investigation. However, we believe that with some modifications, these implementations can also benefit from overhead reduction using fast noisy parity-checks.
\mm{One possible direction for extending this work is to consider biased-noise-tailored codes that have some bit-flip error correction capability~\cite{Chamberland2022, Darmawan2021}. In this case, one needs to keep in mind that the measurement of $Z$-stabilizers would require CZ or CNOT gates with the data qubits acting as controls. The data qubits are thus necessarily affected by non-adiabatic errors, and as such one cannot rely on fast low-fidelity gates for $Z$-stabilizer measurements. It should, however, be possible to rely on two time scales, one fast for $X$-stabilizer measurements as they need to compete with high-rate $Z$ errors, and one slow for $Z$-stabilizers competing with rare $X$ errors.}
\section{Acknowledgments}
We thank Alain Sarlette for insightful discussions. We acknowledge funding from the Plan France 2030 through the project ANR-22-PETQ-0006.
\end{document} | arXiv |
Show the Cauchy-Schwarz inequality holds on a Hilbert space
How would one go about showing this? It's a question in one of the workbooks, but it doesn't provide an answer. Any help would be appreciated.
hilbert-spaces
$\begingroup$ Note that for all $t\in\mathbb{R}$, you have $\lVert x-ty\rVert^2 \geqslant 0$. Choose $t$ so that the inequality drops out. $\endgroup$ – Daniel Fischer Mar 10 '14 at 12:20
$\begingroup$ Gowers has some interesting remarks about the inequality and its proofs on his website. $\endgroup$ – Did Mar 10 '14 at 12:33
$\begingroup$ Possible duplicate of math.stackexchange.com/questions/436559/… $\endgroup$ – littleO Mar 10 '14 at 19:09
Somehow, on the whole internet, it seems that the simplest proof of Cauchy-Schwarz has yet to be recorded. At least I couldn't find it after several minutes of searching... The most prominent is certainly the proof mentioned by Daniel Fischer in this comment above, but that always seemed quite contrived to me. Here is the ``best'' proof imho:
Let $x,y$ be unit vectors.
Then $\langle x-y,x-y \rangle = |x|^2-2\langle x,y\rangle+|y|^2 \geq 0$
so $\langle x,y \rangle \leq 1$
Now for any two nonzero vectors, $x,y$ (if one is $0$ the result is trivial), we have that
$\left\langle \frac{x}{|x|},\frac{y}{|y|}\right\rangle \leq 1$ by the result above.
So $\langle x,y \rangle \leq |x||y|$
Of course, we also need to show that $\langle x,y \rangle \geq -|x||y|$, but I will leave it to you to see how to modify the argument to obtain this inequality.
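For instance, one way to fill in the omitted step (in the real case): for unit vectors $x,y$,
$\langle x+y,x+y \rangle = |x|^2+2\langle x,y\rangle+|y|^2 \geq 0$,
so $\langle x,y \rangle \geq -1$, and rescaling by $|x|$ and $|y|$ exactly as above gives $\langle x,y \rangle \geq -|x||y|$.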
Steven GubkinSteven Gubkin
$\begingroup$ You have to show that $\langle x,y\rangle\ge-1$ in case both vectors are unit vectors, too. $\endgroup$ – Michael Hoppe Mar 10 '14 at 14:44
$\begingroup$ @MichaelHoppe I hope that it is clear how to establish that in much the same way...Didn't want to give it all away to OP. $\endgroup$ – Steven Gubkin Mar 10 '14 at 16:03
$\begingroup$ Well done, but in my opinion a hint about the second part would not be unnecessary. $\endgroup$ – Michael Hoppe Mar 10 '14 at 17:26
$\begingroup$ I will add it to the post. $\endgroup$ – Steven Gubkin Mar 10 '14 at 18:34
$\begingroup$ It's worth noting that this proof does not work as nicely for complex Hilbert spaces, in which $||x||^2 + ||y||^2 - 2\mathfrak{Re}\langle x,y \rangle \geq 0$. $\endgroup$ – Ben Bray Mar 11 '18 at 3:10
I am not sure what is the "best" proof of this famous inequality, but I found the following remark helpful, at least conceptually.
For a 2-dimensional Hilbert space, i.e. the usual Euclidean plane of high-school math, the inequality is quite elementary and intuitive: with a drawing, or even working in coordinates, it is straightforward to show that $(ac+bd)^2\leq (a^2+b^2)(c^2+d^2)$.
Now the remark is that for a general pair of vectors in a whatever dimension Hilbert space, you can consider the two dimensional subspace spanned by these two vectors (supposing they are not linearly dependent, in which case there is nothing to prove). We are reduced to the previous case.
In other words, it is really a plane geometry inequality, nothing more.
Gil BorGil Bor
As the Gramian matrix of $(x,y)$, namely $G(x,y):=\begin{pmatrix}\langle x,x\rangle & \langle x,y\rangle\\ \langle y,x\rangle & \langle y,y\rangle\end{pmatrix}$, is well known to be positive semidefinite, we know that $\det\bigl(G(x,y)\bigr)\ge0$ and equality holds iff $x\parallel y$.
Michael HoppeMichael Hoppe
$\begingroup$ Cauchy-Schwarz is probably even more "well known" than this. $\endgroup$ – Did Mar 10 '14 at 17:27
$\begingroup$ Very nice, but uses a lot of machinery. It is elementary to show that it is positive semidefinite, but then I think we need the real spectral theorem to make the claim that the determinant is positive. And we must develop the theory of determinants, eigenvectors, bilinear forms, etc all before this. Still very neat! $\endgroup$ – Steven Gubkin Mar 10 '14 at 18:46
$\begingroup$ Thanks. A student discovering Hilbert spaces is supposed to know of that "machinery". And why not use it, as in the following proof that $\sqrt[3]{2}$ is irrational: suppose $2=p^3/q^3$. Then $q^3+q^3=p^3$, which is impossible due to a result of Wiles. $\endgroup$ – Michael Hoppe Mar 11 '14 at 8:11
| CommonCrawl |
\begin{document}
\title{Cell cycle length and long-time behaviour of an age-size model} \author[K. Pich\'or]{Katarzyna Pich\'or} \address{K. Pich\'or, Institute of Mathematics, University of Silesia, Bankowa 14, 40-007 Kato\-wi\-ce, Poland.} \email{[email protected]} \author[R. Rudnicki]{Ryszard Rudnicki} \address{R. Rudnicki, Institute of Mathematics, Polish Academy of Sciences, Bankowa 14, 40-007 Katowice, Poland.} \email{[email protected]} \keywords{Cell cycle, size-age structured model, semigroup of operators, asynchronous exponential growth} \subjclass[2020]{Primary: 47D06; Secondary: 35F15, 45K05, 92D25, 92C37} \date{March 19th, 2021}
\begin{abstract} We consider an age-size structured cell population model based on the cell cycle length. The model is described by a first order partial differential equation with initial-boundary conditions. Using the theory of semigroups of positive operators we establish new criteria for an asynchronous exponential growth of solutions to such equations. We discuss the question of exponential size growth of cells. We study in detail a constant size growth model and a model with target size division. We also present versions of the model when the population is heterogeneous. \end{abstract}
\maketitle
\section{Introduction} \label{intro} The cell cycle is a series of events that take place in a cell leading to its replication. It is regulated by a complex network of protein interactions~\cite{Morgan}. Modern experimental techniques concerning the cell cycle \cite{CSK,I-B,MC,Perego,T-A,Taniguchi,Vittadello-exp,Wang} allow us not only to understand processes inside single cells, but also to build more precise cellular populations models.
Most populations are heterogeneous. Thus it is important to consider the distribution of the population according to some significant parameters such as age, size, maturity, or the proliferative state of cells. Models of this type are called structured. They are usually represented by partial differential equations with nonlocal perturbations and specific boundary conditions. Knowing the length of the cell cycle allows us to predict the development of unicellular populations and tissue growth and maintenance.
The aim of the paper is twofold. Firstly, to construct an age-size structured model assuming that we know the growth of individual cells and the distribution of the cell cycle length. Secondly, to study the long-time behaviour of the solution of this model.
We consider a model which is based on the following assumptions. The population grows in steady-state conditions. Cells can be described by their age $a$ and size $x$ alone and reproduction occurs by fission into two equal parts. The distribution of the cell cycle length depends only on the initial size $x_b$ of a cell. The velocity of growth of an individual cell depends only on its size $x$, i.e. $x'(t)=g(x(t))$. We also assume that sizes of cells and cell cycle durations are bounded above and bounded away from zero. Moreover, we assume that the initial daughter cell sizes are distributed in some interval which contains the mother's initial size. We formulate a mathematical model which describes the time evolution of the distribution of cellular age and size. The model consists of a partial differential equation with an integral boundary condition and an initial condition. The novelty of our model is that we use the distribution of the length of the cell cycle, instead of a size-dependent probability of division usually used in size-structured models \cite{BA,DHT,Doumic,GH,GW,Heijmans}. Such a probability is difficult to measure experimentally, in contrast to the length of the cell cycle.
We check that the solutions of our model generate a continuous semigroup of operators $\{U(t)\}_{t\ge 0}$ on some $L^1$ space. Under additional assumption that $g(2x)\ne 2g(x)$ for some $x$, we prove that the semigroup $\{U(t)\}_{t\ge 0}$ has asynchronous exponential growth (AEG), i.e. \begin{equation} \label{AEG1} e^{-\lambda t}U(t)u_0(x_b,a)\to Cv(x_b,a)\quad \textrm{for $t\to\infty$}, \end{equation} where $\lambda$ is the Malthusian parameter and $v$ is a stable initial size and age distribution,
which does not depend on the initial distribution $u_0$. The property AEG plays an important role in the study of structured population models \cite{ASW,DHT,GH,Webb-cc},
because we can expect that the real process should be close to a stationary state and then it is easy to estimate biological parameters~\cite{LRB}.
The proof of AEG of $\{U(t)\}_{t\ge 0}$ is based on the reduction of the problem to a stochastic (Markov) semigroup \cite{LiM} by using the Perron eigenvectors and on the theorem that a partially integral stochastic semigroup having a unique invariant density is asymptotically stable \cite{PR-jmaa2}. A similar technique was applied to study other population models \cite{BPR,Pichor-MCM,RP} and to some piecewise deterministic Markov processes \cite{Mac-Tyr,PR-cell-cyc,RT-K-k}.
We note that the AEG property can be proved by using known results on compact semigroups, but it seems to be difficult to check compactness and analyze the spectrum of the generator of our semigroup. It is interesting that even nonlinear models of cell populations (cf. \cite{M-R,RP-M}) can be reduced to stochastic semigroups.
The last two sections contain corollaries from our results (Section~\ref{s:remarks}) and some remarks concerning other models and experimental data (Section~\ref{s:other2}). One of the main points of these sections is the question of what can happen when $g(2x)=2g(x)$ for all $x$. This is an important question because it is usually assumed that the size (volume) of a cell grows exponentially, which means that $g(x)=\kappa x$ and in this case $g(2x)=2g(x)$. If we include in a model the assumption that $g(x)=\kappa x$, then we can obtain some paradoxical results. For example, if the size discrepancy between newborn cells is small, then the descendants of one cell can have the same size at the same time and the size of the population does not grow exponentially even in steady-state conditions. Of course the law of exponential size growth is statistical in nature and we can modify it by considering some random fluctuations in the growth rate. Another problem considered in Section~\ref{s:other2} is how to incorporate into our description some models of the cell cycle: a \textit{constant $\Delta$ model} and a \textit{model with target size division}. Finally, we present versions of the model when the population is heterogeneous, e.g. with an asymmetric division or with fast and slow proliferation.
\section{Model} \label{s:model} We consider the following model of the cell cycle. Denote, respectively, by $a$, $x_b$, and $x$ --- the age, the initial size, and the size of a cell. We assume that $\underline x_b$ and $\overline x_b$ are the minimum and maximum sizes of newborn cells. We also assume that cells age with unitary velocity and grow with a velocity $g(x)$, i.e. if a cell has the initial size $x_b$, then the size at age $a$ satisfies the equation \begin{equation} \label{grow} x'(a)=g(x(a)),\quad x(0)=x_b.
\end{equation} We denote by $\pi_ax_b$ the solution of (\ref{grow}). The length $\tau$ of the cell cycle is a random variable which depends on the initial cell size $x_b$; has values in some interval $[\underline a(x_b),\overline a(x_b)]$; and has the probability density distribution $q(x_b,a)$, i.e. the integral $\int_0^{A} q(x_b,a)\, da$ is the probability that $\tau\le A$. According to the definition of $q$, if a cell has the initial size $x_b$, then $\Phi(x_b,a)=\int_a^{\infty} q(x_b,r)\,dr$ is its \textit{survival function}, i.e. $\Phi(x_b,a)$ is the probability that a cell will not split before age $a$. We assume that if the mother cell has size $x$ at the moment of division, then the daughter cells have size $x/2$, i.e. if the initial size of the mother cell is $x_b$ and $\tau=a$ is the length of its cell cycle, then the initial size of the daughter cell is $S_{a}(x_b)=\tfrac12\pi_ax_b$.
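For instance, if $g(x)=\kappa x$ with $\kappa>0$ (exponential individual growth), then $\pi_ax_b=x_be^{\kappa a}$ and $S_a(x_b)=\tfrac12 x_be^{\kappa a}$, while if $g\equiv g_0$ is a positive constant, then $\pi_ax_b=x_b+g_0a$ and $S_a(x_b)=\tfrac12(x_b+g_0a)$.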
Now we collect the assumptions concerning the functions $g$ and $q$ used in the paper: \vskip1mm
\noindent (A1) $g\colon [\underline x_b,2\overline x_b]\to (0,\infty)$
is a $C^1$-function,
\noindent (A2) $q\colon [\underline x_b,\overline x_b]\times [0,\infty)\to [0,\infty)$ is a continuous function and for each $x_b$ the function $a\mapsto q(x_b,a)$ is a probability density,
\noindent (A3) $0<\underline a(x_b)<\overline a(x_b)<\infty$, $q(x_b,a)>0$ if $a\in (\underline a(x_b),\overline a(x_b))$, and $q(x_b,a)=0$ if $a\notin (\underline a(x_b),\overline a(x_b))$ for each $x_b\in [\underline x_b,\overline x_b]$,
\noindent (A4) $x_b\mapsto \underline a(x_b)$ and $x_b\mapsto \overline a(x_b)$ are continuous functions,
\noindent (A5) $S_{\underline a(x_b)}(x_b)\ge \underline x_b$ and $S_{\overline a(x_b)}(x_b)\le \overline x_b$ for each $x_b\in [\underline x_b,\overline x_b]$,
\noindent (A6) $S_{\underline a(x_b)}(x_b)<x_b<S_{\overline a(x_b)}(x_b)$ for each $x_b\in (\underline x_b,\overline x_b)$.
Fig.~\ref{r:cell-cyc1} and Fig.~\ref{r:cell-cyc2} illustrate our assumptions. Only assumption (A6) needs some explanation. We assume that a daughter cell can have the same initial size as the initial size of a mother cell. In Section~\ref{s:asyp-beh} we will add an extra assumption (A7) which will be used only to show the long-time behaviour of the distribution of $(x_b,a)$. \begin{figure}
\caption{An example of functions $x_b\mapsto \underline a(x_b)$ and $x_b\mapsto \overline a(x_b)$. The function $q$ is positive between the graphs of these functions.}
\label{r:cell-cyc1}
\end{figure}
\begin{figure}
\caption{The relation between the initial sizes of mother and daughter cells (A5,A6).}
\label{r:cell-cyc2}
\end{figure}
Assume that a cell with initial size $x_b$ and age $a$ splits in the time interval of the length $\Delta t$ with probability $p(x_b,a)\Delta t+o(\Delta t)$, i.e. \[ p(x_b,a)=\lim_{\Delta t\downarrow 0} \frac{\operatorname P(\tau\in [a,a+\Delta t] \mid \tau\ge a)}{\Delta t}. \] Since $\Phi(x_b,a)=\exp\big(-\int_0^a p(x_b,r)\,dr\big)$, an easy computation shows that \begin{equation*}
\begin{aligned} q(x_b,a)&=p(x_b,a)\exp\big(-\textstyle{\int_0^a} p(x_b,r)\,dr\big),\\ \quad p(x_b,a)&=\frac{q(x_b,a)}{\int_a^{\infty} q(x_b,r)\,dr} \end{aligned}
\end{equation*} for $a<\overline a(x_b)$. As $\Phi(x_b,\overline a(x_b))=0$, we have $\int_0^{\overline a(x_b)}p(x_b,a)\,da=\infty$.
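For example, if the cell cycle length is uniformly distributed, i.e. $q(x_b,a)=\big(\overline a(x_b)-\underline a(x_b)\big)^{-1}$ for $a\in(\underline a(x_b),\overline a(x_b))$ (ignoring, for the sake of a simple illustration, the continuity required in (A2)), then $\Phi(x_b,a)=\dfrac{\overline a(x_b)-a}{\overline a(x_b)-\underline a(x_b)}$ and $p(x_b,a)=\dfrac{1}{\overline a(x_b)-a}$ for $a\in[\underline a(x_b),\overline a(x_b))$, so that indeed $\int_0^{\overline a(x_b)}p(x_b,a)\,da=\infty$.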
In order to derive a master equation for the distribution of the population with respect to $x_b$ and $a$, we need to introduce a family of Frobenius-Perron operators which describe the relation between the initial sizes of mother and daughter cells.
Let $f(x_b)$ be the density of initial sizes of mother cells that have the fixed length of cell cycle $\tau=a$ for some $a\in (\underline a,\overline a)$, where $\underline a=\min\underline a(x_b)$ and $\overline a=\max\overline a(x_b)$. Denote by $P_af(x_b)$ the density of initial sizes of daughter cells. \begin{lemma} Let $x_a$ be the minimum initial size of cells which can split at age $a$. Then \begin{equation} \label{F-P7} P_af(x_b)=\frac{2g(\pi_{-a}(2x_b))}{g(2x_b)}f(\pi_{-a}(2x_b))\mathbf 1_{[S_a(x_a),\overline x_b]}(x_b). \end{equation} \end{lemma} \begin{proof} Observe that $x_a=\underline x_b$ if $S_a(\underline x_b)\ge \underline x_b$ or $x_a=S_a^{-1}(\underline x_b)$ otherwise. It is clear that $q(x_b,a)=0$ for $x_b< x_a$. We have \begin{equation*}
\int_{S_a(x_a)}^{S_a(y)}P_af(x_b)\,dx_b=\int_{x_a}^{y}f(x_b)\,dx_b\quad \textrm{for $y\ge x_a$}
\end{equation*} or, equivalently, \begin{equation} \label{F-P2} \int_{S_a(x_a)}^{x}P_af(x_b)\,dx_b=\int_{x_a}^{S_a^{-1}(x)}f(x_b)\,dx_b\quad \textrm{for $x\ge S_a(x_a)$}.
\end{equation} From (\ref{F-P2}) it follows that \begin{equation*}
P_af(x_b)=\frac{d}{dx_b}\big(S_a^{-1}(x_b)\big)f\big(S_a^{-1}(x_b)\big)\mathbf 1_{[S_a(x_a),\overline x_b]}(x_b).
\end{equation*} Using the formula $S_a^{-1}(x_b)=\pi_{-a}(2x_b)$ we check that \begin{equation} \label{F-P4} \frac{d}{dx_b}\big(S_a^{-1}(x_b)\big)=\frac{2g(\pi_{-a}(2x_b))}{g(2x_b)}.
\end{equation} In order to show \eqref{F-P4} we introduce two functions: \[ \varphi(x_b,a)=\pi_{-a}(2x_b)\quad\textrm{and}\quad \psi(x_b,a)=\frac{\partial \varphi}{\partial x_b}(x_b,a). \] From $\varphi(x_b,0)=2x_b$ we obtain $\psi(x_b,0)=2$. Since $\frac{\partial \varphi}{\partial a}(x_b,a)=-g(\varphi(x_b,a))$, we have \begin{align*} \frac{\partial \psi}{\partial a}(x_b,a) &=\frac{\partial }{\partial a} \frac{\partial \varphi}{\partial x_b}(x_b,a) =\frac{\partial }{\partial x_b}\frac{\partial \varphi}{\partial a} (x_b,a)\\ &=\frac{\partial }{\partial x_b}\big(-g(\varphi(x_b,a))\big) =-g'(\varphi(x_b,a))\psi(x_b,a). \end{align*} We have received the linear equation $\partial\psi/\partial a= -g'(\varphi(x_b,a))\psi$ with the initial condition $\psi(x_b,0)=2$ which has the solution \begin{equation*}
\psi(x_b,a)=2\exp\bigg(-\int_0^ag'(\varphi(x_b,r))\,dr \bigg).
\end{equation*} Substituting $y=\varphi(x_b,r)$ we receive $dy/dr=-g(y)$ and \begin{equation*}
\psi(x_b,a)=2\exp\bigg(\int_{2x_b}^{\pi_{-a}(2x_b)} \frac{g'(y)}{g(y)}\,dy \bigg)=\frac{2g(\pi_{-a}(2x_b))}{g(2x_b)}. \qedhere
\end{equation*}
\end{proof} The formula \eqref{F-P7} defines a family of operators $P_a\colon L^1[\underline x_b,\overline x_b]\to L^1[\underline x_b,\overline x_b]$, $a\ge 0$. The operators $P_a$ are well defined for $a\in (\underline a,\overline a)$, but we extend the definition of $P_a$ by setting $P_af\equiv 0$ for the remaining values of $a$.
For each $a$ the operator $P_a$ is linear and \textit{po\-si\-tive}, i.e. if $f\ge 0$, then $P_af\ge 0$. Moreover $\|P_af\|_{L^1}\le \|f\|_{L^1}$. The adjoint operator of $P_a$ acts on the space $L^{\infty}[\underline x_b,\overline x_b]$ and it is given by $P_a^*f(x_b)=f(S_a(x_b))=f(\frac12\pi_ax_b)$ for $x_b\ge x_a$ and $P_a^*f(x_b)=0$ for $x_b< x_a$.
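To illustrate formula \eqref{F-P7}, consider two special cases of the growth rate. For a constant rate $g\equiv g_0$ we have $\pi_{-a}(2x_b)=2x_b-g_0a$ and $g(\pi_{-a}(2x_b))/g(2x_b)=1$, so that
\[
P_af(x_b)=2f(2x_b-g_0a)\mathbf 1_{[S_a(x_a),\overline x_b]}(x_b),
\]
while for the exponential growth $g(x)=\kappa x$ we have $\pi_{-a}(2x_b)=2x_be^{-\kappa a}$ and
\[
P_af(x_b)=2e^{-\kappa a}f(2x_be^{-\kappa a})\mathbf 1_{[S_a(x_a),\overline x_b]}(x_b).
\]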
We denote by $u(t,x_b,a)$ the number of individuals in a population having initial size $x_b$ and age $a$ at time $t$. Then, according to our assumptions concerning the model, $p(x_b,a)u(t,x_b,a)\Delta t$ is the number of cells of initial size $x_b$ and age $a$ which split in a time interval of the length $\Delta t$. It means that $2\Delta t\int_0^{\infty}\Big(P_a\big(p(\cdot,a)u(t,\cdot,a)\big)\Big)(x_b)\,da $ is the number of new born cells in this time interval. It should be noted that the operator $P_a$ in the last integral acts on the function $\psi(x_b)=p(x_b,a)u(t,x_b,a)$ at fixed values $t$ and $a$. If there are no limitations concerning the growth of the population and all cells split, then the function $u$ satisfies the following initial-boundary problem: \begin{align} \label{eq1} &\frac{\partial u}{\partial t}(t,x_b,a)
+\frac{\partial u}{\partial a}(t,x_b,a)
=-p(x_b,a)u(t,x_b,a),\quad a<\overline a(x_b),\\ &u(t,x_b,0)=2\int_0^{\infty}\Big(P_a\big(p(\cdot,a)u(t,\cdot,a)\big)\Big)(x_b)\,da, \label{eq2}\\ &u(0,x_b,a)=u_0(x_b,a). \label{eq3} \end{align}
We assume that $u_0$ is a nonnegative measurable function such that \begin{equation} \label{def-u_0} \int_{\underline x_b}^{\overline x_b}\int_0^{\infty} u_0(x_b,a)\Psi(x_b,a)\,da\,dx_b<\infty, \end{equation} where $\Psi(x_b,a)=\exp\big(\int_0^a p(x_b,\bar a)\,d\bar a\big)$. In (\ref{def-u_0}) we have assumed that the initial condition $u_0$ is integrable with weight $\Psi(x_b,a)=\Phi(x_b,a)^{-1}$ because $\Phi(x_b,a)$ is the fraction of cells which will survive beyond age $a$. Since $\overline a(x_b)$ is the maximum age of a cell with initial size $x_b$, it is reasonable to consider variables $x_b$ and $a$ only from the set \[ X=\{(x_b,a)\colon \underline x_b\le x_b\le\overline x_b,\,\,\, 0\le a\le\overline a(x_b)\} \] (see Fig.~\ref{r:cell-cyc3}). \begin{figure}
\caption{The set $X$.}
\label{r:cell-cyc3}
\end{figure} Though we consider $a\le\overline a(x_b)$, it will be convenient to keep the notation of integral $\int_0^{\infty}$ with respect to $a$ as in formula (\ref{eq2}) assuming that $u(t,x_b,a)=0$ for $a> \overline a(x_b)$.
Let $\mathcal B(X)$ be the $\sigma$-algebra of Borel subsets of $X$, $\ell$ be the Lebesgue measure on $X$, and $E$ be
the space $L^1(X)=L^1(X,\mathcal B(X),\ell)$. By $\|\cdot\|_E$ we denote the norm in $E$.
The main purpose of the paper is to show that the solutions of the system~\eqref{eq1}--\eqref{eq3} have asynchronous exponential growth \eqref{AEG1}. The AEG property can be written in the following way: \begin{equation} \label{AEG11} \lim_{t\to\infty} e^{-\lambda t}u(t,x_b,a)=\alpha(u_0)v(x_b,a), \end{equation} where the limit is in the space $E$, $\alpha$ is a linear and bounded functional on $E$, and $v\in E$ does not depend on $u_0$ (see Theorem~\ref{th:long-time-u}). We prove this fact under an additional assumption that $g(2x)\ne 2g(x)$ for some $x$. The schedule of the proof is the following. In Section~\ref{exist-sol} we replace the system~\eqref{eq1}--\eqref{eq3} by one in which the first equation of the system has zero on the right-hand side. Then we construct a $C_0$-semigroup of positive operators $\{T(t)\}_{t\ge 0}$ on the space $L^1(X)$ corresponding to the new system. In Section~\ref{s:eigen} we prove that the infinite\-simal generator $\mathcal A$ of the semigroup $\{T(t)\}_{t\ge 0}$ and the adjoint operator $\mathcal A^*$ have positive
eigenvectors, $f_i$ and $v$, respectively, for some eigenvalue $\lambda$. In Section~\ref{s:asyp-beh} we introduce a semigroup $\{P(t)\}_{t\ge 0}$ given by $P(t)f=e^{-\lambda t}T(t)f$ defined on the space $E_1=L^1(X,\mathcal B(X),\mu)$ with the measure $\mu$ given by $d\mu=v\, dx_bda$. We check that $\{P(t)\}_{t\ge 0}$ is a stochastic semigroup on $E_1$ and that $f_i$ is the unique invariant density of $\{P(t)\}_{t\ge 0}$. We also formulate some general theorem concerning asymptotic stability of stochastic semigroups. We apply this theorem to the semigroup $\{P(t)\}_{t\ge 0}$ and prove its asymptotic stability. We translate this result in terms of the semigroup $\{T(t)\}_{t\ge 0}$. Finally we return to the semigroup $\{U(t)\}_{t\ge 0}$ generated by the system~\eqref{eq1}--\eqref{eq3} and we show that it has the AEG property.
Most experiments concerning microorganisms are conducted in chemo\-stats, where cells can be grown in a physiological steady state under constant environmental conditions. Then cells are removed from the system with the outflow at rate $D$. In this case we add to the right-hand side of equation (\ref{eq1}) the term $-Du(t,x_b,a)$. Similarly, if cells die with rate $d(t,x_b,a)$, then we add to the right-hand side of (\ref{eq1}) the term $-d(t,x_b,a)u(t,x_b,a)$. One can consider more advanced models with cellular competition, but in this case all functions $q$, $g$, and $d$ can also depend on the total number of cells and we do not investigate such models.
\section{A semigroup approach}
\label{exist-sol} It is convenient to substitute $z(t,x_b,a)=u(t,x_b,a)\Psi(x_b,a)$ and $z_0(x_b,a)=u_0(x_b,a)\Psi(x_b,a)$ in (\ref{eq1})--(\ref{eq3}). Then the system (\ref{eq1})--(\ref{eq3}) takes the form \begin{align} \label{eq1-n} &\frac{\partial z}{\partial t}(t,x_b,a)
+\frac{\partial z}{\partial a}(t,x_b,a) =0,\quad a<\overline a(x_b),\\ &z(t,x_b,0)=2\int_0^{\infty}\Big(P_a\big(q(\cdot,a)z(t,\cdot,a)\big)\Big)(x_b)\,da, \label{eq2-n}\\ &z(0,x_b,a)=z_0(x_b,a). \label{eq3-n} \end{align} Observe that $z_0$ is an integrable function. Since the expression on the right-hand side of (\ref{eq2-n}) will be used quite often, we will use the shortened notation $\mathcal Pz(t,x_b)$ for it. Thus, equation (\ref{eq2-n}) takes the form $z(t,x_b,0)=\mathcal Pz(t,x_b)$. We also use a simplified notation $\mathcal Pf(x_b)$ for the expression $2\int_0^{\infty}\Big(P_a\big(q(\cdot,a)f(\cdot,a)\big)\Big)(x_b)\,da$.
We consider the solutions of (\ref{eq1-n})--(\ref{eq3-n}) as continuous functions $z\colon [0,\infty)\to E$ defined by $z(t)(x_b,a)=z(t,x_b,a)$, where $z(t)$ is the solution of the evolution equation \[ z'(t)=\mathcal Az(t),\quad z(0)=z_0, \] with the operator \[ \mathcal Af(x_b,a)=-\frac{\partial f}{\partial a}(x_b,a)
\] which has the domain
\[
\mathcal D(\mathcal A) =\Big\{f\in E,\quad \frac{\partial f}{\partial a}\in E,\quad f(x_b,0)=\mathcal Pf(x_b)\Big\}. \]
Since a function $f\in E$ is only almost everywhere defined we need to clarify the formula for $f(x_b,0)$. Consider the (partial) Sobolev space \[ W_1(X)=\Big\{f\in E\colon \frac{\partial f}{\partial a}\in E\Big\} \]
with the norm $\|f\|_{W_1(X)}=\|f\|_E+\Big\|\dfrac{\partial f}{\partial a}\Big\|_E$. In the space $W_1(X)$ we introduce a trace operator $\mathcal T\colon W_1(X)\to L^1[\underline x_b,\overline x_b]$, $\mathcal Tf(x_b)=f(x_b,0)$, in the following way. First we show that there exists a constant $c>0$ such that for each function $f\in W_1(X)\cap C(X)$ we have \[
\|\mathcal Tf\|_{L^1[\underline x_b,\overline x_b]}\le c\|f\|_{W_1(X)} \] (see \cite{Evans} Theorem 1, Chapter 5.5). Since the set $W_1(X)\cap C(X)$ is dense in $W_1(X)$ we can extend $\mathcal T$ uniquely to a linear bounded operator on the whole space $W_1(X)$. \begin{proposition} \label{D(A)} The operator $\mathcal A$ with the domain \[ \mathcal D(\mathcal A)=\big\{f\in W_1(X)\colon \mathcal Tf =\mathcal Pf \big\} \] generates a positive $C_0$-semigroup $\{T(t)\}_{t\ge 0}$ on $E$. \end{proposition} We recall that a family $\{T(t)\}_{t\ge0}$ of linear operators on a Banach space $E$ is a $C_0$-\textit{semigroup} or \textit{strongly continuous semigroup} if it satisfies the following conditions:
\begin{enumerate}[\rm(a)] \item \ $T(0)=I$, i.e., $T(0)f =f$ for $f\in E$, \item \ $T(t+s)=T(t) T(s)\quad \textrm{for}\quad s,\,t\ge0$, \item \ for each $f\in E$ the function $t\mapsto T(t)f$ is continuous. \end{enumerate}
The proof of this result is based on a perturbation method related to operators with boundary conditions developed in \cite{greiner} and an extension of this method to unbounded perturbations in $L^1$ space in \cite{GMTK} (see Theorem~\ref{MTK} below).
\begin{theorem} \label{MTK} Let $(\Gamma,\Sigma,m)$, $(\Gamma_{\partial},\Sigma_{\partial},m_{\partial})$ be $\sigma$-finite measure spaces and let $L^1=L^1(\Gamma,\Sigma,m)$ and $L_{\partial}^1=L^1(\Gamma_{\partial},\Sigma_{\partial},m_{\partial})$. Let $\mathcal{D}$ be a linear subspace of~$L^1$. We assume that $A\colon \mathcal D\to L^1$ and $\Upsilon_0,\Upsilon\colon \mathcal D\to L_{\partial}^1$ are linear operators satisfying the following conditions: \begin{enumerate}[\rm(1)] \item for each $\lambda>0$, the operator $\Upsilon_0\colon \mathcal{D}\to L^1_{\partial}$ restricted to the nullspace $\mathcal N(\lambda I-A)=\{f\in\mathcal D\colon \lambda f- Af=0\}$ has a positive right inverse $\Upsilon(\lambda)\colon L^1_{\partial}\to \mathcal N(\lambda I-A)$, i.e. $\Upsilon_0\Upsilon(\lambda)f_{\partial}=f_{\partial}$ for $f_{\partial} \in L^1_{\partial}$; \item
the operator $\Upsilon\colon \mathcal{D} \to L^1_{\partial}$ is positive and there is $\omega>0$ such that $\|\Upsilon \Upsilon(\lambda)\|<1$ for $\lambda>\omega$;
\item the operator $A_0=A\big|_{\mathcal D(A_0)} $, where $\mathcal{D}(A_0)=\{f\in \mathcal{D}\colon \Upsilon_0f=0\}$,
generates a positive $C_0$-semigroup on $L^1$; \item $\int_{\Gamma} Af(x)\,m(dx)\le \int_{\Gamma_{\partial}} \Upsilon_0 f(x_\partial)\,m_{\partial}(dx_\partial)$ for $f\in \mathcal{D}_+=\{f\in \mathcal{D}\colon f\ge 0\}$. \end{enumerate} Then $A$ with the domain $\mathcal{D}(A)=\{f\in \mathcal{D}\colon \Upsilon_0 f=\Upsilon f\} $ generates a positive semigroup on $L^1$. \end{theorem}
\begin{proof}[Proof of Proposition~$\ref{D(A)}$] First we translate our notation to that from Theorem~\ref{MTK}. Let $\Gamma=X$, $\Gamma_{\partial}=[\underline x_b,\overline x_b]$, $L^1=E$, $L_{\partial}^1=L^1[\underline x_b,\overline x_b]$, $\Upsilon_0=\mathcal T$, $\Upsilon =\mathcal P$, $Af=-\frac{\partial f}{\partial a}$ and $\mathcal D=\Big\{f\in E\colon\,\, \frac{\partial f}{\partial a}\in E\}$.
(1): Since $Af=-\frac{\partial f}{\partial a}$, the nullspace $\mathcal N(\lambda I-A)$ is the set of functions $f\in \mathcal D$ satisfying the equation
\[ \frac{\partial f}{\partial a} +\lambda f=0. \] Solving this equation we obtain that $f(x_b,a)=f(x_b,0)e^{-\lambda a}$. Thus the operator $\Upsilon_0$ restricted to $\mathcal N(\lambda I-A)$ is invertible and the inverse operator $\Upsilon(\lambda)\colon L^1[\underline x_b,\overline x_b] \to \mathcal N(\lambda I-A)$ given by $\Upsilon(\lambda)f(x_b,a) =f(x_b)e^{-\lambda a}$ is positive.
(2): Since $\Upsilon=\mathcal P$ we check that $\|\mathcal P\Upsilon(\lambda)\|<1$ for $\lambda>\omega=\ln2/\underline a$. Take $f\in L^1[\underline x_b,\overline x_b]$, $f\ge 0$, and let $\Theta_{\lambda}(x_b,a)=2e^{-\lambda a}q(x_b,a)$. Then \begin{align*} \int_{\underline x_b}^{\overline x_b}(\mathcal P\Upsilon(\lambda)f)(x_b)\,dx_b&=\int_{\underline x_b}^{\overline x_b}\int_0^{\infty} P_a (f(\cdot)\Theta_{\lambda}(\cdot,a))(x_b)\,da\,dx_b\\ &\le \int_{\underline x_b}^{\overline x_b}\int_0^{\infty}f(x_b)\Theta_{\lambda}(x_b,a)\,da\,dx_b. \end{align*} Since \[ \int_0^{\infty}\Theta_{\lambda}(x_b,a)\,da=\int_{\underline a}^{\overline a}2e^{-\lambda a}q(x_b,a)\,da\le 2e^{-\lambda \underline a}<1 \]
for $\lambda>\omega=\ln2/\underline a$, we have $\|\mathcal P\Upsilon(\lambda)\|<1$ for $\lambda>\omega$.
(3): The operator $A_0$ generates a positive $C_0$-semigroup $\{T_0(t)\}_{t\ge 0}$ on $E$ given by \[ T_0(t)f(x_b,a)= \begin{cases} f(x_b,a-t)\quad\textrm{for $a> t$,}\\ 0 \quad\textrm{for $a<t$}. \end{cases} \]
(4): If $f\in \mathcal{D}_+$ then, since $f\ge 0$ and hence the trace of $f$ at $a=\overline a(x_b)$ is nonnegative, \[ \int_X Af(x_b,a)\,dx_b\,da =-\int_X\frac{\partial f}{\partial a}(x_b,a) \,da \,dx_b \le\int_{\underline x_b}^{\overline x_b}\mathcal Tf(x_b)\,dx_b. \] Since $\Upsilon_0=\mathcal T$, $\Gamma=X$, and $\Gamma_{\partial}=[\underline x_b,\overline x_b]$ we have \[ \int_X Af(x_b,a)\,dx_b\,da-\int_{\underline x_b}^{\overline x_b} \Upsilon_0 f(x_b)\,dx_b\le 0.\qedhere \] \end{proof}
\begin{remark} \label{aler-proof} It is not difficult to check that the resolvent $R(\lambda,\mathcal A)=(\lambda I-\mathcal A)^{-1}$ exists for sufficiently large $\lambda>0$ and it is given by the formula \begin{equation} \label{R(l,a)} R(\lambda,\mathcal A)=(I-\mathcal P_{\lambda})^{-1}R(\lambda,A_0), \end{equation} where
\[ \mathcal P_{\lambda}f(x_b,a)=e^{-\lambda a}\mathcal Pf(x_b), \quad R(\lambda,A_0)f(x_b,a)=\int_0^af(x_b,r)e^{\lambda(r-a)}\,dr \] for $f\in E$. Another proof of Proposition~$\ref{D(A)}$ can be done directly using formula \eqref{R(l,a)} and the Hille--Yosida theorem. \end{remark}
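For the reader's convenience we sketch why the inverse in \eqref{R(l,a)} exists for large $\lambda$ (this is only a rough sketch, with non-optimal constants, and it assumes that $q$ is bounded on $X$). Using, as in step (2) of the proof above, that the operators $P_a$ do not increase the $L^1$ norm, one gets
\[
\|\mathcal P_{\lambda}f\|_E\le\frac1{\lambda}\,\|\mathcal Pf\|_{L^1[\underline x_b,\overline x_b]}\le \frac{2\sup_X q}{\lambda}\,\|f\|_E ,
\]
so $\|\mathcal P_{\lambda}\|<1$ for $\lambda>2\sup_X q$, and the inverse in \eqref{R(l,a)} can then be expanded in a Neumann series,
\[
R(\lambda,\mathcal A)f=\sum_{n=0}^{\infty}\mathcal P_{\lambda}^{\,n}R(\lambda,A_0)f\quad\textrm{for $f\in E$,}
\]
which is the form in which the Hille--Yosida estimates mentioned above can be checked term by term.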
\begin{remark} \label{classical-solution} If $z_0\in \mathcal D(\mathcal A)$ and $z(t)=T(t)z_0$, then $z(t)\in \mathcal D(\mathcal A)$, $z'(t)$ exists and $z'(t)=\mathcal Az(t)$. Assume that the function $z_0$ and the derivative $\dfrac{\partial z_0}{\partial a}$ are continuous bounded functions and the consistency condition $z_0(x_b,0)=\mathcal Pz_0(x_b)$ holds. Then the problem (\ref{eq1-n})--(\ref{eq3-n}) has a unique classical solution. By the \textit{classical solution} we understand a continuous function $z$, which has continuous derivatives $\dfrac{\partial z}{\partial a}$ and $\dfrac{\partial z}{\partial t}$ outside the set $\mathcal Z=\{(t,x_b,a)\colon a=t,\,(x_b,a)\in X\}$, $z$ satisfies (\ref{eq1-n}) outside $\mathcal Z$, and $z$ satisfies conditions (\ref{eq2-n})--(\ref{eq3-n}). In this case $z_0\in \mathcal D(\mathcal A)$ and $z(t,x_b,a)= T(t)z_0(x_b,a)$, i.e. classical and ``semigroup" solutions are identical. \end{remark}
\section{Eigenvectors of $\mathcal A$ and $\mathcal A^*$}
\label{s:eigen}
Our aim is to study the long-time behaviour of the semigroup $\{T(t)\}_{t\ge 0}$. The strategy is as follows. First we check that the adjoint operator $\mathcal A^*$ of $\mathcal A$ has a positive eigenvector $v=v(x_b,a)$ corresponding to some positive eigenvalue $\lambda$. Then we introduce the semigroup $\{P(t)\}_{t\ge 0}$ given by $P(t)f=e^{-\lambda t}T(t)f$ defined on the space $E_1=L^1(X,\mathcal B(X),\mu)$ with the measure $\mu$ given by $d\mu=v\, d\ell$. Then we prove that the semigroup $\{P(t)\}_{t\ge 0}$ has an invariant density $f_i$ and that $\lim_{t\to\infty} P(t)f=f_i$ for each density $f$. Finally, we translate this result back into a statement about the semigroup $\{T(t)\}_{t\ge 0}$.
We first study some properties of the adjoint operator of $\mathcal A$. Denote by $H$ the operator $H\colon E\to L^1[\underline x_b,\overline x_b]$ defined by $Hf(x_b)=\mathcal Pf(x_b)$. Then the operator $H^*\colon L^{\infty}[\underline x_b,\overline x_b] \to E^*$ is given by \begin{equation} \label{H*} H^*f(x_b,a)=2q(x_b,a)P^*_af(x_b)=2q(x_b,a)f(S_a(x_b)). \end{equation} It should be noted that we omit in the last product the factor $\mathbf 1_{[x_a,\overline x_b]}(x_b)$ because $q(x_b,a)=0$ for $x_b\le x_a$.
\begin{lemma} \label{eigA*} Let \begin{equation*}
\mathcal D^{\odot}=\{f\in C(X),\,\,\, \frac{\partial f}{\partial a}\in C(X),\,\,\,f(x_b,\overline a(x_b))=0,\,\,\,\frac{\partial f}{\partial a}(x_b,\overline a(x_b))=0\}. \end{equation*} Then $\mathcal D^\odot\subset \mathcal D(\mathcal A^*)$ and \begin{equation} \label{eigenvec-A} \mathcal A^*f=\frac{\partial f}{\partial a}+H^*{\tilde f}\quad\textrm{for $f\in \mathcal D^{\odot}$}, \end{equation} where $\tilde f(x_b)=f(x_b,0)$. \end{lemma} \begin{proof} If $f\in \mathcal D^{\odot}$ and $\varphi \in \mathcal D(\mathcal A)$ then \[ \begin{aligned} \langle f,\mathcal A\varphi\rangle &=-\int_{\underline x_b}^{\overline x_b}\int_0^{\overline a(x_b)} f(x_b,a)\frac{\partial \varphi}{\partial a}(x_b,a)\,da\,dx_b\\ &=\int_{\underline x_b}^{\overline x_b} f(x_b,0)\varphi(x_b,0)\,dx_b+ \Big\langle \frac{\partial f}{\partial a},\varphi\Big\rangle \\ &=\int_{\underline x_b}^{\overline x_b} f(x_b,0)(H\varphi)(x_b)\,dx_b+ \Big\langle \frac{\partial f}{\partial a},\varphi\Big\rangle \\ &=\langle H^*\tilde f,\varphi\rangle+ \Big\langle \frac{\partial f}{\partial a},\varphi\Big\rangle= \Big\langle \frac{\partial f}{\partial a}+ H^*\tilde f,\varphi\Big\rangle. \end{aligned} \] Thus $\mathcal D^{\odot}\subset \mathcal D(\mathcal A^*)$ and \eqref{eigenvec-A} holds. \end{proof}
Now we will check that the operator $\mathcal A^*$ has a positive eigenvector $v\in \mathcal D^{\odot}$ for some eigenvalue $\lambda >0$. Let $\mathcal A^*v=\lambda v$. From (\ref{eigenvec-A}) it follows that \begin{equation} \label{H*2} \lambda v -\frac{\partial v}{\partial a}=H^*{\tilde v}. \end{equation} The problem is that the adjoint semigroup $\{T^*(t)\}_{t\ge 0}$ is not strongly continuous. Instead of this semigroup we may use the sun dual semigroup, but it will be more convenient to consider a slight modification of it. Consider a semigroup $\{T^{\odot}(t)\}_{t\ge 0}$ defined on the space \[ \widetilde C(X)=\{f\in C(X)\colon\,\,\, f(x_b,\overline a(x_b))=0\} \] with the infinitesimal generator $\mathcal A^{\odot}$ with the domain $\mathcal D(\mathcal A^{\odot})=\mathcal D^{\odot}$ and given by the same formula as $\mathcal A^*$. Let $\mathcal A^{\odot}_0f=\frac{\partial f}{\partial a}$ and $\mathcal D(\mathcal A^{\odot}_0)=\mathcal D(\mathcal A^{\odot})$. Then $\mathcal A_0^{\odot}$ is the infinitesimal generator of a $C_0$-semigroup $\{T^{\odot}_0(t)\}_{t\ge 0}$ on the space $\widetilde C(X)$ given by the formula \[
T^{\odot}_0(t)f(x_b,a)=\begin{cases} f(x_b,a+t)&\textrm{for $a\le \overline a(x_b)-t$},\\ 0&\textrm{for $a> \overline a(x_b)-t$}. \end{cases} \] Let $H^{\odot}\colon C[\underline x_b,\overline x_b]\to \widetilde C(X)$ be given by $H^{\odot}f(x_b,a)=2q(x_b,a)f(S_a(x_b))$. Then $\mathcal A^{\odot}f=\mathcal A^{\odot}_0f+H^{\odot}\tilde f$ for $f\in \mathcal D(\mathcal A^{\odot})$. Denote by $R(\lambda,\mathcal A^{\odot}_0)$ the resolvent of the operator $\mathcal A^{\odot}_0$.
\begin{lemma} \label{K-lambda} Let $K_{\lambda}\colon C[\underline x_b,\overline x_b]\to C[\underline x_b,\overline x_b]$, $\lambda\ge 0$, be the integral operator given by \begin{equation} \label{eigenvec-A4} K_{\lambda}\tilde v(x_b)=\int_{\underline a(x_b)}^{\overline a(x_b)}2e^{-\lambda a}q(x_b,a)\tilde v(S_a(x_b))\,da. \end{equation} If $\tilde v$ is a function such that $K_{\lambda}\tilde v=\tilde v$, then the function \begin{equation} \label{eigenvec-A2} v(x_b,a)=\int_a^{\infty}H^{\odot}\tilde v(x_b,s)e^{-\lambda(s-a)}\,ds. \end{equation} satisfies {\rm(\ref{H*2})} and $v\in \mathcal D(\mathcal A^{\odot})$. \end{lemma} \begin{proof} If $v\in \widetilde C(X)$ satisfies the equation \begin{equation} \label{H*3} v=R(\lambda,\mathcal A^{\odot}_0)H^{\odot}{\tilde v} \end{equation} then $v$ also satisfies (\ref{H*2}). Since $R(\lambda,\mathcal A^{\odot}_0)f=\int_0^{\infty} e^{-\lambda s}T^{\odot}_0(s)f\,ds$ we have \[ R(\lambda,\mathcal A^{\odot}_0)f(x_b,a) =\int_a^{\infty}f(x_b,s)e^{-\lambda(s-a)}\,ds. \] Now (\ref{H*3}) can be written as the integral equation \eqref{eigenvec-A2}. Observe that in order to find $v(x_b,a)$ it is enough to solve (\ref{eigenvec-A2}) for $a=0$. Equation (\ref{eigenvec-A2}) for $a=0$ takes the form \begin{equation*}
v(x_b,0)=\int_0^{\infty}H^{\odot}\tilde v(x_b,s) e^{-\lambda s}\,ds. \end{equation*} In the above formula we replace $s$ by $a$ and apply (\ref{H*}). Then \[ v(x_b,0)=\int_{\underline a(x_b)}^{\overline a(x_b)}2e^{-\lambda a}q(x_b,a)v(S_a(x_b),0)\,da. \] Thus if $K_{\lambda}\tilde v=\tilde v$, then $v$ given by \eqref{eigenvec-A2} satisfies \eqref{H*2}. Since $v$ belongs to the range of resolvent $R(\lambda,\mathcal A^{\odot}_0)$, we have
$v\in \mathcal D(\mathcal A^{\odot}_0)=\mathcal D(\mathcal A^{\odot})$. \end{proof}
We want to prove that there exists a constant $\lambda>0$ and a positive function $\tilde v\in C[\underline x_b,\overline x_b]$ such that $K_{\lambda}\tilde v=\tilde v$. We split the proof of this fact into two lemmas.
\begin{lemma} \label{lemma-eigenfunction1} For each $\lambda\ge 0$ the spectral radius $r(K_{\lambda})$ of $K_{\lambda}$ is a positive, isolated and simple eigenvalue of $K_{\lambda}$ associated with a positive eigenfunction $\tilde v_{\lambda}\in C[\underline x_b,\overline x_b]$. \end{lemma} \begin{proof} In order to check this property we write the operator $K_{\lambda}$ in the standard integral form. We substitute $y(a)=S_a(x_b)$ in (\ref{eigenvec-A4}). Then $da=2\,dy/g(2y)$ and we find that the operator $K_{\lambda}$ can be written in the form \[ K_{\lambda}\tilde v(x_b)= \int_{\underline x_b}^{\overline x_b} k_{\lambda}(x_b,y)\tilde v(y)\,dy,\quad k_{\lambda}(x_b,y)=\frac{4e^{-\lambda a(y;x_b)}q(x_b,a(y;x_b))}{g(2y)}, \] where \[ a(y;x_b)=\int_{x_b}^{2y}\frac{dr}{g(r)}. \] The expression $a(y;x_b)$ has the following interpretation. If $x_b$ is the initial size of a mother cell and it splits at the age $a(y;x_b)$, then $y$ is the initial size of its daughter cells. Since the function $g$ is continuous and positive, the kernel $k_{\lambda}$ is continuous and nonnegative. Moreover, $k_{\lambda}(x_b,x_b)>0$ for all $x_b\in (\underline x_b,\overline x_b)$. Indeed, $k_{\lambda}(x_b,x_b)>0$ if and only if $q(x_b,a(x_b;x_b))>0$. The last inequality follows from (A6)
(see Fig.~\ref{r:cell-cyc2}). This implies that the spectral radius $r(K_{\lambda})$ is a positive, isolated and simple eigenvalue of $K_{\lambda}$ associated with an eigenfunction $\tilde v_{\lambda}\in C[\underline x_b,\overline x_b]$ such that $\tilde v_{\lambda}(x_b)>0$ for $x_b\in (\underline x_b,\overline x_b)$ (see comments after the proof of Theorem 7.4 of \cite{AL}). Observe that $\tilde v_{\lambda}$ is also positive at $\underline x_b$ and $\overline x_b$, because $\tilde v_{\lambda}(y)>0$ for $y\in(\underline x_b,\overline x_b)$ and the functions $y\mapsto q(\underline x_b,a(y;\underline x_b))$ and $y\mapsto q(\overline x_b,a(y;\overline x_b))$ are positive on some nontrivial intervals. \end{proof}
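To illustrate the integral form of $K_{\lambda}$ obtained above, the following small numerical sketch (it is not part of the proof, and all model ingredients in it are invented for the example) discretises the kernel $k_{\lambda}$ for a hypothetical ``sizer''-type specification: the growth rate \verb|g|, the size range \verb|[xb_lo, xb_hi]| and the division-size density \verb|h_d| are chosen arbitrarily, and the size at division is drawn from \verb|h_d| independently of the initial size. In that special case $q(x_b,a(y;x_b))=h_d(2y)g(2y)$, so the kernel simplifies to $k_{\lambda}(x_b,y)=4e^{-\lambda a(y;x_b)}h_d(2y)$; the script checks numerically that $K_0\mathbf 1=2\cdot\mathbf 1$, a fact used in the next lemma.
\begin{verbatim}
import numpy as np

# Invented ingredients, for illustration only:
xb_lo, xb_hi = 1.0, 1.8                       # admissible initial sizes
g = lambda x: x * (1.0 + 0.2 * np.sin(x))     # growth rate, positive on the size range

def h_d(xd):                                  # division-size density on [2 xb_lo, 2 xb_hi]
    lo, hi = 2 * xb_lo, 2 * xb_hi
    return np.where((xd > lo) & (xd < hi), 1.0 / (hi - lo), 0.0)

def age_at_size(y, xb, n=300):
    """a(y; x_b) = int_{x_b}^{2y} dr / g(r), computed with the trapezoid rule."""
    r = np.linspace(xb, 2.0 * y, n)
    return np.trapz(1.0 / g(r), r)

def kernel(lmbda, xb, y):                     # k_lambda(x_b, y) in the "sizer" case
    return 4.0 * np.exp(-lmbda * age_at_size(y, xb)) * h_d(2.0 * y)

xs = np.linspace(xb_lo, xb_hi, 200)
K0 = np.array([[kernel(0.0, xb, y) for y in xs] for xb in xs])
print(np.trapz(K0, xs, axis=1))               # every row integral is close to 2
\end{verbatim}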
\begin{lemma} \label{lemma-eigenfunction2} There exists $\lambda>0$ such that $r(K_{\lambda})=1$. \end{lemma} \begin{proof} First we check that the function $\lambda\mapsto K_{\lambda}$ is continuous with respect to the operator norm. Indeed, let $\lambda_1\le \lambda_2$ and $T_{\lambda_1,\lambda_2}=K_{\lambda_1}-K_{\lambda_2}$. Since $\int q(x_b,a)\,da=1$ and $e^{-\lambda_1 a}-e^{-\lambda_2 a}\le \overline a(\lambda_2-\lambda_1)$, we have \[
\|T_{\lambda_1,\lambda_2}\|\le \max\limits_{\underline x_b\le x_b\le\overline x_b} \int_{\underline a(x_b)}^{\overline a(x_b)}2\big( e^{-\lambda_1 a}-e^{-\lambda_2 a}\big) q(x_b,a)\,da\le 2\overline a(\lambda_2-\lambda_1). \] Since the function $k_{\lambda}$ is continuous, the operator $K_{\lambda}\colon C[\underline x_b,\overline x_b]\to C[\underline x_b,\overline x_b]$ is compact for each $\lambda\ge 0$. The spectral radius mapping restricted to compact linear bounded operators on any Banach space is continuous with respect to the operator norm (see e.g. Theorem~2.1 of \cite{Degla}). Thus the function
$\lambda\mapsto r(K_{\lambda})$ is continuous. Observe that $r(K_0)=2$, because $\|K_0\|=2$ and $K_0\mathbf 1_{[\underline x_b,\overline x_b]}=2\cdot \mathbf 1_{[\underline x_b,\overline x_b]}$. Now, let $\bar \lambda>0$ be a constant such that $e^{-\bar\lambda \underline a}\le 1/4$. Then \[
K_{\bar\lambda} \tilde v(x_b)\le \frac12 \int_{\underline a(x_b)}^{\overline a(x_b)}q(x_b,a)\tilde v(S_a(x_b))\,da\le \frac12\|\tilde v\| \quad \textrm{for $\tilde v\ge 0$,} \]
and consequently $r(K_{\bar\lambda}) \le \|K_{\bar\lambda}\|\le \frac12$. From the continuity of the function $\lambda\mapsto r(K_{\lambda})$ it follows that there exists a $\lambda\in (0,\bar\lambda)$ such that $r(K_{\lambda})=1$. \end{proof} Now we apply formula (\ref{eigenvec-A2}) to find a nonnegative eigenfunction $v(x_b,a)$ of the operator $\mathcal A^*$.
\begin{proposition} \label{prop-eig1} The operator $\mathcal A^*$ has an eigenvalue $\lambda>0$ and a corresponding eigenfunction $v$ such that \begin{equation} \label{eigenfun-A*} c_1\Phi(x_b,a)\le v(x_b,a)\le c_2 \Phi(x_b,a) \end{equation} for some positive constants $c_1$ and $c_2$ independent of $x_b$ and $a$. \end{proposition}
\begin{proof} According to Lemma~\ref{lemma-eigenfunction2} there exists $\lambda>0$ such that $r(K_{\lambda})=1$. Let $\tilde v_{\lambda}$ be a positive fixed point of $K_{\lambda}$. Then from formulae (\ref{H*}) and (\ref{eigenvec-A2}) it follows that \[ v(x_b,a)=\int_a^{\infty} 2q(x_b,s)\tilde v_{\lambda}(S_s(x_b))e^{-\lambda(s-a)}\,ds \] is the eigenfunction of the operator $\mathcal A^*$ corresponding to $\lambda$. Since the functions $\tilde v_{\lambda}(S_s(x_b))$ and $e^{-\lambda(s-a)}$ are bounded above and bounded away from zero, there exist positive constants $c_1$ and $c_2$ such that \[ c_1\int_a^{\infty} q(x_b,s)\,ds \le v(x_b,a)\le c_2\int_a^{\infty} q(x_b,s)\,ds \] and (\ref{eigenfun-A*}) follows from the definition of $\Phi$. \end{proof}
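Continuing the hypothetical numerical example sketched after Lemma~\ref{lemma-eigenfunction1} (the growth rate, size range and division rule below are invented and are not taken from the paper), the eigenvalue $\lambda$ with $r(K_{\lambda})=1$ can be approximated by discretising $K_{\lambda}$ on a grid and bisecting in $\lambda$, using that the entries of the discretised kernel, and hence the spectral radius, decrease in $\lambda$:
\begin{verbatim}
import numpy as np

# Same hypothetical "sizer" specification as in the previous sketch.
xb_lo, xb_hi = 1.0, 1.8
g = lambda x: x * (1.0 + 0.2 * np.sin(x))

def h_d(xd):
    lo, hi = 2 * xb_lo, 2 * xb_hi
    return np.where((xd > lo) & (xd < hi), 1.0 / (hi - lo), 0.0)

def age_at_size(y, xb, n=300):                # a(y; x_b) = int_{x_b}^{2y} dr / g(r)
    r = np.linspace(xb, 2.0 * y, n)
    return np.trapz(1.0 / g(r), r)

m = 150
xs = np.linspace(xb_lo, xb_hi, m)
w = np.full(m, xs[1] - xs[0]); w[0] *= 0.5; w[-1] *= 0.5        # trapezoid weights
A = np.array([[age_at_size(y, xb) for y in xs] for xb in xs])   # ages a(y; x_b)
B = 4.0 * h_d(2.0 * xs)[None, :] * w[None, :]                   # kernel part without e^{-lambda a}

def r(lmbda):                                 # spectral radius of the discretised K_lambda
    return np.max(np.abs(np.linalg.eigvals(np.exp(-lmbda * A) * B)))

lo, hi = 0.0, 5.0                             # here r(K_0) ~ 2 > 1 and r(K_5) < 1
for _ in range(50):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if r(mid) > 1.0 else (lo, mid)
print("lambda with r(K_lambda) = 1:", 0.5 * (lo + hi))
\end{verbatim}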
From now on $\lambda$ denotes the eigenvalue from Proposition~\ref{prop-eig1}. \begin{lemma} \label{l:opJ} A function $f_i(x_b,a)$ is an eigenvector of $\mathcal A$ corresponding to $\lambda$ if and only if \begin{equation} \label{A-Eigenv1} f_i(x_b,a)=e^{-\lambda a}f_i(x_b,0)\quad\textrm{for}\quad a\le\overline a(x_b) \end{equation} and the function $f(x_b)=f_i(x_b,0)$ satisfies the equation $Jf=f$, where the operator $J\colon L^1[\underline x_b,\overline x_b]\to L^1[\underline x_b,\overline x_b]$ is given by the formula \begin{equation} \label{J-eig1} Jf(x_b)=\frac{4}{g(2x_b)}\int_{\underline x_b}^{2x_b}e^{-\lambda a(x_b;y)}q(y,a(x_b;y))f(y)\,dy. \end{equation} \end{lemma} \begin{proof} If a function $f_i(x_b,a)$ is an eigenvector of $\mathcal A$ corresponding to $\lambda$, then the function $z(t,x_b,a)=e^{\lambda t} f_i(x_b,a)$ is a solution of (\ref{eq1-n})--(\ref{eq3-n}). Substituting $z=e^{\lambda t} f_i$ into (\ref{eq1-n})--(\ref{eq2-n}) we obtain \begin{equation} \label{J-eig1a} \lambda f_i(x_b,a)+\frac{\partial f_i}{\partial a}(x_b,a)=0,\quad f_i(x_b,0)=\mathcal Pf_i(x_b). \end{equation} From the first equation in \eqref{J-eig1a} it follows that the function $(x_b,a)\mapsto e^{\lambda a} f_i(x_b,a)$ has zero partial derivative with respect to $a$. Thus $e^{\lambda a} f_i(x_b,a)=f_i(x_b,0)$, where $f_i(x_b,0)$ is the value of the trace operator $\mathcal T$ on $f_i$. Therefore \eqref{A-Eigenv1} holds and the function $x_b\mapsto f_i(x_b,0)$ satisfies the following integral equation \begin{equation} \label{A-Eigenv2} f_i(x_b,0)=\int_0^{\infty}2e^{-\lambda a}\Big(P_a\big(q(\cdot,a)f_i(\cdot,0)\big)\Big)(x_b)\,da. \end{equation} Since \[ \Big(P_a\big(q(\cdot,a)f(\cdot)\big)\Big)(x_b) =\frac{d}{dx_b}\big(S_a^{-1}(x_b)\big)q\big(S_a^{-1}(x_b),a\big)f\big(S_a^{-1}(x_b)\big) =\frac{2g(S_a^{-1}(x_b))}{g(2x_b)}\,q\big(S_a^{-1}(x_b),a\big)f\big(S_a^{-1}(x_b)\big),
\] the substitution $y=S_a^{-1}(x_b)$ (for which $da=dy/g(y)$ after reversing the orientation) in \eqref{A-Eigenv2} gives \eqref{J-eig1}. \end{proof} From \eqref{J-eig1} it follows that $J$ is an integral operator with a continuous kernel. In particular $Jf\in C[\underline x_b,\overline x_b]$ for $f\in L^1[\underline x_b,\overline x_b]$ and the operator $J$ restricted to $C[\underline x_b,\overline x_b]$ is continuous and positive.
The operators $J$ and $K_{\lambda}$ are adjoint, i.e. \begin{equation*}
\int_{\underline x_b}^{\overline x_b} g(x_b)Jf(x_b)\,dx_b=\int_{\underline x_b}^{\overline x_b} K_{\lambda}g(x_b)f(x_b)\,dx_b \end{equation*} for $f\in L^1[\underline x_b,\overline x_b]$, $g\in L^{\infty}[\underline x_b,\overline x_b]$.
\begin{lemma} \label{lemma-eigenfunction-J1} There exists $\tilde f_i\in C[\underline x_b,\overline x_b]$ such that $J\tilde f_i= \tilde f_i$ and $\tilde f_i(x_b)>0$ for $x_b\in (\underline x_b,\overline x_b)$. The function $\tilde f_i$ is the unique, up to a multiplicative constant, fixed point of $J$. \end{lemma} \begin{proof} Using the same arguments as in Lemma~\ref{lemma-eigenfunction1} we prove that there exists an eigenfunction $\tilde f_i\in C[\underline x_b,\overline x_b]$ of $J$, corresponding to the eigenvalue $r(J)$, such that $\tilde f_i(x_b)>0$ for $x_b\in (\underline x_b,\overline x_b)$. This eigenfunction is indeed a fixed point of $J$: since $\langle J\tilde f_i,\tilde v_{\lambda}\rangle=\langle \tilde f_i,K_{\lambda}\tilde v_{\lambda}\rangle =\langle \tilde f_i,\tilde v_{\lambda}\rangle$ and $\langle J\tilde f_i,\tilde v_{\lambda}\rangle=r(J)\langle \tilde f_i,\tilde v_{\lambda}\rangle$ with $\langle \tilde f_i,\tilde v_{\lambda}\rangle>0$, we get $r(J)=1$. Since $r=1$ is an isolated and simple eigenvalue of $J$, the function $\tilde f_i$ is the unique, up to a multiplicative constant, fixed point of~$J$. \end{proof}
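A positive fixed point of the discretised operator $J$ can be approximated by power iteration. The sketch below again uses the invented ``sizer'' example from the previous sketches (under that specification the kernel of $J$ in \eqref{J-eig1} reduces to $4e^{-\lambda a(x_b;y)}h_d(2x_b)$) and a placeholder value of $\lambda$; when $\lambda$ is the value with $r(K_{\lambda})=1$ found earlier, the printed eigenvalue is close to $1$ and the vector approximates $\tilde f_i$ up to a multiplicative constant. Consistently with Remark~\ref{positivivty-boundary} below, the computed eigenvector may vanish at the endpoints.
\begin{verbatim}
import numpy as np

xb_lo, xb_hi = 1.0, 1.8
g = lambda x: x * (1.0 + 0.2 * np.sin(x))
lam = 1.0        # placeholder; use the value with r(K_lambda) = 1 found above

def h_d(xd):
    lo, hi = 2 * xb_lo, 2 * xb_hi
    return np.where((xd > lo) & (xd < hi), 1.0 / (hi - lo), 0.0)

def age(target, start, n=300):   # time needed to grow from `start` to size 2*target
    r = np.linspace(start, 2.0 * target, n)
    return np.trapz(1.0 / g(r), r)

m = 150
xs = np.linspace(xb_lo, xb_hi, m)
w = np.full(m, xs[1] - xs[0]); w[0] *= 0.5; w[-1] *= 0.5
# Discretised J: row = daughter initial size x_b, column = mother initial size y.
J = np.array([[4.0 * np.exp(-lam * age(xb, y)) * h_d(2.0 * xb) * wy
               for y, wy in zip(xs, w)] for xb in xs])

f = np.ones(m)                   # power iteration
for _ in range(200):
    f = J @ f
    f /= np.trapz(f, xs)
print("principal eigenvalue of J:", np.trapz(J @ f, xs))  # ~1 at the Malthusian lambda
\end{verbatim}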
\begin{remark} \label{positivivty-boundary} It is generally not true that $\tilde f_i(\underline x_b)>0$ and $\tilde f_i(\overline x_b)>0$. If we assume additionally that $S_{\underline a(x_b)}(x_b)=\underline x_b$ for $x_b\in [\underline x_b,\underline x_b+\delta]$, $\delta>0$, then $\tilde f_i(\underline x_b)>0$ because a mother cell with the initial size $x_b\in [\underline x_b,\underline x_b+\delta]$ can have a daughter cell with the initial size $\underline x_b$. Analogously, if $S_{\overline a(x_b)}(x_b)=\overline x_b$ for $x_b\in [\overline x_b-\delta,\overline x_b]$, $\delta>0$, then $\tilde f_i(\overline x_b)>0$. \end{remark}
From Lemma~\ref{lemma-eigenfunction-J1} and from formulae (\ref{A-Eigenv1}) and (\ref{A-Eigenv2}) we have \begin{proposition} \label{prop-eig2} Let $\tilde f_i$ be the function from Lemma~$\ref{lemma-eigenfunction-J1}$. If $f_i(x_b,a)=e^{-\lambda a}\tilde f_i(x_b)$, then $\mathcal Af_i=\lambda f_i$. The function $f_i$ is the unique, up to a multiplicative constant, eigenfunction of $\mathcal A$ corresponding to the eigenvalue $\lambda$.
\end{proposition}
\section{Asymptotic behaviour}
\label{s:asyp-beh} We precede the formulation of the main result of this section by some definitions and a general theorem concerning the asymptotic stability of stochastic semigroups.
Let a triple $(X,\Sigma,m)$ be a $\sigma$-finite measure space. Denote by $D$ the subset of the space $L^1=L^1(X,\Sigma,m)$ which contains all densities \[
D=\{f\in \, L^1\colon \,f\ge 0,\,\, \|f\|=1\}. \] A $C_0$-semigroup $\{P(t)\}_{t\ge 0}$ of linear operators on $L^1$ is called a \textit{stochastic semigroup} or a \textit{Markov semigroup} if $P(t)(D)\subseteq D$ for each $t\ge 0$.
A stochastic semigroup $\{P(t)\}_{t\ge 0}$ is {\it asymptotically stable} if there exists a density $f_i$ such that \begin{equation} \label{d:as}
\lim _{t\to\infty}\|P(t)f-f_i\|=0 \quad \text{for}\quad f\in D. \end{equation}
From (\ref{d:as}) it follows immediately that $f_i$ is {\it invariant\,} with respect to $\{P(t)\}_{t\ge 0}$, i.e. $P(t)f_i=f_i$ for each $t\ge 0$.
A stochastic semigroup $\{P(t)\}_{t\ge 0}$ is called {\it partially integral} if there exists a measurable function $k\colon (0,\infty)\times X\times X\to[0,\infty)$, called a {\it kernel}, such that \begin{equation*}
P(t)f(x)\ge\int_X k(t,x,y)f(y)\,m (dy) \end{equation*} for every density $f$ and \begin{equation*} \int_X\int_X k(t,x,y)\,m(dx)\,m(dy)>0 \end{equation*} for some $t>0$. The following result was proved in \cite{PR-jmaa2}.
\begin{theorem} \label{asym-th2} Let $\{P(t)\}_{t\ge 0}$ be a partially integral stochastic semigroup. Assume that the semigroup $\{P(t)\}_{t\ge 0}$ has a unique invariant density $f_i$. If $f_i>0$ a.e., then the semigroup $\{P(t)\}_{t\ge 0}$ is asymptotically stable. \end{theorem}
New results concerning positive operators on Banach lattices similar in spirit to Theorem~\ref{asym-th2} may be found in \cite{Gerlach-Gluck1,Martin-Gluck2}.
Investigation of the long-time behaviour of the semigroup $\{T(t)\}_{t\ge 0}$ can be reduced to the study of asymptotic stability of some stochastic semigroup. Let $\lambda$ and $v$ be the eigenvalue and the eigenfunction from Proposition~\ref{prop-eig1}. We define a semigroup $\{P(t)\}_{t\ge 0}$ as the extension of semigroup $\{e^{-\lambda t}T(t)\}_{t\ge 0}$ to the space $E_1=L^1(X,\mathcal B(X),\mu)$ with measure $\mu$ given by $d\mu=v\, d\ell$. Observe that we can indeed extend the semigroup $\{e^{-\lambda t}T(t)\}_{t\ge 0}$ to a stochastic semigroup on $E_1$. Since $\mathcal A^*v=\lambda v$, we have $T^*(t)v=e^{\lambda t} v$. If $f\in E$ then $P(t)f=e^{-\lambda t}T(t)f$ and \begin{align*} &\iint\limits_X P(t)f(x_b,a)\,\mu(dx_b,da)=\iint\limits_X e^{-\lambda t}T(t)f(x_b,a) v(x_b,a)\,dx_b\,da\\ &=\iint\limits_X f(x_b,a) e^{-\lambda t}T^*(t)v(x_b,a)\,dx_b\,da =\iint\limits_X f(x_b,a) v(x_b,a)\,dx_b\,da\\ &=\iint\limits_X f(x_b,a) \,\mu(dx_b,da). \end{align*} Since the function $v$ is bounded and positive almost everywhere, $E$ is dense in $E_1$. If $f\in E_1$, we choose a sequence $(f_n)$ from $E$ such that $f_n\to f$ in $E_1$ and define $P(t)f=\lim\limits_{n\to\infty} P(t)f_n$ in $E_1$. Since the operators $P(t)$ are positive and preserve the integral with respect to $\mu$, this extension is uniquely defined and $\{P(t)\}_{t\ge 0}$ is a stochastic semigroup on $E_1$.
In order to prove asymptotic stability of the semigroup $\{P(t)\}_{t\ge 0}$ we need to add an additional assumption concerning the function $g$:
\noindent (A7) there exists $x\in (\underline x_b,\overline x_b)$ such that $g(2x)\ne 2g(x)$.
We precede the formulation of a theorem on asymptotic stability of $\{P(t)\}_{t\ge 0}$ by the following lemma. \begin{lemma} \label{lemma-partially-integral} Assume {\rm (A1)--(A7)}. Then the semigroup $\{T(t)\}_{t\ge 0}$ is partially integral. \end{lemma} \begin{proof} Observe that the operator $T(t)$ has the kernel $k(t,x,y)$ if and only if the operator $T^*(t)$ has the kernel $k^*(t,x,y)=k(t,y,x)$. Thus, in order to prove that the semigroup $\{T(t)\}_{t\ge 0}$ is partially integral it is sufficient to check that the semigroup $\{T^{\odot}(t)\}_{t\ge 0}$ has the similar property. The semigroup $\{T^{\odot}(t)\}_{t\ge 0}$ is given by the \textit{Dyson-Phillips expansion} \begin{equation} \label{eq:dpf1b} T^{\odot}(t)f=\sum_{n=0}^\infty T^{\odot}_n(t)f, \end{equation} where \begin{equation*}
T^{\odot}_{n+1}f(t)= \int_0^tT^{\odot}_{0}(\tau)\mathcal HT^{\odot}_n(t-\tau)f\,d\tau, \quad n\ge 0, \end{equation*} where $\mathcal Hf(x_b,a)=2q(x_b,a)f(S_a(x_b),0)$ and $T^{\odot}_0(t)f(x_b,a)=f(x_b,a+t)$ for $a\le \overline a(x_b)-t$.
Since $\mathcal HT^{\odot}_0(t-\tau)f(x_b,a)= 2q(x_b,a)f(S_a(x_b),t-\tau)$, we have \begin{align*} T^{\odot}_1f(t)(x_b,a)& =\int_0^tT^{\odot}_{0}(\tau)\mathcal HT^{\odot}_0(t-\tau)f(x_b,a)\,d\tau\\ &=\int_0^t 2q(x_b,a+\tau)f(S_{a+\tau}(x_b),t-\tau) \,d\tau. \end{align*} Analogously, since \[ T^{\odot}_1(t-\tau_1)f(x_b,a)=\int_0^{t-\tau_1} 2q(x_b,a+\tau)f(S_{a+\tau}(x_b),t-\tau_1-\tau) \,d\tau, \] we have \[ \mathcal HT^{\odot}_1(t-\tau_1)f(x_b,a)=2q(x_b,a)\int_0^{t-\tau_1} 2q(S_a(x_b),\tau)f(S_{\tau}(S_a(x_b)),t-\tau_1-\tau) \,d\tau, \] and finally
\begin{align*} T^{\odot}_2f(t)(x_b,a)& =\int_0^tT^{\odot}_{1}(\tau_1)\mathcal HT^{\odot}_1(t-\tau_1)f(x_b,a)\,d\tau_1\\ &=\int_0^t 2q(x_b,a+\tau_1)\int_0^{t-\tau_1} 2q(S_{a+\tau_1}(x_b),\tau)\\ &\hspace{3cm}{}\cdot f(S_{\tau}(S_{a+\tau_1}(x_b)),t-\tau_1-\tau) \,d\tau \,d\tau_1. \end{align*} We substitute in the last integral $\tilde x=S_{\tau}(S_{a+\tau_1}(x_b))$ and $\tilde a=t-\tau_1-\tau$. Then \begin{align*} \frac{\partial \tilde x}{\partial \tau}&=\frac12g\big(S_{\tau}(S_{a+\tau_1}(x_b))\big), \\ \frac{\partial \tilde x}{\partial \tau_1}&=\frac12 \frac{g(\pi_{\tau}S_{a+\tau_1}(x_b))}{g(S_{a+\tau_1}(x_b))} \cdot \frac12 g\big(S_{a+\tau_1}(x_b)\big)= \frac14g(\pi_{\tau}S_{a+\tau_1}(x_b)),\\ \frac{\partial \tilde a}{\partial \tau}&= \frac{\partial \tilde a}{\partial \tau_1}=-1. \end{align*} Let $\mathcal J_{\tau,\tau_1}$ be the Jacobian matrix of the transformation $(x_b,a)\mapsto (\tilde x,\tilde a)$. Then \[ \det \mathcal J_{\tau,\tau_1}(x_b,a)=\frac14g(\pi_{\tau}S_{a+\tau_1}(x_b))-\frac12g\big(S_{\tau}(S_{a+\tau_1}(x_b))\big). \] According to (A7) there exists $x\in (\underline x_b,\overline x_b)$ such that $g(2x)\ne 2g(x)$. We find $x^1_b \in (\underline x_b,\overline x_b)$ and $\tau^0>0$ such that $q(x_b^1,\tau^0)>0$ and $S_{\tau^0}(x_b^1)=x$, i.e.
$x$ and $x_b^1$ are the initial sizes of daughter and mother cells.
Next we find $x^0_b \in (\underline x_b,\overline x_b)$, $a^0>0$, and $\tau_1^0>0$ such that $q(x_b^0,a^0+\tau_1^0)>0$ and $S_{a^0+\tau_1^0}(x_b^0)=x^1_b$. We also choose $t>0$ such that the point $(x,t-\tau^0-\tau_1^0)$ lies in the interior of the set $X$. Then we find a neighbourhood $\mathcal U$ of the point $(\tau^0,\tau_1^0,x_b^0,a^0)$ such that $\det \mathcal J_{\tau,\tau_1}(x_b,a)\ne 0$, $q(x_b,a+\tau_1)>0$, $q(S_{a+\tau_1}(x_b),\tau)>0$, $(S_{\tau}(S_{a+\tau_1}(x_b)),t-\tau-\tau_1)\in X$ for $(\tau,\tau_1,x_b,a)\in \mathcal U$. Thus there exist neighbourhoods $V_1$ and $V_2$ of the points $(x_b^0,a^0)$ and $(x,t-\tau_1^0-\tau^0)$ and there exist $\varepsilon>0$ and a nonnegative kernel $k(t,x_b,a,\tilde x,\tilde a)$ such that $k(t,x_b,a,\tilde x,\tilde a)\ge \varepsilon$ for $(x_b,a,\tilde x,\tilde a)\in V_1\times V_2$ and \[ T^{\odot}_2f(t)(x_b,a)\ge \iint\limits_{X} k(t,x_b,a,\tilde x,\tilde a)f(\tilde x,\tilde a) \,d\tilde x\,d\tilde a. \] From (\ref{eq:dpf1b}) it follows that the semigroups $\{T^{\odot}(t)\}_{t\ge 0}$ and $\{T(t)\}_{t\ge 0}$ are partially integral. \end{proof}
\begin{theorem} \label{asym-as-2} Assume {\rm (A1)--(A7)}. Then the semigroup $\{P(t)\}_{t\ge 0}$ is asymptotically stable. The eigenfunction of $\mathcal A$ from Proposition~$\ref{prop-eig2}$ is the invariant density of $\{P(t)\}_{t\ge 0}$. \end{theorem} \begin{proof} We need to check that the semigroup $\{P(t)\}_{t\ge 0}$ satisfies assumptions of Theorem~\ref{asym-th2}. Since the semigroup $\{T(t)\}_{t\ge 0}$ is partially integral and $P(t)f=e^{-\lambda t}T(t)f$ for $f\in E$, the same property has the semigroup $\{P(t)\}_{t\ge 0}$. According to Proposition~$\ref{prop-eig2}$ there exists a function $f_i$ such that $\mathcal Af_i=\lambda f_i$ and $f_i>0$ a.e. As $\mu(X)<\infty$ and $f_i$ is bounded, $f_i$ is integrable with respect to $\mu$. Since the eigenfunction is determined up to a multiplicative constant, we may assume that $\int_X f_i\,d\mu=1$. Also according to Proposition~$\ref{prop-eig2}$ the function $f_i$ is the unique invariant density of $\{P(t)\}_{t\ge 0}$. \end{proof} \begin{theorem} \label{th:long-time-u} For every $u_0\in E$ we have \begin{equation} \label{d:as4} \lim _{t\to\infty}e^{-\lambda t}U(t)u_0=\Phi f_i\iint\limits_X u_0(x_b,a)\Psi(x_b,a)v(x_b,a)\,dx_b\,da \quad \textrm{in $E$}. \end{equation} Moreover, $\Phi f_i$ and $\Psi v$ are eigenfunctions of the semigroups $\{U(t)\}_{t\ge 0}$ and $\{U^*(t)\}_{t\ge 0}$ corresponding to the eigenvalue $\lambda$. \end{theorem} \begin{proof} Condition of asymptotic stability of the semigroup $\{P(t)\}_{t\ge 0}$ can be written in the following equivalent form: for every $f\in E_1$ we have \begin{equation} \label{d:as1} \lim _{t\to\infty}P(t)f=f_i\iint\limits_X f(x_b,a)v(x_b,a)\,dx_b\,da \quad \textrm{in $E_1$}. \end{equation} We can extend the semigroup $\{T(t)\}_{t\ge 0}$ to a $C_0$-semigroup on $E_1$ setting $T(t)f=e^{\lambda t}P(t)f$ for $f\in E_1$. From (\ref{d:as1}) it follows that \begin{equation} \label{d:as2} \lim _{t\to\infty}e^{-\lambda t}T(t)f=f_i\iint\limits_X f(x_b,a)v(x_b,a)\,dx_b\,da \quad \textrm{in $E_1$}. \end{equation} Now we return to the problem (\ref{eq1})--(\ref{eq3}). We recall that after substitution $z(t,x_b,a)=u(t,x_b,a)\Psi(x_b,a)$ and $z_0(x_b,a)=u_0(x_b,a)\Psi(x_b,a)$ we have replaced this problem by the system (\ref{eq1-n})--(\ref{eq3-n}) and the semigroup $\{T(t)\}_{t\ge 0}$ describes the evolution of the solutions of this system. Since $z=u\Psi$ and $z_0=u_0\Psi$, we have $u(t)=\Phi T(t)(u_0\Psi)$ because $\Phi=1/\Psi$. Thus we can define a semigroup $\{U(t)\}_{t\ge 0}$ corresponding to (\ref{eq1})--(\ref{eq3}) by \begin{equation} \label{d:as-defU} U(t)u_0=\Phi T(t)(u_0\Psi). \end{equation} From inequalities (\ref{eigenfun-A*}) it follows that $u_0\Psi\in E_1$ if and only if $u_0\in E$ and $\{U(t)\}_{t\ge 0}$ is a $C_0$-semigroup on the space $E$. It should be noted that we consider solutions of (\ref{eq1})--(\ref{eq3}) for a wider class of initial conditions because we do not assume that $u_0$ satisfies inequality (\ref{def-u_0}). From (\ref{d:as2}) it follows that \begin{equation*}
\lim _{t\to\infty}e^{-\lambda t}\Psi U(t)u_0=f_i\iint\limits_X u_0(x_b,a)\Psi(x_b,a)v(x_b,a)\,dx_b\,da \quad \textrm{in $E_1$}. \end{equation*} Using again inequalities (\ref{eigenfun-A*}) we finally obtain \eqref{d:as4}. \end{proof}
Property (\ref{d:as4}) is called the asynchronous exponential growth of the semigroup $\{U(t)\}_{t\ge 0}$. Precisely, we say that a semigroup $\{U(t)\}_{t\ge 0}$ on a Banach space $\mathbb X$ has \textit{asynchronous} (or \textit{balanced}) \textit{exponential growth} if there exist $\lambda\in\mathbb C$, a nonzero $x_i\in \mathbb X$, and a nonzero linear functional $\alpha\colon \mathbb X\to \mathbb C$ such that \begin{equation*}
\lim_{t\to\infty}e^{-\lambda t}U(t)x=x_i\alpha(x)\quad\textrm{for $x\in\mathbb X$}. \end{equation*} It should be mentioned that one can find in literature, e.g. \cite{Webb-aeg}, a more general definition of asynchronous exponential growth, where it is only assumed that $e^{-\lambda t}U(t)x$ converges to a nonzero finite rank operator.
\section{Remarks}
\label{s:remarks} \subsection{Chemostat}
\label{ss:chemostat}
In Section~\ref{s:model} we have mentioned that if we consider an experiment in a chemostat, then we need to add the term $-Du(t,x_b,a)$ to the right-hand side of equation (\ref{eq1}). In this case we substitute $u(t,x_b,a)=e^{-Dt}\bar u(t,x_b,a)$ and check that the function $\bar u$ satisfies the system (\ref{eq1})--(\ref{eq3}).
From Theorem~\ref{th:long-time-u} we deduce that \begin{equation} \label{d:as5} \lim _{t\to\infty}e^{(D-\lambda)t}U(t)u_0=\Phi f_i\iint\limits_X u_0(x_b,a)\Psi(x_b,a)v(x_b,a)\,dx_b\,da \quad \textrm{in $E$}. \end{equation} From~(\ref{d:as5}) it follows that, in order to maintain the culture at a stable level under constant environmental conditions, cells should be removed from the system at the rate $D=\lambda$.
\subsection{Age-size structured model}
\label{ss:age-size} Now we consider an age-size structured model consistent with our biological description. Let $\bar p(x,a)\Delta t$ be the probability that a cell with size $x$ and age $a$ splits in the time interval of the length $\Delta t$. Since such a cell had the initial size $x_b=\pi_{-a}x$, we see that \begin{equation*}
\bar p(x,a)=p(\pi_{-a}x,a)= q(\pi_{-a}x,a)\big/\textstyle{\int_a^{\infty}} q(\pi_{-a}x,r)\,dr \end{equation*} for $a<\overline a(\pi_{-a}x)$. We set $\bar p(x,a)=0$ for $a\ge \overline a(\pi_{-a}x)$. Let $w(t,x,a)$ be the number of cells having size $x$ and age $a$ at time $t$. Then the function $w$ satisfies the following initial-boundary problem: \begin{align} \label{eq1-w} &\frac{\partial w}{\partial t}(t,x,a)
+\frac{\partial w}{\partial a}(t,x,a)
+\frac{\partial (gw)}{\partial x}(t,x,a)
=-\bar p(x,a)w(t,x,a),\\ &w(t,x,0)=4\int_{0}^{\infty}\bar p(2x,a)w(t,2x,a)\,da, \label{eq2-w}\\ &w(0,x,a)=w_0(x,a). \label{eq3-w} \end{align} We have the following relationship between solutions of the systems (\ref{eq1})--(\ref{eq3}) and (\ref{eq1-w})--(\ref{eq3-w}): \begin{equation} \label{relacja-u-w} \int_0^a\int_0^x u(t,x_b,r)\,dx_b\,dr=\int_0^a\int_0^{\pi_rx} w(t,y,r)\,dy\,dr. \end{equation} Differentiating both sides of (\ref{relacja-u-w}) with respect to $a$ and $x$ we obtain \[ u(t,x,a)=\frac{\partial (\pi_ax)}{\partial x} w(t,\pi_a x,a)
=\frac{g(\pi_ax)}{g(x)} w(t,\pi_a x,a). \] Using the above formula and Theorem~\ref{th:long-time-u} we get \[ e^{-\lambda t}\frac{g(\pi_ax)}{g(x)} w(t,\pi_a x,a)\to (\Phi f_i)(x,a) \iint\limits_X w_0(\pi_ax_b,a)h(x_b,a)\,dx_b\,da \] in $E$ as $t\to\infty$, where $h(x_b,a)=\Psi(x_b,a)v(x_b,a)g(\pi_ax_b)/g(x_b)$. Finally we conclude that \[ e^{-\lambda t} w(t,x,a)\to h_i(x,a) \iint\limits_X w_0(\pi_ax_b,a)h(x_b,a)\,dx_b\,da \] in $L^1$, where $h_i(x,a)=(\Phi f_i)(\pi_{-a}x,a)g(\pi_{-a}x)/g(x)$.
\subsection{Solutions with values in the space of measures}
\label{ss:slution-in-measures} If we study the dynamics of population growth of microorganisms starting from a single cell, then the initial distribution of the population is described by a singular measure, namely a Dirac delta measure. Thus it is natural to consider a model which describes the evolution of measures instead of $L^1$ functions. We can introduce such a model by considering weak solutions. Let $\{U(t)\}_{t\ge 0}$ be the semigroup introduced in Section~\ref{s:asyp-beh} and let $\{U^{\odot}(t)\}_{t\ge 0}$ be the ``dual semigroup'' given by $U^{\odot}(t)u_0=\Psi T^{\odot}(t)(\Phi u_0)$ (see formula (\ref{d:as-defU})). Denote by $\mathcal M(X)$ the space of all finite Borel measures on $X$. For any measure $\nu_0\in \mathcal M(X)$ we define the \textit{weak solution} of the problem (\ref{eq1})--(\ref{eq3}) as a function $u\colon [0,\infty)\to \mathcal M(X)$, $u(t)=\nu_t$, where the measures $\nu_t$ satisfy the condition \begin{equation} \label{measure-sol-def} \iint\limits_X f(x_b,a)\,\nu_t(dx_b,da)=\iint\limits_X U^{\odot}(t)f(x_b,a)\,\nu_0(dx_b,da)
\end{equation}
for all $f\in C(X)$. Since the set $X$ is compact, the existence and uniqueness of the measures $\nu_t$ is a simple consequence of the Riesz representation theorem.
One can ask about the long-time behaviour of the measures $\nu_t$. We are interested in convergence of measures in the total variation norm. We denote by $d(\nu,\bar\nu)_{TV}$ the distance between $\nu$ and $\bar\nu$ in the \textit{total variation norm} in $\mathcal M(X)$. We recall that \[ d(\nu,\bar\nu)_{TV}=(\nu-\bar\nu)^+(X)+(\nu-\bar\nu)^-(X), \] where the symbols $\nu^+$ and $\nu^-$ denote the positive and negative part of a signed measure $\nu$. We can formulate Theorem~\ref{th:long-time-u} in a slightly stronger form: \begin{proposition} \label{prop:long-time-u-tv} Assume that conditions {\rm (A1)--(A7)} hold. Let $\nu_0\in \mathcal M(X)$ and let $\nu_{\infty}$ be the measure given by \[ \nu_{\infty}(A)=\int\limits_A \Phi(x_b,a) f_i(x_b,a) \,dx_b\,da\cdot \iint\limits_X\Psi(x_b,a)v(x_b,a)\,\nu_0(dx_b,\,da) \] for $A\in\mathcal B(X)$. Then \begin{equation} \label{measure-sol-converg} \lim_{t\to\infty}d(e^{-\lambda t}\nu_t,\nu_{\infty})_{TV}=0. \end{equation} \end{proposition} We only give some idea of the proof of Proposition~\ref{prop:long-time-u-tv}. We consider weak solutions connected with the semigroup $\{P^{\odot}(t)\}_{t\ge 0}$, i.e. we replace in (\ref{measure-sol-def}) the semigroup
$\{U^{\odot}(t)\}_{t\ge 0}$ by $\{P^{\odot}(t)\}_{t\ge 0}$, where $P^{\odot}(t)=e^{-\lambda t}T^{\odot}(t)$. It is enough to check that if $\nu_0$ is a probability measure then $\lim_{t\to\infty}d(\nu_t,\mu_*)_{TV}=0$, where $d\mu_*=f_id\mu$. We write $\nu_t$ as a sum $\nu_t^a+\nu_t^s$, where $\nu_t^a$ is the absolutely continuous part of $\nu_t$ with respect to the Lebesgue measure and $\nu_t^s$ is the singular part of $\nu_t$. We deduce from conditions (A6) and (A7) that there exist $t_0>0$ and $\varepsilon>0$ independent of $\nu_0$ such that $\nu^s_{t_0}(X)\le 1-\varepsilon$. The proof of this part is very technical but it uses similar arguments as the proof of Lemma~\ref{lemma-partially-integral}. From the last inequality it follows that
$\nu^s_{t}(X)\le (1-\varepsilon)^n$ for $t\ge nt_0$. Fix $\eta>0$ and let $t_1>0$ be such that $\nu^s_{t_1}(X)\le\eta$. Let $f=d\nu_{t_1}^a/d\mu$. Then $\lim_{t\to\infty}\|P(t)f-f_i\int f\,d\mu\|_{E_1}=0$. Since $\int f\,d\mu\ge 1-\eta$ and $\nu^s_{t_1}(X)\le\eta$, we have $\limsup_{t\to\infty}d(\nu_t,\mu_*)_{TV}\le 2\eta$. As $\eta>0$ can be chosen arbitrarily small, we finally obtain $\lim_{t\to\infty}d(\nu_t,\mu_*)_{TV}=0$.
It should be noted that a similar result can be obtained by using the theory of positive recurrent and aperiodic Harris processes (see Theorem 13.3.3 of~\cite{Meyn-Tweedie}), but to apply this theorem we need to formulate the problem properly in the language of stochastic processes. First, we construct a family of Markov processes corresponding to the semigroup $\{P(t)\}_{t\ge 0}$. In particular, we need to define additionally the processes started from points $(x_b,\overline a(x_b))$. Since Theorem 13.3.3 applies to discrete-time processes, we consider this Markov family for times $t=0,1,2,\dots$ and check assumptions of this theorem. As a result we obtain that $\lim_{n\to\infty}d(\nu_n,\mu_*)_{TV}=0$. Finally, we pass from discrete time convergence to continuous time convergence.
\subsection{Case $g(2x)=2g(x)$}
\label{ss:g(2x)=2g(x)}
A function $g$ satisfying condition $g(2x)=2g(x)$ for all $x\in [\underline x_b,\overline x_b]$ can be constructed in the following way. Let $g\colon [\underline x_b,2\underline x_b]\to (0,\infty)$ be a given $C^1$-function such that $g(2\underline x_b)=2g(\underline x_b)$ and $g'(2\underline x_b)=g'(\underline x_b)$. Then we define $g(x)=2^ng(2^{-n}x)$ for $ x\in [2^n \underline x_b,2^{n+1}\underline x_b]$.
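A small numerical sketch of this construction (the base function \verb|g0| and all constants below are invented for illustration; \verb|g0| is chosen so that the two matching conditions hold) verifies the functional equation $g(2x)=2g(x)$ on a couple of dyadic blocks:
\begin{verbatim}
import numpy as np

xb_lo = 1.0
c = 0.3

def g0(x):
    """Base piece on [xb_lo, 2*xb_lo] = [1, 2]; g0(2) = 2*g0(1) and g0'(2) = g0'(1)."""
    return x + c * np.sin(np.pi * (x - 1.0)) ** 2

def g(x):
    """Dyadic extension g(x) = 2^n g0(2^{-n} x) for x in [2^n, 2^{n+1}]."""
    x = np.asarray(x, dtype=float)
    n = np.floor(np.log2(x / xb_lo))
    return 2.0 ** n * g0(x / 2.0 ** n)

xs = np.linspace(1.0, 4.0, 1000)
print(np.max(np.abs(g(2 * xs) - 2 * g(xs))))   # ~ 0 up to rounding: g(2x) = 2 g(x)
\end{verbatim}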
Observe that if $g(2x)=2g(x)$ for all $x\in [\underline x_b,\overline x_b]$, then the semigroup $\{U(t)\}_{t\ge 0}$ has no asynchronous exponential growth. Indeed, consider a cell with initial size $x_b$. Fix time $t>0$ and assume that the cell splits at age $a\le t$. Then the daughter cells at time $t$ have size \[ x(a)=\pi_{t-a}(\tfrac 12\pi_ax_b). \] Since \[ x'(a)= -g(\pi_{t-a}(\tfrac 12\pi_ax_b)) + \frac{g(\pi_{t-a}(\tfrac 12\pi_ax_b))} {g(\tfrac 12\pi_ax_b)} \cdot \frac12g(\pi_ax_b)=0, \] the function $x$ is constant and $x(a)=\tfrac 12\pi_tx_b$. Thus the size of every daughter cell is exactly half of the size of the mother cell. If $x_n(t)$ is the size of a cell from the $n$th generation then its mother, grandmother, etc. cells have sizes $2x_n(t)$, $4x_n(t)$, $\dots$ But since cells have minimum and maximum size $\underline x_b$ and $2\overline x_b$, the maximum number of existing generations at a given time $t$ is not greater than $2+\log_2(\overline x_b/\underline x_b)$. Moreover, if $x_1\in (\underline x_b,2\underline x_b)$ and $f_1(x_b,a)=\mathbf 1_{(\underline x_b,x_1)}(x_b)$, $f_2(x_b,a)=\mathbf 1_{(x_1,2\underline x_b)}(x_b)$, then $U(t)f_1\cdot U(t)f_2\equiv 0$ for all $t\ge 0$. Consequently, the semigroup $\{U(t)\}_{t\ge 0}$ has no asynchronous exponential growth.
\begin{remark} \label{r:irred-overlaping} One can check that the semigroup $\{U(t)\}_{t\ge0}$ is \textit{irreducible}, i.e. $\int_0^{\infty}U(t)f\,dt>0$ a.e. for every nonzero $f\ge 0$, even if (A7) does not hold. Our example shows that a semigroup can be irreducible without overlapping supports. A stochastic semigroup $\{P(t)\}_{t\ge0}$ is said to \textit{overlap supports} if $P(t)f_1\cdot P(t)f_2\ne 0$ for any two densities $f_1$ and $f_2$ and some $t=t(f_1,f_2)$. Another simple example of an irreducible stochastic semigroup which does not overlap supports is the rotation semigroup. If $X=S^1$ is the unit circle on the complex plane with centre $z_0=0$, $\,\Sigma=\mathcal B(X)$ is the $\sigma$-algebra of Borel subsets of $X$ and $m$ is the arc-Lebesgue measure on $X$, the rotation semigroup $\{P(t)\}_{t\ge 0}$ is given by $P(t)f(z)=f(ze^{it})$. \end{remark}
Now we consider a special case when $g(x)=\kappa x$, $\kappa>0$. We start at time $t=0$ with a single cell with size $x$. Cells from the $n$th generation have size $2^{-n}e^{\kappa t}x$ at time $t$. Then $\bar p(2^{-n}e^{\kappa t}x,a)\Delta t$ is the probability that a cell from the $n$th generation with age $a$ splits in the time interval of the length $\Delta t$. This observation allows us to describe the evolution of the population using discrete parameters. Denote by $w_n(t,a)$ the number of cells from the $n$th generation with age $a$ at time $t$. Then the functions $w_n$ satisfy the following infinite system of partial differential equations with boundary conditions: \begin{align*}
&\frac{\partial w_n}{\partial t}(t,a)
+\frac{\partial w_n}{\partial a}(t,a)
=-\bar p\big(2^{-n}e^{\kappa t}x,a\big)w_n(t,a),\\ &w_n(t,0)=2\int_{0}^{\infty}\bar p\big(2^{1-n}e^{\kappa t}x,a\big)w_{n-1}(t,a)\,da.
\end{align*}
It should be noted that it is not easy to find a direct formula for the eigenvector $f_i(x_b,a)$ of the operator $\mathcal A$ even in the case $g(x)=\kappa x$. Indeed, we have $\pi_{-a}(2x_b)=2e^{-\kappa a}x_b$ and \begin{equation*}
\begin{aligned} P_af(x_b)&=\frac{2g(\pi_{-a}(2x_b))}{g(2x_b)}f(\pi_{-a}(2x_b))=\frac{2\pi_{-a}(2x_b)}{2x_b}f(\pi_{-a}(2x_b))\\ &=2e^{-\kappa a}f(2e^{-\kappa a}x_b). \end{aligned}
\end{equation*} Then $f_i(x_b,a)=e^{-\lambda a}f_i(x_b,0)$, where $\lambda$ and $f_i(x_b,0)$ should be found by solving the following equation \begin{equation*}
f_i(x_b,0)=\int_0^{\infty}4e^{-(\lambda+\kappa)a} q(2e^{-\kappa a}x_b,a)f_i(2e^{-\kappa a}x_b,0)\,da, \end{equation*} which is not a simple task.
\section{Comparison with experimental data and other models}
\label{s:other2} Modern experimental techniques enable studies of individual cells growth in well-controlled environments. Especially interesting are experimental results concerning rod-shaped bacteria,
for example \textit{E. coli}, \textit{C. crescentus} and \textit{B. subtilis} \cite{CSK,I-B,T-A,Wang}, because they change only their length. Although such bacteria have a similar shape, there is a variety of distinct models of the cell cycle and cell division. For example, we consider models with symmetric or asymmetric division, with different proliferation velocities, with deterministic or stochastic growth of individuals, or models based on special assumptions such as a fixed cell length extension or a target division size.
We give a short review of such models and show how to incorporate them into our model.
\subsection{Models with exponential growth} \label{ss:exponential growth} Since experimental data suggest that cells grow exponentially, one can find a number of models with the assumption $g(x)=\kappa x$
but with various descriptions of the cell cycle length.
In \cite{CSK,GH,T-A,VK} an \textit{additive model} (or a \textit{constant $\Delta$ model}) is considered, where it is assumed that the difference $\Delta(x_b)=x_d-x_b$ between the size at division $x_d$ and the initial size $x_b$ of a cell is a random variable independent of $x_b$. From this assumption it follows that \[ \tau(x_b)=\kappa^{-1}\ln((x_b+\Delta)/x_b). \] If $h(x)$ is the density distribution of $\Delta$, then \begin{equation} \label{q-delta} q(x_b,a)=\kappa x_b e^{\kappa a} h\big(x_b e^{\kappa a}-x_b\big) \end{equation}
is the density of $\tau(x_b)$. We obtain a special case of our model if the density distribution of $\Delta$ is positive on the interval $(\underline x_b,\overline x_b)$. According to experimental data from \cite{T-A}, the coefficient of variation $c_v$ of $\Delta$ for \textit{E. coli} is in the range of $0.17$ to $0.28$, depending on the growth conditions. We recall that $c_v=\sigma/\mu$, where $\sigma$ is the standard deviation and $\mu$ is the mean.
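As a quick sanity check of formula \eqref{q-delta} (purely illustrative: the added-size density \verb|h| below is a hypothetical log-normal with coefficient of variation about $0.25$, within the range quoted above, and all parameter values are invented), one can verify numerically that $a\mapsto q(x_b,a)$ integrates to $1$ for each $x_b$:
\begin{verbatim}
import numpy as np

kappa = 1.0
s, mu_log = 0.25, 0.0        # hypothetical log-normal parameters for Delta

def h(u):                    # density of the added size Delta = x_d - x_b
    u = np.asarray(u, dtype=float)
    safe = np.where(u > 0, u, 1.0)
    dens = np.exp(-(np.log(safe) - mu_log) ** 2 / (2 * s * s)) \
           / (safe * s * np.sqrt(2 * np.pi))
    return np.where(u > 0, dens, 0.0)

def q(xb, a):                # formula (q-delta): density of the cycle length tau(x_b)
    return kappa * xb * np.exp(kappa * a) * h(xb * (np.exp(kappa * a) - 1.0))

a = np.linspace(0.0, 6.0, 6000)
for xb in (1.0, 1.5, 2.0):
    print(xb, np.trapz(q(xb, a), a))    # each value is close to 1
\end{verbatim}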
In \cite{Amir,Jafarpour} it is assumed that a cell with initial size $x_b$ attempts to divide at a target size $x_d = f(x_b)$. Then the expected length of the cell cycle is $\tau_0(x_b)=\kappa^{-1}\ln(f(x_b)/x_b)$, but $\tau_0(x_b)$ is additively perturbed by a symmetric random variable $\xi$, and finally $\tau(x_b)=\tau_0(x_b)+\xi$. If $h(a)$ is the density distribution of $\xi$, then $q(x_b,a)=h(a-\tau_0(x_b))$ is the density of $\tau(x_b)$. The authors of these papers assume that $h$ has a normal distribution, but in this case $\tau(x_b)$ can be negative; therefore
a truncated normal distribution supported in some interval $[-\varepsilon,\varepsilon]$ seems to be more suitable.
They also assume that $f(x_b)=2x_b^{1-\alpha}x_0^{\alpha}$, $\alpha\in [0,1]$ and $x_0>0$.
If $\alpha>0$, $\underline x_b=x_0e^{-\kappa\varepsilon /\alpha}$ and $\overline x_b=x_0e^{\kappa\varepsilon /\alpha}$, then we obtain a particular case of our model.
If $\alpha=0$, then $\tau_0\equiv \kappa^{-1}\ln 2$ and the length of cell cycle does not depend on $x_b$. In this case a daughter cell size is distributed in some neighbourhood of the initial mother cell size, so there is no minimum $\underline x_b$ and maximum size $\overline x_b$.
\subsection{Paradoxes of exponential growth} \label{ss:paradoxes-exp-growth} Models with exponential growth law can lead to some odd mathematical results. If the population starts with a single cell of size $x$, cells from $n$th generation have size $x_n(t)=2^{-n}e^{\kappa t}x$ at time $t$. Since $ \underline x_b\le x_n(t)\le \overline x_b$, population consists of a few generations at each time and all cells in each generation have the same size. Usually the quotient $\overline x_b/\underline x_b$ is not too large. The initial size for \textit{E. coli} under steady-growth conditions is $x_b=2.32\, \pm\, 0.38\,\,\mu m$ (mean\,$\pm$\,SD) \cite{CSK}. Thus we can assume that in this case $\overline x_b/\underline x_b<2$. Then it is easy to check that if \[ t\in \left(\frac {n+\log_2(\overline x_b/x)}{\kappa\log_2e},\frac {1+n+\log_2(\overline x_b/x)}{\kappa\log_2e}\right), \] the population consists only of cells from the $n$th generation, thus all cells have the same size and they cannot split in this time interval. Consequently the size of the population never reaches an exponential (balanced) growth. On this point we also observe that the large quotient $\overline x_b/\underline x_b$ helps the population to stabilize its growth, which explains why in the model with target size division \cite{Jafarpour} it takes the population a longer time to reach its balanced growth for greater $\alpha$, because $\overline x_b/\underline x_b=e^{2\kappa\varepsilon /\alpha}$.
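For a purely illustrative choice of numbers: taking $\kappa=\ln 2$ per hour (a one-hour doubling time) and $\overline x_b/x=1.5$, the denominator equals $\kappa\log_2e=1$ per hour, and for $n=3$ the interval above is $(3+\log_21.5,\,4+\log_21.5)\approx(3.58,\,4.58)$ hours; during this whole hour no division occurs.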
The exponential growth law of cells should be slightly modified in order to achieve asynchronous exponential growth. For example, it is enough to assume that $\kappa$ depends on the initial size $x_b$. However, according to the experimental results, the average growth rate does not depend on the initial size of cells. On the other hand, even if a population grows under perfect conditions, individual cells have different growth rates: the standard deviation of the growth rate is 15\% of their respective means \cite{T-A}. Thus there is another factor, called maturity, which determines the growth rate of an individual cell. Mathematical models based on the concept of maturity were formulated in the late sixties \cite{LR,Rubinow}. In such models the growth rate is identified with the maturation velocity $v$, which is constant during the life of a cell and is inherited in a random way from mother to daughter cells.
Rotenberg \cite{Rotenberg} considered a version of maturity models with random jumps of $v$ during the cell cycle. If we replace random jumps of $v$ by stochastic fluctuations of $\kappa$, we obtain a cell growth model described by a stochastic equation considered in the next subsection.
We consider here a simple generalization of our model assuming that $\kappa$ is a random variable with the distribution dependent on $x_b$.
Let the function $r\mapsto k(r|x_b)$ be the density of $\kappa$. The question is how to describe the joint distribution of age and initial size in this case. Equations (\ref{eq1}) and (\ref{eq3}) remain the same and it is enough to derive a version of the boundary condition (\ref{eq2}). Denote by $f(x;x_b,a)$ the density distribution of the random variable $\xi_a^{x_b}=x_be^{\kappa a}$. Then (\ref{eq2}) takes the form \begin{equation} \label{eq2-s} u(t,x,0)=4\iint\limits_X f(2x;x_b,a)p(x_b,a)u(t,x_b,a)\,dx_b\,da. \end{equation} It remains to find the function $f(x;x_b,a)$. We have \[ \operatorname{Prob}(x_be^{\kappa a}\le x)=\operatorname{Prob}\big(\kappa\le a^{-1}\ln(x/x_b)\big)
=\int_0^{a^{-1}\ln(x/x_b)}k(r|x_b)\,dr. \] Hence \[
f(x;x_b,a)=\frac1{ax}k(a^{-1}\ln(x/x_b)|x_b). \] At first glance formulae (\ref{eq2}) and (\ref{eq2-s}) differ significantly, but if we replace in (\ref{eq2-s}) the term $f(2x;x_b,a)$ by the Dirac delta $\delta_{S_a(x_b)}(x)$ we recover (\ref{eq2}).
\subsection{Stochastic growth of $x$} \label{ss:stochastic-growth} The size of a cell having initial size $x_b$ grows according to the It\^o stochastic differential equation \begin{equation} \label{stoch-grow} d\xi_t^{x_b}= \kappa \xi_t^{x_b}\,dt+ \sigma(\xi_t^{x_b})\,dB_t, \end{equation} where $B_t$, $t\ge 0$, is a one-dimensional Wiener process (Brownian motion), and $\kappa>0$. In \cite{I-B,PJI-B} the authors assume that $\sigma(x)=\sqrt{D}x^{\gamma}$, where $D>0$ and $\gamma\in (0,1)$. The great strength of this choice is that equation (\ref{stoch-grow}) has been intensively studied for such $\sigma$, so one can solve (\ref{stoch-grow}) and derive various properties of its solutions. There is, however, one weak point: the size can go to zero and solutions can even be absorbed at zero. To avoid this problem we propose to assume that $\sigma\colon [\underline x_b,\infty)\to \mathbb R$ is a $C^1$-function and $\sigma(\underline x_b)=0$. Then $\xi_t^{x_b}>\underline x_b$ for $t>0$. It should be noted that solutions can decrease at some moments, i.e. a cell can shrink, but if the diffusion coefficient $\sigma$ is small, we observe exponential growth with small stochastic noise. If $f(x;x_b,a)$ is the density distribution of the random variable $\xi_a^{x_b}$, then the joint distribution of age and initial size $u(t,x_b,a)$ satisfies equations (\ref{eq1}), (\ref{eq3}), (\ref{eq2-s}).
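The following sketch (with hypothetical parameter values; \verb|sigma| is an example of the modified diffusion coefficient proposed above, vanishing at $\underline x_b$) simulates (\ref{stoch-grow}) by the Euler--Maruyama scheme and checks that the empirical mean size stays close to the deterministic value $x_be^{\kappa t}$, since the noise term has zero mean:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
kappa, xb_lo = 1.0, 1.0
sigma0 = 0.15                       # hypothetical noise level

def sigma(x):
    return sigma0 * (x - xb_lo)     # C^1 and vanishing at the minimal size

def simulate(x_b, T=1.0, n=1000):
    """One Euler-Maruyama path of d xi = kappa*xi dt + sigma(xi) dB, xi_0 = x_b."""
    dt = T / n
    xi = np.empty(n + 1)
    xi[0] = x_b
    dB = rng.normal(0.0, np.sqrt(dt), size=n)
    for k in range(n):
        xi[k + 1] = xi[k] + kappa * xi[k] * dt + sigma(xi[k]) * dB[k]
    return xi

final = np.array([simulate(1.2)[-1] for _ in range(1000)])
print(final.mean(), 1.2 * np.exp(kappa * 1.0))   # empirical mean vs x_b e^{kappa t}
\end{verbatim}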
\subsection{Models with asymmetric division and with slow-fast proliferation} \label{ss:asymetric} Many cellular populations are heterogeneous. For example, \textit{C. crescentus} has an asymmetric cell division; \textit{B. subtilis} occasionally produces minicells; melanoma populations contain slowly and quickly proliferating cells \cite{Perego}; and precursors of blood cells replicate and mature, going through levels of morphological development \cite{Marciniak}. It is difficult to find one universal model of the evolution of heterogeneous populations. Now we present a model of the distribution of a heterogeneous population based on assumptions similar to those of the model presented in Section~\ref{s:model}. We divide the population into a number of subpopulations. We assume that cells in the $i$th subpopulation grow according to the equation $x'=g_i(x)$ and that the length of their cell cycle has the probability density $q_i(x_b,a)$. We also assume that $r_{ij}$ is the probability that a daughter of a cell from the $i$th subpopulation belongs to the $j$th subpopulation and the daughter has initial size $\beta_{ij}x$, where $x$ is the size of the mother cell at division. As in Section~\ref{s:model} we introduce the function \[ p_i(x_b,a)=\frac{q_i(x_b,a)}{\int_a^{\infty} q_i(x_b,r)\,dr} \] and operators $P^{ij}_a$ which describe the relation between the densities of the initial sizes of mother and daughter cells and satisfy the equation:
\begin{equation*}
\int_{\underline x_b}^{\beta_{ij}\pi^i_ay}P^{ij}_af(x_b)\,dx_b=r_{ij}\int_{\underline x_b}^{y}f(x_b)\,dx_b.
\end{equation*} Then \begin{equation*}
P^{ij}_af(x_b)=\frac{r_{ij}}{\beta_{ij}}\frac{g_i(\pi^i_{-a}(x_b/\beta_{ij}))}{g_i(x_b/\beta_{ij})}f(\pi^i_{-a}(x_b/\beta_{ij})).
\end{equation*} We denote by $u_i(t,x_b,a)$ the number of individuals in the $i$th population having initial size $x_b$ and age $a$ at time $t$. Then the system (\ref{eq1})--(\ref{eq3}) will be replaced by the following one \begin{align} \label{eq1-h} &\frac{\partial u_i}{\partial t}(t,x_b,a)
+\frac{\partial u_i}{\partial a}(t,x_b,a)
=-p_i(x_b,a)u_i(t,x_b,a),\\ &u_j(t,x_b,0)=2\sum\limits_i\int_0^{\infty}P^{ij}_a(p_i(x_b,a)u_i(t,x_b,a))\,da, \label{eq2-h}\\ &u_i(0,x_b,a)=u_{i0}(x_b,a). \label{eq3-h} \end{align}
In some cases of asymmetric division the size of daughter cells is not strictly determined and it is better to consider a model
where the density $k(x_b|x_d)$ describes the distribution of the initial size of a daughter cell $x_b$ if the mother cell has the size $x_d$, see e.g. \cite{AK,GW,Heijmans,KDAT,RP}.
As an example of an application of the model (\ref{eq1-h})--(\ref{eq3-h}) we consider \textit{C. crescentus}, which has an asymmetric cell division into a ``stalked'' cell, which can replicate, and a mobile ``swarmer'' cell, which differentiates into a stalked cell after a short period of motility. Thus we have two subpopulations: the first -- stalked cells and the second -- swarmer cells. Then $r_{ij}=1/2$ for $i=1,2$ and $j=1,2$. The stalked daughter's length is a fraction $0.56\,\pm\,0.04$ (mean\,$\pm$\,SD) of the mother cell's length \cite{CSK}. Hence we can assume that $\beta_{11}=\beta_{21}=0.56$ and $\beta_{12}=\beta_{22}=0.44$. If we assume that both stalked and swarmer cells have the same growth rate $\kappa$, i.e. $g_i(x)=\kappa x$, then $q_2(x_b,a)=q_1(x_b,a-\rho)$, where $\rho$ satisfies the formula $e^{\kappa \rho}=0.56/0.44$ and $q_1=q$ is given by (\ref{q-delta}).
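For concreteness (this is just the relation $e^{\kappa\rho}=0.56/0.44$ solved for $\rho$): $\rho=\kappa^{-1}\ln(0.56/0.44)\approx 0.241/\kappa$, so with the growth rate $\kappa$ expressed in $\mathrm{h}^{-1}$ the age shift between the two daughter types is about $0.24/\kappa$ hours.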
A model for the growth of \textit{B. subtilis} should be more advanced. \textit{B. subtilis} can divide symmetrically to make two daughter cells (binary fission), but some mutants split asymmetrically, producing a single endospore, which can differentiate into a ``typical'' cell. Assume that the first population consists of typical cells and the second of minicells. If $\mathfrak p$ is the probability of asymmetric fission, then $r_{11}=r_{21}=1-\mathfrak p+\mathfrak p/2=1-\mathfrak p/2$ and $r_{12}=r_{22}=\mathfrak p/2$. Some information on the size of minicells can be found in \cite{KH}.
In a model which describes slowly and quickly proliferating cells we should assume that the length of the cell cycle of slowly proliferating cells is longer than that of quickly proliferating cells and that slowly proliferating cells also grow more slowly. Thus the sensible assumptions are: $g_1(x)<g_2(x)$ and \[ \int_0^a q_1(x_b,r)\,dr <\int_0^aq_2(x_b,r)\,dr\quad\text{for $a<\overline a_1(x_b)$}. \] We should also assume that there is some transition between both subpopulations. Another model of the growth of a population with slowly and quickly proliferating cells was recently studied in \cite{Vittadello}.
\section*{Acknowledgments} This research was partially supported by the National Science Centre (Poland) Grant No. 2017/27/B/ST1/00100.
\end{document}
Julie works for 48 hours per week for 12 weeks during the summer, making $\$5000$. If she works for 48 weeks during the school year at the same rate of pay and needs to make another $\$5000$, how many hours per week must she work?
Since she only needs to make the same amount of money, if she works for 4 times as many weeks, she can work 4 times fewer hours per week, meaning she can work $\frac{1}{4} \cdot 48 = \boxed{12}$ hours per week.
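Check: she worked $48\cdot 12=576$ hours for the first $\$5000$, so she needs another $576$ hours, i.e. $576/48=12$ hours per week.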
October 2018, 38(10): 4915-4927. doi: 10.3934/dcds.2018214
New characterizations of Ricci curvature on RCD metric measure spaces
Bang-Xian Han
Institute for Applied Mathematics, University of Bonn, Endenicher Allee 60, D-53115 Bonn, Germany
* Corresponding author: Bang-Xian Han
Received: September 2017. Revised: April 2018. Published: July 2018.
We prove that on a large family of metric measure spaces, if the $L^p$-gradient estimate for heat flows holds for some $p>2$, then the $L^1$-gradient estimate also holds. This result extends Savaré's result on metric measure spaces, and provides a new proof to von Renesse-Sturm theorem on smooth metric measure spaces. As a consequence, we propose a new analysis object based on Gigli's measure-valued Ricci tensor, to characterize the Ricci curvature of RCD space in a local way. In the proof we adopt an iteration technique based on non-smooth Bakry-Émery theory, which is a new method to study the curvature dimension condition of metric measure spaces.
Keywords: Bakry-Émery theory, curvature dimension condition, gradient estimate, heat flow, metric measure space, Ricci curvature.
Mathematics Subject Classification: Primary: 47D07, 30L99; Secondary: 51F99.
Citation: Bang-Xian Han. New characterizations of Ricci curvature on RCD metric measure spaces. Discrete & Continuous Dynamical Systems - A, 2018, 38 (10) : 4915-4927. doi: 10.3934/dcds.2018214
L. Ambrosio, N. Gigli, A. Mondino and T. Rajala, Riemannian Ricci curvature lower bounds in metric measure spaces with $\sigma$-finite measure, Trans. Amer. Math. Soc., 367 (2015), 4661-4701. doi: 10.1090/S0002-9947-2015-06111-X.
L. Ambrosio, N. Gigli and G. Savaré, Calculus and heat flow in metric measure spaces and applications to spaces with Ricci bounds from below, Invent. Math., 195 (2014), 289-391. doi: 10.1007/s00222-013-0456-1.
L. Ambrosio, N. Gigli and G. Savaré, Density of Lipschitz functions and equivalence of weak gradients in metric measure spaces, Rev. Mat. Iberoam., 29 (2013), 969-996. doi: 10.4171/RMI/746.
L. Ambrosio, N. Gigli and G. Savaré, Metric measure spaces with Riemannian Ricci curvature bounded from below, Duke Math. J., 163 (2014), 1405-1490. doi: 10.1215/00127094-2681605.
L. Ambrosio, N. Gigli and G. Savaré, Bakry-Émery curvature-dimension condition and Riemannian Ricci curvature bounds, Ann. Probab., 43 (2015), 339-404. doi: 10.1214/14-AOP907.
L. Ambrosio, A. Mondino and G. Savaré, On the Bakry-Émery condition, the gradient estimates and the local-to-global property of $\mathrm{RCD}^*(K,N)$ metric measure spaces, J. Geom. Anal., 26 (2016), 24-56. doi: 10.1007/s12220-014-9537-7.
D. Bakry, L'hypercontractivité et son utilisation en théorie des semigroupes, in Lectures on Probability Theory (Saint-Flour, 1992), vol. 1581 of Lecture Notes in Math., Springer, Berlin, 1994, pp. 1-114. doi: 10.1007/BFb0073872.
N. Bouleau and F. Hirsch, Dirichlet Forms and Analysis on Wiener Space, De Gruyter Studies in Mathematics, 14, Walter de Gruyter & Co., Berlin, 1991. doi: 10.1515/9783110858389.
Z.-Q. Chen and M. Fukushima, Symmetric Markov Processes, Time Change, and Boundary Theory, vol. 35 of London Mathematical Society Monographs Series, Princeton University Press, Princeton, NJ, 2012.
N. Gigli, Nonsmooth differential geometry-approach tailored for spaces with Ricci curvature bounded from below, Mem. Amer. Math. Soc., 251 (2018), vi+161pp.
N. Gigli, On the differential structure of metric measure spaces and applications, Mem. Amer. Math. Soc., 236 (2015), vi+91pp. doi: 10.1090/memo/1113.
B.-X. Han, Ricci tensor on RCD*(K, N) spaces, J. Geom. Anal., 28 (2018), 1295-1314. doi: 10.1007/s12220-017-9863-7.
J. Lott and C. Villani, Ricci curvature for metric-measure spaces via optimal transport, Ann. of Math. (2), 169 (2009), 903-991. doi: 10.4007/annals.2009.169.903.
M.-K. von Renesse and K.-T. Sturm, Transport inequalities, gradient estimates, entropy, and Ricci curvature, Comm. Pure Appl. Math., 58 (2005), 923-940. doi: 10.1002/cpa.20060.
G. Savaré, Self-improvement of the Bakry-Émery condition and Wasserstein contraction of the heat flow in $RCD(K,\infty)$ metric measure spaces, Disc. Cont. Dyn. Syst. A, 34 (2014), 1641-1661. doi: 10.3934/dcds.2014.34.1641.
K.-T. Sturm, On the geometry of metric measure spaces I, Acta Math., 196 (2006), 65-131. doi: 10.1007/s11511-006-0002-8.
K.-T. Sturm, Ricci tensor for diffusion operators and curvature-dimension inequalities under conformal transformations and time changes, J. Funct. Anal., 275 (2018), 793-829. doi: 10.1016/j.jfa.2018.03.022.
C. Villani, Optimal Transport. Old and New, vol. 338 of Grundlehren der Mathematischen Wissenschaften, Springer-Verlag, Berlin, 2009. doi: 10.1007/978-3-540-71050-9.
Giuseppe Savaré. Self-improvement of the Bakry-Émery condition and Wasserstein contraction of the heat flow in $RCD (K, \infty)$ metric measure spaces. Discrete & Continuous Dynamical Systems - A, 2014, 34 (4) : 1641-1661. doi: 10.3934/dcds.2014.34.1641
Tapio Rajala. Improved geodesics for the reduced curvature-dimension condition in branching metric spaces. Discrete & Continuous Dynamical Systems - A, 2013, 33 (7) : 3043-3056. doi: 10.3934/dcds.2013.33.3043
Liangjun Weng. The interior gradient estimate for some nonlinear curvature equations. Communications on Pure & Applied Analysis, 2019, 18 (4) : 1601-1612. doi: 10.3934/cpaa.2019076
Alberto Farina, Enrico Valdinoci. A pointwise gradient bound for elliptic equations on compact manifolds with nonnegative Ricci curvature. Discrete & Continuous Dynamical Systems - A, 2011, 30 (4) : 1139-1144. doi: 10.3934/dcds.2011.30.1139
Diego Castellaneta, Alberto Farina, Enrico Valdinoci. A pointwise gradient estimate for solutions of singular and degenerate pde's in possibly unbounded domains with nonnegative mean curvature. Communications on Pure & Applied Analysis, 2012, 11 (5) : 1983-2003. doi: 10.3934/cpaa.2012.11.1983
Yoshikazu Giga, Yukihiro Seki, Noriaki Umeda. On decay rate of quenching profile at space infinity for axisymmetric mean curvature flow. Discrete & Continuous Dynamical Systems - A, 2011, 29 (4) : 1463-1470. doi: 10.3934/dcds.2011.29.1463
Hongjie Ju, Jian Lu, Huaiyu Jian. Translating solutions to mean curvature flow with a forcing term in Minkowski space. Communications on Pure & Applied Analysis, 2010, 9 (4) : 963-973. doi: 10.3934/cpaa.2010.9.963
Leif Arkeryd, Raffaele Esposito, Rossana Marra, Anne Nouri. Ghost effect by curvature in planar Couette flow. Kinetic & Related Models, 2011, 4 (1) : 109-138. doi: 10.3934/krm.2011.4.109
Changfeng Gui, Huaiyu Jian, Hongjie Ju. Properties of translating solutions to mean curvature flow. Discrete & Continuous Dynamical Systems - A, 2010, 28 (2) : 441-453. doi: 10.3934/dcds.2010.28.441
Giulio Colombo, Luciano Mari, Marco Rigoli. Remarks on mean curvature flow solitons in warped products. Discrete & Continuous Dynamical Systems - S, 2018, 0 (0) : 0-0. doi: 10.3934/dcdss.2020153
Paul W. Y. Lee, Chengbo Li, Igor Zelenko. Ricci curvature type lower bounds for sub-Riemannian structures on Sasakian manifolds. Discrete & Continuous Dynamical Systems - A, 2016, 36 (1) : 303-321. doi: 10.3934/dcds.2016.36.303
Jinju Xu. A new proof of gradient estimates for mean curvature equations with oblique boundary conditions. Communications on Pure & Applied Analysis, 2016, 15 (5) : 1719-1742. doi: 10.3934/cpaa.2016010
Pak Tung Ho. Prescribing the $ Q' $-curvature in three dimension. Discrete & Continuous Dynamical Systems - A, 2019, 39 (4) : 2285-2294. doi: 10.3934/dcds.2019096
Wen Wang, Dapeng Xie, Hui Zhou. Local Aronson-Bénilan gradient estimates and Harnack inequality for the porous medium equation along Ricci flow. Communications on Pure & Applied Analysis, 2018, 17 (5) : 1957-1974. doi: 10.3934/cpaa.2018093
Tracy L. Payne. The Ricci flow for nilmanifolds. Journal of Modern Dynamics, 2010, 4 (1) : 65-90. doi: 10.3934/jmd.2010.4.65
Joel Spruck, Ling Xiao. Convex spacelike hypersurfaces of constant curvature in de Sitter space. Discrete & Continuous Dynamical Systems - B, 2012, 17 (6) : 2225-2242. doi: 10.3934/dcdsb.2012.17.2225
Matthias Bergner, Lars Schäfer. Time-like surfaces of prescribed anisotropic mean curvature in Minkowski space. Conference Publications, 2011, 2011 (Special) : 155-162. doi: 10.3934/proc.2011.2011.155
Chiara Corsato, Franco Obersnel, Pierpaolo Omari, Sabrina Rivetti. On the lower and upper solution method for the prescribed mean curvature equation in Minkowski space. Conference Publications, 2013, 2013 (special) : 159-169. doi: 10.3934/proc.2013.2013.159
Elias M. Guio, Ricardo Sa Earp. Existence and non-existence for a mean curvature equation in hyperbolic space. Communications on Pure & Applied Analysis, 2005, 4 (3) : 549-568. doi: 10.3934/cpaa.2005.4.549
Qinian Jin, YanYan Li. Starshaped compact hypersurfaces with prescribed $k$-th mean curvature in hyperbolic space. Discrete & Continuous Dynamical Systems - A, 2006, 15 (2) : 367-377. doi: 10.3934/dcds.2006.15.367
Bang-Xian Han | CommonCrawl |
\begin{definition}[Definition:Subset]
Let $S$ and $T$ be sets.
$S$ is a '''subset''' of a set $T$ {{iff}} all of the elements of $S$ are also elements of $T$.
This is denoted:
:$S \subseteq T$
That is:
:$S \subseteq T \iff \forall x: \paren {x \in S \implies x \in T}$
If the elements of $S$ are not all also elements of $T$, then $S$ is not a '''subset''' of $T$:
:$S \nsubseteq T$ means $\neg \paren {S \subseteq T}$
\end{definition} | ProofWiki |
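A minimal illustration of the subset definition above in Python (an editorial example, not part of the ProofWiki entry):

```python
S = {1, 2}
T = {1, 2, 3}

# Built-in subset test.
print(S <= T)                  # True

# The same condition spelled out as the quantified statement
# "for all x: x in S implies x in T".
print(all(x in T for x in S))  # True
```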
Insulators and Metals With Topological Order and Discrete Symmetry Breaking
@article{Chatterjee2017InsulatorsAM,
  title={Insulators and Metals With Topological Order and Discrete Symmetry Breaking},
  author={Shubhayu Chatterjee and Subir Sachdev},
  year={2017},
  pages={205133}
}
S. Chatterjee, S. Sachdev
Numerous experiments have reported discrete symmetry breaking in the high temperature pseudogap phase of the hole-doped cuprates, including breaking of one or more of lattice rotation, inversion, or time-reversal symmetries. In the absence of translational symmetry breaking or topological order, these conventional order parameters cannot explain the gap in the charged fermion excitation spectrum in the anti-nodal region. Zhao et al. (1601.01688) and Jeong et al. (arXiv:1701.06485) have also…
Orbital currents in insulating and doped antiferromagnets
M. S. Scheurer, S. Sachdev
We describe square lattice spin liquids which break time-reversal symmetry, while preserving translational symmetry. The states are distinguished by the manner in which they transform under mirror…
Topological order in the pseudogap metal
M. S. Scheurer, S. Chatterjee, Wei Wu, M. Ferrero, A. Georges, S. Sachdev
It is shown that a theory of a metal with topological order and emergent gauge fields can model much of the numerical data, and a modified, nonperturbative version of the Luttinger theorem that holds in the Higgs phase is derived.
Topological order, emergent gauge fields, and Fermi surface reconstruction.
S. Sachdev
Reports on progress in physics. Physical Society
This review describes how topological order associated with the presence of emergent gauge fields can reconstruct Fermi surfaces of metals, even in the absence of translational symmetry breaking. We…
Topological order and Fermi surface reconstruction
This review describes how topological order can reconstruct Fermi surfaces of metals, even in the absence of translational symmetry breaking. We begin with an introduction to topological order using…
Intertwining Topological Order and Broken Symmetry in a Theory of Fluctuating Spin-Density Waves.
S. Chatterjee, S. Sachdev, M. S. Scheurer
An SU(2) gauge theory of quantum fluctuations of magnetically ordered states which appear in a classical theory of square lattice antiferromagnets, in a spin-density wave mean field theory of the square lattice Hubbard model, and in a CP^{1} theory of spinons is presented.
Thermal and electrical transport in metals and superconductors across antiferromagnetic and topological quantum transitions
S. Chatterjee, S. Sachdev, A. Eberlein
We study thermal and electrical transport in metals and superconductors near a quantum phase transition where antiferromagnetic order disappears. The same theory can also be applied to quantum phase…
Emergent Gapless Fermions in Strongly-Correlated Phases of Matter and Quantum Critical Points
A. Thomson
States with gapless degrees of freedom are typically more complicated and less wellunderstood than systems possessing a gap. In this thesis, we study strongly-correlated systems described by gapless…
Thermal Hall effect in square-lattice spin liquids: A Schwinger boson mean-field study
R. Samajdar, S. Chatterjee, S. Sachdev, M. S. Scheurer
Motivated by recent transport measurements in high-${T}_{c}$ cuprate superconductors in a magnetic field, we study the thermal Hall conductivity in materials with topological order, focusing on the…
Triangular antiferromagnetism on the honeycomb lattice of twisted bilayer graphene
A. Thomson, S. Chatterjee, S. Sachdev, M. S. Scheurer
We present the electronic band structures of states with the same symmetry as the three-sublattice planar antiferromagnetic order of the triangular lattice. Such states can also be defined on the…
Thermodynamic signatures of quantum criticality in cuprate superconductors
B. Michon, C. Girod, +14 authors T. Klein
It is concluded that the pseudogap phase of cuprates ends at a quantum critical point, the associated fluctuations of which are probably involved in d-wave pairing and the anomalous scattering of charge carriers.
Broken rotational symmetry in the pseudogap phase of a high-Tc superconductor
R. Daou, J. Chang, +8 authors L. Taillefer
It is concluded that the pseudogap phase is an electronic state that strongly breaks four-fold rotational symmetry, which narrows the range of possible states considerably, pointing to stripe or nematic order.
Quantum criticality beyond the Landau-Ginzburg-Wilson paradigm
T. Senthil, L. Balents, S. Sachdev, A. Vishwanath, M. Fisher
We present the critical theory of a number of zero-temperature phase transitions of quantum antiferromagnets and interacting boson systems in two dimensions. The most important example is the…
Time-reversal symmetry breaking hidden order in Sr2(Ir,Rh)O4
Jaehong Jeong, Y. Sidis, A. Louat, V. Brouet, P. Bourges
A hidden magnetic order is reported in pure and doped Sr2(Ir,Rh)O4, distinct from the usual antiferromagnetic pseudospin ordering, and it is found that time-reversal symmetry is broken while the lattice translation invariance is preserved in the hidden order phase.
Quantum dimer model for the pseudogap metal
M. Punk, A. Allais, S. Sachdev
This model describes an exotic metal that is similar in many respects to simple metals like silver; however, the simple metallic character coexists with "topological order" and long-range quantum entanglement previously observed only in exotic insulators or fractional quantum Hall states in very high magnetic fields.
Evidence of an odd-parity hidden order in a spin–orbit coupled correlated iridate
L. Zhao, D. Torchinsky, +6 authors D. Hsieh
A rare combination of strong spin–orbit coupling and electron–electron correlations makes the iridate Mott insulator Sr_2IrO_4 a promising host for novel electronic phases of matter. The resemblance…
Spin density wave order, topological order, and Fermi surface reconstruction
S. Sachdev, E. Berg, S. Chatterjee, Y. Schattner
In the conventional theory of density wave ordering in metals, the onset of spin density wave (SDW) order coincides with the reconstruction of the Fermi surfaces into small ``pockets.'' We present…
Superconductivity from a confinement transition out of a fractionalized Fermi liquid with Z2 topological and Ising-nematic orders
S. Chatterjee, Yang Qi, S. Sachdev, Julia Steinberg
The Schwinger-boson theory of the frustrated square lattice antiferromagnet yields a stable, gapped $\mathbb{Z}_2$ spin liquid ground state with time-reversal symmetry, incommensurate spin…
Electronic liquid-crystal phases of a doped Mott insulator
S. Kivelson, E. Fradkin, V. J. Emery
The character of the ground state of an antiferromagnetic insulator is fundamentally altered following addition of even a small amount of charge. The added charge is concentrated into domain walls…
Intra-unit-cell electronic nematicity of the high-Tc copper-oxide pseudogap states
M. Lawler, K. Fujita, +8 authors Eun-Ah Kim
The determination of a quantitative order parameter representing intra-unit-cell nematicity, namely the breaking of rotational symmetry by the electronic structure within each CuO2 unit cell, is reported. | CommonCrawl